
Add tools

To handle queries your chatbot can't answer "from memory", integrate a web search tool. The chatbot can use this tool to find relevant information and provide better responses.

Note

This tutorial builds on Build a basic chatbot.

Prerequisites

Before you start this tutorial, make sure you have the following:

An API key for the Tavily Search Engine.

1. Install the search engine

Install the requirements to use the Tavily Search Engine:

pip install -U langchain-tavily

2. Configure your environment

Configure your environment with your search engine API key:

import os

os.environ["TAVILY_API_KEY"] = "tvly-..."

3. Define the tool

Define the web search tool:

API Reference: TavilySearch

from langchain_tavily import TavilySearch

tool = TavilySearch(max_results=2)
tools = [tool]
tool.invoke("What's a 'node' in LangGraph?")

The results are page summaries our chatbot can use to answer questions:

{'query': "What's a 'node' in LangGraph?",
'follow_up_questions': None,
'answer': None,
'images': [],
'results': [{'title': "Introduction to LangGraph: A Beginner's Guide - Medium",
'url': 'https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141',
'content': 'Stateful Graph: LangGraph revolves around the concept of a stateful graph, where each node in the graph represents a step in your computation, and the graph maintains a state that is passed around and updated as the computation progresses. LangGraph supports conditional edges, allowing you to dynamically determine the next node to execute based on the current state of the graph. We define nodes for classifying the input, handling greetings, and handling search queries. def classify_input_node(state): LangGraph is a versatile tool for building complex, stateful applications with LLMs. By understanding its core concepts and working through simple examples, beginners can start to leverage its power for their projects. Remember to pay attention to state management, conditional edges, and ensuring there are no dead-end nodes in your graph.',
'score': 0.7065353,
'raw_content': None},
{'title': 'LangGraph Tutorial: What Is LangGraph and How to Use It?',
'url': 'https://www.datacamp.com/tutorial/langgraph-tutorial',
'content': 'LangGraph is a library within the LangChain ecosystem that provides a framework for defining, coordinating, and executing multiple LLM agents (or chains) in a structured and efficient manner. By managing the flow of data and the sequence of operations, LangGraph allows developers to focus on the high-level logic of their applications rather than the intricacies of agent coordination. Whether you need a chatbot that can handle various types of user requests or a multi-agent system that performs complex tasks, LangGraph provides the tools to build exactly what you need. LangGraph significantly simplifies the development of complex LLM applications by providing a structured framework for managing state and coordinating agent interactions.',
'score': 0.5008063,
'raw_content': None}],
'response_time': 1.38}

4. Define the graph

For the StateGraph you created in the first tutorial, add bind_tools to the LLM. This lets the LLM know the correct JSON format to use if it wants to use the search engine.

First, let's select our LLM:

pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["OPENAI_API_KEY"] = "sk-..."

llm = init_chat_model("openai:gpt-4.1")

👉 Read the OpenAI integration docs

pip install -U "langchain[anthropic]"
import os
from langchain.chat_models import init_chat_model

os.environ["ANTHROPIC_API_KEY"] = "sk-..."

llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")

👉 Read the Anthropic integration docs

pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"

llm = init_chat_model(
    "azure_openai:gpt-4.1",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)

👉 Read the Azure integration docs

pip install -U "langchain[google-genai]"
import os
from langchain.chat_models import init_chat_model

os.environ["GOOGLE_API_KEY"] = "..."

llm = init_chat_model("google_genai:gemini-2.0-flash")

👉 Read the Google GenAI integration docs

pip install -U "langchain[aws]"
from langchain.chat_models import init_chat_model

# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html

llm = init_chat_model(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_provider="bedrock_converse",
)

👉 Read the AWS Bedrock integration docs

Now we can incorporate it into the StateGraph:

API Reference: StateGraph | START | END | add_messages

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
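
To see what bind_tools changes, you can invoke the bound model directly. If the model decides to search, the returned message carries structured tool_calls rather than a final answer. This is just a quick check outside the graph; the exact tool name, arguments, and IDs will vary by provider:

message = llm_with_tools.invoke("What's a 'node' in LangGraph?")
# If the model chose to call the search tool, tool_calls is populated, e.g.
# [{'name': 'tavily_search', 'args': {'query': '...'}, 'id': '...', 'type': 'tool_call'}]
print(message.tool_calls)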


5. Create a function to run the tools

Now, create a function to run the tools if they are called. Do this by adding the tools to a new node called BasicToolNode that checks the most recent message in the state and calls the tools if the message contains tool_calls. It relies on the LLM's tool_calling support, which is available in Anthropic, OpenAI, Google Gemini, and a number of other LLM providers.

API Reference: ToolMessage

import json

from langchain_core.messages import ToolMessage

class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}

tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
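
As a sanity check (a hypothetical snippet, not part of the tutorial), you can feed the node an AIMessage with a hand-written tool call; it should return a ToolMessage containing the search results. Note this actually calls the Tavily API, so your key must be set:

from langchain_core.messages import AIMessage

# A fabricated tool call, in the same dict shape the LLM would produce
fake_ai_message = AIMessage(
    content="",
    tool_calls=[
        {"name": tool.name, "args": {"query": "LangGraph nodes"}, "id": "call_1", "type": "tool_call"}
    ],
)
print(tool_node({"messages": [fake_ai_message]}))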

Note

If you do not want to build this yourself later, you can use LangGraph's prebuilt ToolNode.
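
For reference, the prebuilt version is a drop-in replacement for the BasicToolNode definition above (full usage appears in step 9 below):

from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)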


6. Define the conditional_edges

With the tool node added, you can now define the conditional_edges.

Edges route the control flow from one node to the next. Conditional edges start from a single node and usually contain "if" statements to route to different nodes depending on the current graph state. These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.

Next, define a router function called route_tools that checks for tool_calls in the chatbot's output. Provide this function to the graph by calling add_conditional_edges, which tells the graph that whenever the chatbot node completes, it should check this function to see where to go next.

The condition will route to tools if tool calls are present, and to END if not. Because the condition can return END, you do not need to explicitly set a finish_point this time.

def route_tools(
    state: State,
):
    """
    在 conditional_edge 中使用,如果最后一条消息有工具调用,则路由到 ToolNode。
    否则,路由到结束。
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"在 tool_edge 的输入状态中未找到消息: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END

# The `route_tools` function returns "tools" if the chatbot asks to use a tool, and END
# if it is fine responding directly. This conditional routing defines the main agent loop.
graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    # The following dictionary lets you tell the graph to interpret the condition's outputs
    # as a specific node. It defaults to the identity function, but if you
    # want to use a node named something other than "tools",
    # you can update the value of the dictionary to something else,
    # e.g., "tools": "my_tools"
    {"tools": "tools", END: END},
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()

Note

You can replace this with the prebuilt tools_condition to be more concise.
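
A minimal sketch of that swap (the full version appears in step 9):

from langgraph.prebuilt import tools_condition

graph_builder.add_conditional_edges("chatbot", tools_condition)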

7. Visualize the graph (optional)

You can visualize the graph using the get_graph method and one of the "draw" methods, like draw_ascii or draw_png. Each of the draw methods requires additional dependencies.

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

chatbot-with-tools-diagram
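
If the Mermaid/PNG dependencies aren't available, draw_ascii is a lighter-weight alternative; note it relies on the grandalf package being installed:

# pip install grandalf
print(graph.get_graph().draw_ascii())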

8. Ask the bot questions

Now you can ask the chatbot questions outside its training data:

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
Assistant: [{'text': "To provide you with accurate and up-to-date information about LangGraph, I'll need to search for the latest details. Let me do that for you.", 'type': 'text'}, {'id': 'toolu_01Q588CszHaSvvP2MxRq9zRD', 'input': {'query': 'LangGraph AI tool information'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Assistant: [{"url": "https://www.langchain.com/langgraph", "content": "LangGraph sets the foundation for how we can build and scale AI workloads \u2014 from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. The next chapter in building complex production-ready features with LLMs is agentic, and with LangGraph and LangSmith, LangChain delivers an out-of-the-box solution ..."}, {"url": "https://github.com/langchain-ai/langgraph", "content": "Overview. LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures ..."}]
Assistant: Based on the search results, I can provide you with information about LangGraph:

1. Purpose:
   LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows.

2. Developer:
   LangGraph is developed by LangChain, a company known for its tools and frameworks in the AI and LLM space.

3. Key Features:
   - Cycles: LangGraph allows the definition of flows that involve cycles, which is essential for most agentic architectures.
   - Controllability: It offers enhanced control over the application flow.
   - Persistence: The library provides ways to maintain state and persistence in LLM-based applications.

4. Use Cases:
   LangGraph can be used for various applications, including:
   - Conversational agents
   - Complex task automation
   - Custom LLM-backed experiences

5. Integration:
   LangGraph works in conjunction with LangSmith, another tool by LangChain, to provide an out-of-the-box solution for building complex, production-ready features with LLMs.

6. Significance:
...
   LangGraph is noted to offer unique benefits compared to other LLM frameworks, particularly in its ability to handle cycles, provide controllability, and maintain persistence.

LangGraph appears to be a significant tool in the evolving landscape of LLM-based application development, offering developers new ways to create more complex, stateful, and interactive AI systems.
Goodbye!


9. Use prebuilts

For ease of use, adjust your code to replace the following with LangGraph prebuilt components, which have built-in functionality like parallel API execution:

BasicToolNode is replaced with the prebuilt ToolNode
route_tools is replaced with the prebuilt tools_condition

pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["OPENAI_API_KEY"] = "sk-..."

llm = init_chat_model("openai:gpt-4.1")

👉 Read the OpenAI integration docs

pip install -U "langchain[anthropic]"
import os
from langchain.chat_models import init_chat_model

os.environ["ANTHROPIC_API_KEY"] = "sk-..."

llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")

👉 Read the Anthropic integration docs

pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"

llm = init_chat_model(
    "azure_openai:gpt-4.1",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)

👉 Read the Azure integration docs

pip install -U "langchain[google-genai]"
import os
from langchain.chat_models import init_chat_model

os.environ["GOOGLE_API_KEY"] = "..."

llm = init_chat_model("google_genai:gemini-2.0-flash")

👉 Read the Google GenAI integration docs

pip install -U "langchain[aws]"
from langchain.chat_models import init_chat_model

# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html

llm = init_chat_model(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_provider="bedrock_converse",
)

👉 Read the AWS Bedrock integration docs

from typing import Annotated

from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()
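
To try the prebuilt version, you can reuse the same input shape as in step 8. A minimal one-off invocation (the query here is just an example):

result = graph.invoke(
    {"messages": [{"role": "user", "content": "What do you know about LangGraph?"}]}
)
print(result["messages"][-1].content)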

Congratulations! You've created a conversational agent in LangGraph that can use a search engine to retrieve updated information when needed. Now it can handle a wider range of user queries.

To inspect all the steps your agent just took, check out this LangSmith trace.

Next steps

The chatbot cannot remember past interactions on its own, which limits its ability to hold coherent, multi-turn conversations. In the next part, you'll add memory to address this.