Add memory

The chatbot can now use tools to answer user questions, but it does not remember the context of previous interactions. This limits its ability to hold coherent, multi-turn conversations.

LangGraph solves this problem through persistent checkpointing. If you provide a checkpointer when compiling the graph and a thread_id when calling the graph, LangGraph automatically saves the state after each step. When you invoke the graph again with the same thread_id, the graph loads its saved state, allowing the chatbot to pick up where it left off.

We will see later that checkpointing is much more powerful than simple chat memory: it lets you save and resume complex state at any time, for error recovery, human-in-the-loop workflows, time-travel interactions, and more. But before we get ahead of ourselves, let's add checkpointing to enable multi-turn conversations.
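To build intuition for what the checkpointer does before wiring it into the graph, here is a deliberately simplified sketch in plain Python. It is NOT the real LangGraph API (the real InMemorySaver stores versioned checkpoints per step); it only shows the core idea: state is saved under a thread_id after each step, and a later call with the same thread_id resumes from the saved state.

```python
# Toy illustration of per-thread checkpointing (hypothetical, not LangGraph's API).
class ToyCheckpointer:
    def __init__(self):
        self._store = {}  # thread_id -> saved state (a list of messages)

    def load(self, thread_id):
        # Return the previously saved state for this thread, or a fresh one.
        return list(self._store.get(thread_id, []))

    def save(self, thread_id, state):
        self._store[thread_id] = list(state)


def invoke(checkpointer, thread_id, user_message):
    # Load prior state, run one "step", then checkpoint the new state.
    messages = checkpointer.load(thread_id)
    messages.append(("user", user_message))
    messages.append(("ai", f"echo: {user_message}"))
    checkpointer.save(thread_id, messages)
    return messages


memory = ToyCheckpointer()
invoke(memory, "1", "Hi there! My name is Will.")
history = invoke(memory, "1", "Remember my name?")  # same thread: resumes, 4 messages
other = invoke(memory, "2", "Remember my name?")    # new thread: starts fresh, 2 messages
```

The pattern below is exactly this, with LangGraph supplying the checkpointer and the "step" being your graph's nodes.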

Note

This tutorial builds on Add tools.

1. Create an InMemorySaver checkpointer

Create an InMemorySaver checkpointer:

API Reference: InMemorySaver

from langgraph.checkpoint.memory import InMemorySaver

memory = InMemorySaver()

This is an in-memory checkpointer, which is convenient for this tutorial. In a production application, you would likely switch to SqliteSaver or PostgresSaver and connect a database.
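To see why a database-backed checkpointer matters, here is a minimal sketch of durable checkpointing using only the stdlib sqlite3 module. This is NOT the real SqliteSaver API (which ships in the langgraph-checkpoint-sqlite package and stores full versioned checkpoints); it only demonstrates the property you gain in production: the latest state per thread_id lives in a database file, so it survives process restarts.

```python
# Toy durable checkpointer (hypothetical, not LangGraph's SqliteSaver).
import json
import sqlite3


class ToySqliteCheckpointer:
    def __init__(self, path=":memory:"):
        # Pass a file path (e.g. "checkpoints.db") to persist across restarts.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS checkpoints "
            "(thread_id TEXT PRIMARY KEY, state TEXT)"
        )

    def save(self, thread_id, state):
        # Upsert: keep only the latest state per thread.
        self.conn.execute(
            "INSERT INTO checkpoints (thread_id, state) VALUES (?, ?) "
            "ON CONFLICT(thread_id) DO UPDATE SET state = excluded.state",
            (thread_id, json.dumps(state)),
        )
        self.conn.commit()

    def load(self, thread_id):
        row = self.conn.execute(
            "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
        ).fetchone()
        return json.loads(row[0]) if row else []


saver = ToySqliteCheckpointer()
saver.save("1", [{"role": "user", "content": "Hi there!"}])
```

The real SqliteSaver and PostgresSaver follow the same shape: you construct them with a connection and pass them to `compile(checkpointer=...)` exactly as with InMemorySaver.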

2. Compile the graph

Compile the graph with the provided checkpointer, which will checkpoint the State as the graph works through each node:

graph = graph_builder.compile(checkpointer=memory)

3. Interact with your chatbot

Now you can interact with your bot!

  1. Pick a thread to use as the key for this conversation.

    config = {"configurable": {"thread_id": "1"}}
    
  2. Call your chatbot:

    user_input = "Hi there! My name is Will."
    
    # The config is the **second positional argument** to stream() or invoke()!
    events = graph.stream(
        {"messages": [{"role": "user", "content": user_input}]},
        config,
        stream_mode="values",
    )
    for event in events:
        event["messages"][-1].pretty_print()
    
    ================================ Human Message =================================
    
    Hi there! My name is Will.
    ================================== Ai Message ==================================
    
    Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?
    

    Note

    The config was provided as the second positional argument when calling our graph. Importantly, it is not nested within the graph inputs ({'messages': []}).
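The calling convention described in the note can be sketched with a hypothetical stand-in for the stream method (this is only the calling shape, not LangGraph's actual implementation):

```python
# Hypothetical stand-in showing the calling shape: graph input first,
# config second (positional), options keyword-only.
def stream(graph_input, config=None, *, stream_mode="values"):
    thread_id = (config or {}).get("configurable", {}).get("thread_id")
    return {"input": graph_input, "thread_id": thread_id, "mode": stream_mode}


result = stream(
    {"messages": [{"role": "user", "content": "Hi"}]},  # graph input
    {"configurable": {"thread_id": "1"}},               # config: 2nd positional arg
    stream_mode="values",
)
```

If the config were mistakenly nested inside the input dict, the checkpointer would see no thread_id and could not associate the call with a saved conversation.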

4. Ask a follow up question

Ask a follow up question:

user_input = "Remember my name?"

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================

Remember my name?
================================== Ai Message ==================================

Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.

Notice that we aren't using an external list to store memory: it's all handled by the checkpointer! You can inspect the full execution in this LangSmith trace.

Don't believe me? Try this using a different config.

# The only difference is we change the `thread_id` here from "1" to "2"
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================

Remember my name?
================================== Ai Message ==================================

I apologize, but I don't have any previous context or memory of your name. As an AI assistant, I don't retain information from past conversations. Each interaction starts fresh. Could you please tell me your name so I can address you properly in this conversation?

Notice that the only change we've made is to modify the thread_id in the config. See this call's LangSmith trace for comparison.

5. Inspect the state

By now, we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call get_state(config):

snapshot = graph.get_state(config)
snapshot
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', additional_kwargs={}, response_metadata={}, id='8c1ca919-c553-4ebf-95d4-b59a2d61e078'), AIMessage(content="Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?", additional_kwargs={}, response_metadata={'id': 'msg_01WTQebPhNwmMrmmWojJ9KXJ', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 405, 'output_tokens': 32}}, id='run-58587b77-8c82-41e6-8a90-d62c444a261d-0', usage_metadata={'input_tokens': 405, 'output_tokens': 32, 'total_tokens': 437}), HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='daba7df6-ad75-4d6b-8057-745881cea1ca'), AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-93e0-6acc-8004-f2ac846575d2'}}, metadata={'source': 'loop', 'writes': {'chatbot': {'messages': [AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? 
I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}}, 'step': 4, 'parents': {}}, created_at='2024-09-27T19:30:10.820758+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-859f-6206-8003-e1bd3c264b8f'}}, tasks=())
snapshot.next  # (since the graph ended this turn, `next` is empty. If you fetch a state from within a graph invocation, the next value tells you which node will execute next.)

The snapshot above contains the current state values, the corresponding config, and the next node to process. In our case, the graph has reached the END state, so next is empty.
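To make the snapshot structure concrete, here is a hypothetical miniature with the same three fields discussed above (values, next, config). It is a sketch, not the real StateSnapshot class, which carries additional fields like metadata, created_at, and parent_config:

```python
# Hypothetical miniature of a state snapshot (not LangGraph's StateSnapshot).
from typing import Any, NamedTuple


class ToySnapshot(NamedTuple):
    values: dict      # the current state, e.g. the accumulated messages
    next: tuple       # nodes that will execute next; empty once the graph ends
    config: dict      # the config (including thread_id) this snapshot belongs to


# A finished turn: the graph reached END, so `next` is empty.
finished = ToySnapshot(
    values={"messages": ["Hi there! My name is Will.", "Hello Will!"]},
    next=(),
    config={"configurable": {"thread_id": "1"}},
)

# A snapshot captured mid-run would instead name the pending node.
mid_run = ToySnapshot(
    values={"messages": ["Hi"]},
    next=("chatbot",),
    config={"configurable": {"thread_id": "1"}},
)
```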

Congratulations! Your chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpointing system. This opens up exciting possibilities for more natural, contextual interactions. LangGraph's checkpointing even handles arbitrarily complex graph states, which is far more expressive and powerful than simple chat memory.

The following code snippet reviews the graph from this tutorial:

pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["OPENAI_API_KEY"] = "sk-..."

llm = init_chat_model("openai:gpt-4.1")

👉 Read the OpenAI integration docs

pip install -U "langchain[anthropic]"
import os
from langchain.chat_models import init_chat_model

os.environ["ANTHROPIC_API_KEY"] = "sk-..."

llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")

👉 Read the Anthropic integration docs

pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"

llm = init_chat_model(
    "azure_openai:gpt-4.1",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)

👉 Read the Azure integration docs

pip install -U "langchain[google-genai]"
import os
from langchain.chat_models import init_chat_model

os.environ["GOOGLE_API_KEY"] = "..."

llm = init_chat_model("google_genai:gemini-2.0-flash")

👉 Read the Google GenAI integration docs

pip install -U "langchain[aws]"
from langchain.chat_models import init_chat_model

# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html

llm = init_chat_model(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_provider="bedrock_converse",
)

👉 Read the AWS Bedrock integration docs

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = InMemorySaver()
graph = graph_builder.compile(checkpointer=memory)

Next steps

In the next tutorial, you will add human-in-the-loop to the chatbot to handle situations where it may need guidance or verification before proceeding.