Juejin · AI · 6 hours ago
Building AI Agents with LangGraph (6): ReAct Agent Examples

 

This post shows how to use LangGraph to build an AI agent that can look up product information. By defining a `product_info` tool function, the agent can take a product name from the user's input, look it up in a predefined product catalog, and return the result. The code demonstrates how to initialize the model, define tools, create a ReAct agent graph, and run queries via `graph.invoke`. The agent correctly identifies the user's intent, calls the appropriate tool, and returns a response containing the product details.

🎯 **Product information retrieval**: The agent understands queries about products and calls the predefined `product_info` tool to fetch details. For example, when the user asks "介绍下新的iPhone 20" ("Tell me about the new iPhone 20"), the agent extracts the product name "iPhone 20" and looks it up in the built-in catalog.

🛠️ **Tool-calling mechanism**: The core of the agent is its ability to call tools. The `product_info` function simulates a real database lookup, returning the description that matches a product name. The agent is configured via `create_react_agent` and dynamically selects and invokes the most suitable tool based on the user's input.

🔄 **Multi-turn interaction**: The example shows the agent handling consecutive queries. After the first question about the iPhone 20, the agent uses the conversation context to understand the follow-up "tell me more about the iPhone 20" and calls the tool again, demonstrating coherence across turns.

⚠️ **Accuracy and knowledge boundaries**: When returning information, the agent flags the limits of its own knowledge. For the nonexistent "iPhone 20", it notes its knowledge cutoff and suggests that the user may mean the current latest model, which improves both accuracy and the user experience.
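Conceptually, a ReAct agent alternates between reasoning (the model decides whether to call a tool) and acting (the tool runs and its observation is fed back to the model). As a rough, framework-free sketch of the loop that `create_react_agent` automates — `MockModel` and `react_loop` below are hypothetical stand-ins for illustration, not LangGraph API:

```python
# A framework-free sketch of the ReAct loop that create_react_agent automates.
# MockModel is a hypothetical stand-in for the LLM: it "decides" to call a
# tool on the first turn and answers directly once it sees a tool result.

def product_info(product_name: str) -> str:
    catalog = {"MacBook": "The new MacBook has an M2 chip and a 14-inch Retina display."}
    return catalog.get(product_name, "Sorry, product not found.")

class MockModel:
    def invoke(self, messages):
        last = messages[-1]
        if last[0] == "tool":  # observation received -> produce a final answer
            return ("assistant", f"Here is what I found: {last[1]}")
        return ("tool_call", "product_info", "MacBook")  # otherwise request a tool call

def react_loop(model, tools, user_message, max_steps=5):
    messages = [("user", user_message)]
    for _ in range(max_steps):
        step = model.invoke(messages)
        if step[0] == "tool_call":          # act: run the requested tool
            _, name, arg = step
            observation = tools[name](arg)
            messages.append(("tool", observation))
        else:                               # reasoning produced a final answer
            messages.append(step)
            return messages
    return messages

history = react_loop(MockModel(), {"product_info": product_info}, "Tell me about the MacBook")
print(history[-1][1])
```

The real implementation differs in the details (structured tool-call messages, streaming, checkpointing), but the reason/act/observe cycle is the same.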

Example 2: Product information query

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from dotenv import load_dotenv
import os

load_dotenv()

llm = ChatOpenAI(model="qwen-max",
                 base_url=os.getenv("BASE_URL"),
                 api_key=os.getenv("OPENAI_API_KEY"),
                 streaming=True)

# tools
def product_info(product_name: str) -> str:
    """Fetch product information"""
    product_catalog = {
        'iPhone 20': 'The latest iPhone features an A15 chip and improved camera.',
        'MacBook': 'The new MacBook has an M2 chip and a 14-inch Retina display.',
    }
    print(f"调用工具 {product_name}")
    return product_catalog.get(product_name, 'Sorry, product not found.')

checkpointer = MemorySaver()
tools = [product_info]

# ReAct agent
graph = create_react_agent(model=llm, checkpointer=checkpointer, tools=tools, debug=False)

config = {"configurable": {"thread_id": "thread_1"}}

inputs = {"messages": [("user", "介绍下新的iPhone 20")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()

inputs = {"messages": [("user", "进一步介绍下iPhone 20")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()
```

The output is:

```
调用工具 iPhone 20
================================ Human Message =================================

介绍下新的iPhone 20
================================== Ai Message ==================================
Tool Calls:
  product_info (call_08b215e1a2c945108de654)
 Call ID: call_08b215e1a2c945108de654
  Args:
    product_name: iPhone 20
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone 20配备了A15芯片和改进的摄像头。请注意,这可能是指当前最新的iPhone型号,因为直到我的知识更新为止(2021年),并没有一款叫做iPhone 20的手机。如果您想了解的是另一个特定的产品,请提供更多信息。
调用工具 iPhone 20
================================ Human Message =================================

介绍下新的iPhone 20
================================== Ai Message ==================================
Tool Calls:
  product_info (call_08b215e1a2c945108de654)
 Call ID: call_08b215e1a2c945108de654
  Args:
    product_name: iPhone 20
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone 20配备了A15芯片和改进的摄像头。请注意,这可能是指当前最新的iPhone型号,因为直到我的知识更新为止(2021年),并没有一款叫做iPhone 20的手机。如果您想了解的是另一个特定的产品,请提供更多信息。
================================ Human Message =================================

进一步介绍下iPhone 20
================================== Ai Message ==================================
Tool Calls:
  product_info (call_d456afc25aac46e98e869a)
 Call ID: call_d456afc25aac46e98e869a
  Args:
    product_name: iPhone 20
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

再次确认,最新的iPhone配备了A15芯片以及升级的摄像头系统。不过,实际上并没有一款被正式命名为“iPhone 20”的产品。可能您指的是最新发布的iPhone型号。如果您需要更详细的信息,比如具体型号(如iPhone 13, iPhone 14等)、特色功能、设计变化或其他细节,请告知我,这样我可以为您提供更加准确的信息。

如果有其他特定的方面或者功能您感兴趣的话,也请告诉我!
```
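Note that the second `invoke` replays the entire first exchange: because both calls share `thread_id` "thread_1", the `MemorySaver` checkpointer restores the earlier messages and appends the new turn before the model runs. A minimal sketch of that per-thread accumulation, with a plain dict standing in for the checkpointer:

```python
# Sketch of per-thread message accumulation, as a MemorySaver-style
# checkpointer does it: a plain dict keyed by thread_id stands in here.
store = {}

def invoke(thread_id, new_messages):
    history = store.setdefault(thread_id, [])
    history.extend(new_messages)   # restore prior turns, append the new ones
    return list(history)           # the agent sees the full history each call

first = invoke("thread_1", [("user", "介绍下新的iPhone 20")])
second = invoke("thread_1", [("user", "进一步介绍下iPhone 20")])
print(len(first), len(second))  # the second call sees both turns
```

This is why `pretty_print` on the second result shows the first question and answer again before the follow-up.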

Example 3: Product query with a stock check

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from dotenv import load_dotenv
import os

load_dotenv()

llm = ChatOpenAI(model="qwen-max",
                 base_url=os.getenv("BASE_URL"),
                 api_key=os.getenv("OPENAI_API_KEY"),
                 streaming=True)

# tools
def product_info(product_name: str) -> str:
    """Fetch product information"""
    product_catalog = {
        'iPhone': 'The latest iPhone features an A15 chip and improved camera.',
        'MacBook': 'The new MacBook has an M2 chip and a 14-inch Retina display.',
    }
    print(f"调用工具product_info {product_name}")
    return product_catalog.get(product_name, 'Sorry, product not found.')

def check_stock(product_name: str) -> str:
    """Check product stock availability."""
    stock_data = {
        'iPhone': 'In stock.',
        'MacBook': 'Out of stock.',
    }
    print(f"调用工具check_stock {product_name}")
    return stock_data.get(product_name, 'Stock information unavailable.')

checkpointer = MemorySaver()
tools = [product_info, check_stock]

# ReAct agent
graph = create_react_agent(model=llm, checkpointer=checkpointer, tools=tools, debug=False)

config = {"configurable": {"thread_id": "thread_1"}}

inputs = {"messages": [("user", "我是小明,介绍下新的iPhone")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()

inputs = {"messages": [("user", "iPhone 有库存吗")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()

inputs = {"messages": [("user", "介绍下MacBook,另外还有库存吗")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()
```

The output is:

```
调用工具product_info iPhone
================================ Human Message =================================

我是小明,介绍下新的iPhone
================================== Ai Message ==================================
Tool Calls:
  product_info (call_7fe4fbbe6dc6426db484df)
 Call ID: call_7fe4fbbe6dc6426db484df
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone配备了A15芯片和改进的相机。
调用工具check_stock iPhone
================================ Human Message =================================

我是小明,介绍下新的iPhone
================================== Ai Message ==================================
Tool Calls:
  product_info (call_7fe4fbbe6dc6426db484df)
 Call ID: call_7fe4fbbe6dc6426db484df
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone配备了A15芯片和改进的相机。
================================ Human Message =================================

iPhone 有库存吗
================================== Ai Message ==================================
Tool Calls:
  check_stock (call_1b24d717931a4b42807943)
 Call ID: call_1b24d717931a4b42807943
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: check_stock

In stock.
================================== Ai Message ==================================

iPhone目前有库存。
调用工具product_info MacBook
调用工具check_stock MacBook
================================ Human Message =================================

我是小明,介绍下新的iPhone
================================== Ai Message ==================================
Tool Calls:
  product_info (call_7fe4fbbe6dc6426db484df)
 Call ID: call_7fe4fbbe6dc6426db484df
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone配备了A15芯片和改进的相机。
================================ Human Message =================================

iPhone 有库存吗
================================== Ai Message ==================================
Tool Calls:
  check_stock (call_1b24d717931a4b42807943)
 Call ID: call_1b24d717931a4b42807943
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: check_stock

In stock.
================================== Ai Message ==================================

iPhone目前有库存。
================================ Human Message =================================

介绍下MacBook,另外还有库存吗
================================== Ai Message ==================================
Tool Calls:
  product_info (call_a1a34117064840f0bda5c9)
 Call ID: call_a1a34117064840f0bda5c9
  Args:
    product_name: MacBook
================================= Tool Message =================================
Name: product_info

The new MacBook has an M2 chip and a 14-inch Retina display.
================================== Ai Message ==================================

新款MacBook配备了M2芯片和14英寸Retina显示屏。让我查一下库存情况。
Tool Calls:
  check_stock (call_5cd17f2a9a5041caba454d)
 Call ID: call_5cd17f2a9a5041caba454d
  Args:
    product_name: MacBook
================================= Tool Message =================================
Name: check_stock

Out of stock.
================================== Ai Message ==================================

目前MacBook没有库存。
```

Example 4: Multi-step reasoning and dynamic actions

```python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from dotenv import load_dotenv
from typing import TypedDict
import os

load_dotenv()

llm = ChatOpenAI(model="qwen-max",
                 base_url=os.getenv("BASE_URL"),
                 api_key=os.getenv("OPENAI_API_KEY"),
                 streaming=True)

class ReActAgentState(TypedDict):
    message: str
    action: str
    sub_action: str

def reasoning_node(state: ReActAgentState):
    query = state['message']
    if 'weather' in query:
        return {'action': 'fetch_weather'}
    elif 'news' in query:
        return {'action': 'fetch_news'}
    elif 'recommend' in query:
        return {'action': 'recommendation', 'sub_action': 'book'}
    else:
        return {'action': 'unknown'}

# tools
def weather_subgraph_node(state: ReActAgentState):
    # Simulating a weather tool call
    return {'message': 'The weather today is sunny.'}

def news_subgraph_node(state: ReActAgentState):
    # Simulating a news tool call
    return {'message': 'Here are the latest news headlines.'}

def recommendation_subgraph_node(state: ReActAgentState):
    if state.get('sub_action') == 'book':
        return {'message': "I recommend reading 'The Pragmatic Programmer'."}
    else:
        return {'message': 'I have no other recommendations at the moment.'}

# Weather subgraph
weather_subgraph_builder = StateGraph(ReActAgentState)
weather_subgraph_builder.add_node("weather_action", weather_subgraph_node)
weather_subgraph_builder.set_entry_point("weather_action")
weather_subgraph = weather_subgraph_builder.compile()

# News subgraph
news_subgraph_builder = StateGraph(ReActAgentState)
news_subgraph_builder.add_node("news_action", news_subgraph_node)
news_subgraph_builder.set_entry_point("news_action")
news_subgraph = news_subgraph_builder.compile()

# Recommendation subgraph
recommendation_subgraph_builder = StateGraph(ReActAgentState)
recommendation_subgraph_builder.add_node('recommendation_action', recommendation_subgraph_node)
recommendation_subgraph_builder.set_entry_point('recommendation_action')
recommendation_subgraph = recommendation_subgraph_builder.compile()

def reasoning_state_manager(state: ReActAgentState):
    if state['action'] == "fetch_weather":
        return weather_subgraph
    elif state['action'] == "fetch_news":
        return news_subgraph
    elif state['action'] == "recommendation":
        return recommendation_subgraph
    else:
        return None

# parent graph
parent_builder = StateGraph(ReActAgentState)
parent_builder.add_node("reasoning", reasoning_node)
parent_builder.add_node("action_dispatch", reasoning_state_manager)
parent_builder.add_edge(START, "reasoning")
parent_builder.add_edge("reasoning", "action_dispatch")

react_agent_graph = parent_builder.compile()

inputs_weather = {'message': 'What is the weather today?'}
result_weather = react_agent_graph.invoke(inputs_weather)
print(result_weather['message'])

inputs_news = {'message': 'Give me the latest news.'}
result_news = react_agent_graph.invoke(inputs_news)
print(result_news['message'])

inputs_recommendation = {'message': 'Can you recommend a good book?'}
result_recommendation = react_agent_graph.invoke(inputs_recommendation)
print(result_recommendation['message'])
```

Output:

```
The weather today is sunny.
Here are the latest news headlines.
I recommend reading 'The Pragmatic Programmer'.
```
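The pattern above — a reasoning step that picks an action label, and a dispatch step that routes to the matching handler — can also be viewed as a plain routing table. A framework-free sketch (the `handlers` dict and `run` helper are illustrative stand-ins for the compiled subgraphs and the parent graph, not LangGraph API):

```python
# Framework-free sketch of the reasoning -> dispatch pattern from Example 4:
# the reasoning step chooses an action label, and a routing table maps each
# label to its handler (standing in for the compiled subgraphs).

def reasoning_node(state):
    query = state["message"]
    if "weather" in query:
        return {"action": "fetch_weather"}
    elif "news" in query:
        return {"action": "fetch_news"}
    elif "recommend" in query:
        return {"action": "recommendation", "sub_action": "book"}
    return {"action": "unknown"}

handlers = {
    "fetch_weather": lambda s: {"message": "The weather today is sunny."},
    "fetch_news": lambda s: {"message": "Here are the latest news headlines."},
    "recommendation": lambda s: {"message": "I recommend reading 'The Pragmatic Programmer'."},
}

def run(message):
    state = {"message": message}
    state.update(reasoning_node(state))
    handler = handlers.get(state["action"], lambda s: {"message": "Sorry, I can't help with that."})
    state.update(handler(state))
    return state["message"]

print(run("What is the weather today?"))  # The weather today is sunny.
```

The dict lookup plays the same role as the `if/elif` chain in `reasoning_state_manager`, with an explicit fallback for unknown actions.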

Example 5: An advanced ReAct agent with multiple subgraphs and contextual memory

```python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from dotenv import load_dotenv
from typing import TypedDict
import os

load_dotenv()

llm = ChatOpenAI(model="qwen-max",
                 base_url=os.getenv("BASE_URL"),
                 api_key=os.getenv("OPENAI_API_KEY"),
                 streaming=True)

class ReActAgentState(TypedDict):
    message: str
    action: str
    sub_action: str
    memory: dict  # Memory of past interactions

def reasoning_node(state: ReActAgentState):
    query = state['message']
    past_interactions = state.get('memory', {})
    if 'weather' in query:
        return {'action': 'fetch_weather'}
    elif 'news' in query:
        return {'action': 'fetch_news'}
    elif 'recommend' in query:
        if past_interactions.get('favorite_genre') == 'science':
            return {'action': 'recommendation', 'sub_action': 'science_book'}
        else:
            return {'action': 'recommendation', 'sub_action': 'general_book'}
    else:
        return {'action': 'unknown'}

# tools
def weather_subgraph_node(state: ReActAgentState):
    # Simulating a weather tool call
    return {'message': 'The weather today is sunny.'}

def news_subgraph_node(state: ReActAgentState):
    # Simulating a news tool call
    return {'message': 'Here are the latest news headlines.'}

def general_recommendation_node(state: ReActAgentState):
    return {'message': "I recommend reading 'The Pragmatic Programmer'."}

def science_recommendation_node(state: ReActAgentState):
    return {'message': "Since you like science, I recommend 'A Brief History of Time' by Stephen Hawking."}

def update_memory_node(state: ReActAgentState):
    if "recommend" in state["message"]:
        state['memory']['favorite_genre'] = 'science'
    return state

# Weather subgraph
weather_subgraph_builder = StateGraph(ReActAgentState)
weather_subgraph_builder.add_node("weather_action", weather_subgraph_node)
weather_subgraph_builder.set_entry_point("weather_action")
weather_subgraph = weather_subgraph_builder.compile()

# News subgraph
news_subgraph_builder = StateGraph(ReActAgentState)
news_subgraph_builder.add_node("news_action", news_subgraph_node)
news_subgraph_builder.set_entry_point("news_action")
news_subgraph = news_subgraph_builder.compile()

# Recommendation subgraphs
recommendation_subgraph_builder = StateGraph(ReActAgentState)
recommendation_subgraph_builder.add_node('general_recommendation_node', general_recommendation_node)
recommendation_subgraph_builder.set_entry_point('general_recommendation_node')
general_recommendation_subgraph = recommendation_subgraph_builder.compile()

science_recommendation_builder = StateGraph(ReActAgentState)
science_recommendation_builder.add_node('science_recommendation_action', science_recommendation_node)
science_recommendation_builder.set_entry_point('science_recommendation_action')
science_recommendation_subgraph = science_recommendation_builder.compile()

# Memory-update subgraph
memory_update_builder = StateGraph(ReActAgentState)
memory_update_builder.add_node('update_memory_action', update_memory_node)
memory_update_builder.set_entry_point('update_memory_action')
memory_update_subgraph = memory_update_builder.compile()

def reasoning_state_manager(state: ReActAgentState):
    if state['action'] == "fetch_weather":
        return weather_subgraph
    elif state['action'] == "fetch_news":
        return news_subgraph
    elif state['action'] == "recommendation":
        if state['sub_action'] == 'science_book':
            return science_recommendation_subgraph
        else:
            return general_recommendation_subgraph
    else:
        return None

# parent graph
parent_builder = StateGraph(ReActAgentState)
parent_builder.add_node("reasoning", reasoning_node)
parent_builder.add_node("action_dispatch", reasoning_state_manager)
parent_builder.add_node('update_memory', memory_update_subgraph)
parent_builder.add_edge(START, "reasoning")
parent_builder.add_edge("reasoning", "action_dispatch")
parent_builder.add_edge('action_dispatch', 'update_memory')
react_agent_graph = parent_builder.compile()

inputs_weather = {'message': 'What is the weather today?', 'memory': {}}
result_weather = react_agent_graph.invoke(inputs_weather)
print(result_weather['message'])

inputs_recommendation_first = {'message': 'Can you recommend a good book?', 'memory': {}}
result_recommendation_first = react_agent_graph.invoke(inputs_recommendation_first)
print(result_recommendation_first['message'])

inputs_recommendation_second = {'message': 'Can you recommend another book?',
                                'memory': {'favorite_genre': 'science'}}
result_recommendation_second = react_agent_graph.invoke(inputs_recommendation_second)
print(result_recommendation_second['message'])
```

Output:

```
The weather today is sunny.
I recommend reading 'The Pragmatic Programmer'.
Since you like science, I recommend 'A Brief History of Time' by Stephen Hawking.
```

Example 6: A dynamic pricing agent

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from dotenv import load_dotenv
import os

load_dotenv()

model = ChatOpenAI(model="qwen-max",
                   base_url=os.getenv("BASE_URL"),
                   api_key=os.getenv("OPENAI_API_KEY"),
                   streaming=True)

# Mock APIs
def get_demand_data(product_id: str) -> dict:
    """Mock demand API to get demand data for a product."""
    return {"product_id": product_id, "demand_level": "high"}

def get_competitor_pricing(product_id: str) -> dict:
    """Mock competitor pricing API."""
    return {'product_id': product_id, 'competitor_price': 95.0}

tools = [get_demand_data, get_competitor_pricing]
graph = create_react_agent(model, tools=tools)

initial_messages = [
    ('system', '你是一个人工智能代理,能够根据市场需求和竞争对手的价格动态调整产品价格。'),
    ('user', "产品ID“12345”的价格应该定多少?")
]
inputs = {"messages": initial_messages}
for state in graph.stream(inputs, stream_mode="values"):
    message = state["messages"][-1]
    if isinstance(message, tuple):
        print(message)
    else:
        message.pretty_print()
```

Output:

```
================================ Human Message =================================

产品ID“12345”的价格应该定多少?
================================== Ai Message ==================================
Tool Calls:
  get_competitor_pricing (call_8212a1aad1044c488a9a53)
 Call ID: call_8212a1aad1044c488a9a53
  Args:
    product_id: 12345
================================= Tool Message =================================
Name: get_competitor_pricing

{"product_id": "12345", "competitor_price": 95.0}
================================== Ai Message ==================================
Tool Calls:
  get_demand_data (call_2450404f79774e85b1904a)
 Call ID: call_2450404f79774e85b1904a
  Args:
    product_id: 12345
================================= Tool Message =================================
Name: get_demand_data

{"product_id": "12345", "demand_level": "high"}
================================== Ai Message ==================================

根据市场情况,对于产品ID为“12345”的商品,竞争对手的定价是95.0元。同时,当前市场上该产品的需求水平较高。鉴于高需求以及竞争对手的价格,我们可以考虑将价格设定得略高于或等于95.0元以最大化利润,同时保持竞争力。确切的价格还需要考虑成本、品牌定位等因素。如果我们决定跟随竞争者的价格,那么可以将价格也设为95.0元;如果想要利用高需求来增加收益,并且确定顾客愿意为此支付更多,我们则可以适度提价。您希望如何继续?
```
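The model's final answer is free-form, but the rule it describes — anchor on the competitor's price and adjust upward when demand is high — could also be made deterministic and enforced outside the LLM. A hedged sketch of such a rule (the 10% markup and 5% discount are arbitrary illustrative choices, not values from the example):

```python
# Deterministic sketch of the pricing logic the agent reasons about:
# anchor on the competitor's price and adjust for demand. The 10% markup
# and 5% discount are illustrative assumptions, not from the example.

def suggest_price(competitor_price: float, demand_level: str) -> float:
    if demand_level == "high":
        return round(competitor_price * 1.10, 2)   # capture extra demand
    if demand_level == "low":
        return round(competitor_price * 0.95, 2)   # undercut to move stock
    return competitor_price                         # otherwise match the market

print(suggest_price(95.0, "high"))  # 104.5
```

In practice such a rule might run as a third tool, so the agent proposes a price and the business logic stays auditable.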

Example 7: A sentiment-analysis agent using an LLM and a custom sentiment tool

```python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain_core.messages import BaseMessage, ToolMessage, SystemMessage
from langchain_core.tools import tool
from langchain_core.runnables import RunnableConfig
from dotenv import load_dotenv
from typing import Annotated, Sequence, TypedDict
# pip install textblob
from textblob import TextBlob
import json
import os

load_dotenv()

model = ChatOpenAI(model="qwen-max",
                   base_url=os.getenv("BASE_URL"),
                   api_key=os.getenv("OPENAI_API_KEY"),
                   streaming=True)

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]

@tool
def analyze_sentiment(feedback: str) -> str:
    """Analyze customer feedback sentiment with custom logic."""
    analysis = TextBlob(feedback)
    if analysis.sentiment.polarity > 0.5:
        return "positive"
    elif analysis.sentiment.polarity == 0.5:
        return "neutral"
    else:
        return "negative"

@tool
def respond_based_on_sentiment(sentiment: str) -> str:
    """Respond to the customer based on the analyzed sentiment."""
    if sentiment == 'positive':
        return '感谢您的积极反馈!'
    elif sentiment == 'neutral':
        return '我们感谢您的反馈。'
    else:
        return "我们很遗憾听到你不满意。我们如何提供帮助?"

tools = [analyze_sentiment, respond_based_on_sentiment]
llm = model.bind_tools(tools)
tools_by_name = {tool.name: tool for tool in tools}

def tool_node(state: AgentState):
    outputs = []
    for tool_call in state['messages'][-1].tool_calls:
        tool_result = tools_by_name[tool_call['name']].invoke(tool_call['args'])
        outputs.append(
            ToolMessage(content=json.dumps(tool_result),
                        name=tool_call['name'],
                        tool_call_id=tool_call['id']))
    return {'messages': outputs}

def call_model(state: AgentState, config: RunnableConfig):
    system_prompt = SystemMessage(content='你是一位乐于助人的助手,负责回复客户反馈。')
    response = llm.invoke([system_prompt] + state['messages'], config)
    return {'messages': [response]}

def should_continue(state: AgentState):
    last_message = state['messages'][-1]
    # If there is no tool call, then we finish
    if not last_message.tool_calls:
        return 'end'
    else:
        return 'continue'

workflow = StateGraph(AgentState)
workflow.add_node('agent', call_model)
workflow.add_node('tools', tool_node)
workflow.set_entry_point('agent')
workflow.add_conditional_edges(
    'agent',
    should_continue,
    {'continue': 'tools', 'end': END},
)
workflow.add_edge('tools', 'agent')
graph = workflow.compile()

def print_stream(stream):
    for s in stream:
        message = s['messages'][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()

initial_state = {'messages': [('user', '产品很好,但交货很差。')]}
print_stream(graph.stream(initial_state, stream_mode='values'))
```

Output:

```
================================== Ai Message ==================================

我们很遗憾听到你不满意。我们如何提供帮助?
```
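The thresholds in `analyze_sentiment` are worth noting: only a polarity strictly above 0.5 counts as positive, exactly 0.5 is neutral, and everything else — including mildly positive feedback — falls into the negative bucket, which is why the mixed review above ("the product is good, but delivery was poor") gets the apologetic reply. The same thresholding, sketched without TextBlob so the polarity score is supplied directly:

```python
# The same thresholding as analyze_sentiment, with the polarity score
# passed in directly instead of being computed by TextBlob.

def classify(polarity: float) -> str:
    if polarity > 0.5:
        return "positive"
    elif polarity == 0.5:
        return "neutral"
    else:
        return "negative"

# Mixed feedback tends to score near zero, so under these thresholds
# even mildly positive text lands in the "negative" bucket.
print(classify(0.8), classify(0.5), classify(0.1))  # positive neutral negative
```

A more forgiving scheme might treat a whole band (say, -0.1 to 0.1) as neutral rather than the single value 0.5.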

Example 8: A personalized product-recommendation agent using an LLM and memory

```python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain_core.messages import BaseMessage, ToolMessage, SystemMessage
from langchain_core.tools import tool
from langchain_core.runnables import RunnableConfig
from dotenv import load_dotenv
from typing import Annotated, Sequence, TypedDict
import json
import os

load_dotenv()

model = ChatOpenAI(model="qwen-max",
                   base_url=os.getenv("BASE_URL"),
                   api_key=os.getenv("OPENAI_API_KEY"),
                   streaming=True)

class RecommendationState(TypedDict):
    user_id: str          # user identifier
    preference: str       # the user's current preference (e.g. genre, category)
    reasoning: str        # reasoning trace from the LLM
    recommendation: str   # final product recommendation
    memory: dict          # per-user memory of stored preferences
    messages: Annotated[Sequence[BaseMessage], add_messages]

@tool
def recommend_product(preference: str) -> str:
    """Recommend a product based on the user's preferences."""
    product_db = {
        'science': "I recommend 'A Brief History of Time' by Stephen Hawking.",
        'technology': "I recommend 'The Innovators' by Walter Isaacson.",
        'fiction': "I recommend 'The Alchemist' by Paulo Coelho."
    }
    return product_db.get(preference, 'I recommend exploring our latest products!')

tools = [recommend_product]
tools_by_name = {tool.name: tool for tool in tools}
llm = model.bind_tools(tools)

def update_memory(state: RecommendationState):
    state["memory"][state['user_id']] = state["preference"]
    return state

def tool_node(state: RecommendationState):
    outputs = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
        outputs.append(
            ToolMessage(
                content=json.dumps(tool_result),
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    return {"messages": outputs}

def call_model(state: RecommendationState, config: RunnableConfig):
    system_prompt = SystemMessage(
        content="You are a helpful assistant for recommending a product based on the user's preference."
    )
    response = llm.invoke([system_prompt] + state["messages"] + [("user", state["preference"])], config)
    return {"messages": [response]}

def should_continue(state: RecommendationState):
    last_message = state["messages"][-1]
    # If there is no tool call, then we finish
    if not last_message.tool_calls:
        return "end"
    else:
        return "continue"

workflow = StateGraph(RecommendationState)
workflow.add_node('agent', call_model)
workflow.add_node('tools', tool_node)
workflow.add_node('update_memory', update_memory)
workflow.set_entry_point('agent')
workflow.add_conditional_edges(
    'agent',
    should_continue,
    {'continue': 'tools', 'end': END},
)
workflow.add_edge('tools', 'update_memory')
workflow.add_edge('update_memory', 'agent')
graph = workflow.compile()

initial_state = {
    "messages": [("user", "I'm looking for a book.")],
    "user_id": "user1",
    "preference": "science",
    "memory": {},
    "reasoning": ""
}
result = graph.invoke(initial_state)

print(f"Reasoning: {result['reasoning']}")
print(f"Product Recommendation: {result['messages'][-1].content}")
print(f"Updated Memory: {result['memory']}")

# Helper function to print the conversation
def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()

# Run the agent
print_stream(graph.stream(initial_state, stream_mode="values"))
```

Output:

```
Reasoning: 
Product Recommendation: I recommend 'A Brief History of Time' by Stephen Hawking. It's a great read for anyone interested in science, particularly in the areas of cosmology and theoretical physics.
Updated Memory: {'user1': 'science'}
================================ Human Message =================================

I'm looking for a book.
================================== Ai Message ==================================
Tool Calls:
  recommend_product (call_6b6a242cc9e24557a89db8)
 Call ID: call_6b6a242cc9e24557a89db8
  Args:
    preference: science
================================= Tool Message =================================
Name: recommend_product

"I recommend 'A Brief History of Time' by Stephen Hawking."
================================= Tool Message =================================
Name: recommend_product

"I recommend 'A Brief History of Time' by Stephen Hawking."
================================== Ai Message ==================================

I recommend 'A Brief History of Time' by Stephen Hawking. It's a great book that explores complex scientific concepts in an accessible way. Would you like more recommendations or information on this book?
```
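The `tool_node` in this example dispatches each requested call by looking the tool up in `tools_by_name`, invoking it with the call's arguments, and wrapping the result as a JSON-serialized tool message tagged with the originating call ID. A framework-free sketch of that dispatch step (plain dicts stand in for the LangChain message classes):

```python
import json

# Framework-free sketch of the tool_node dispatch from Example 8: look each
# requested tool up by name, invoke it with its args, and wrap the result as
# a JSON-serialized "tool message". Plain dicts stand in for ToolMessage.

def recommend_product(preference: str) -> str:
    product_db = {
        "science": "I recommend 'A Brief History of Time' by Stephen Hawking.",
    }
    return product_db.get(preference, "I recommend exploring our latest products!")

tools_by_name = {"recommend_product": recommend_product}

def tool_node(tool_calls):
    outputs = []
    for call in tool_calls:
        result = tools_by_name[call["name"]](**call["args"])
        outputs.append({
            "type": "tool",
            "name": call["name"],
            "tool_call_id": call["id"],   # lets the model match result to request
            "content": json.dumps(result),
        })
    return outputs

msgs = tool_node([{"name": "recommend_product", "args": {"preference": "science"}, "id": "call_1"}])
print(msgs[0]["content"])
```

Echoing the `tool_call_id` back is what lets the model pair each observation with the call that produced it, which matters when several tools run in one step.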
