This article shows how to use LangGraph to build a basic agent executor. The main steps are:

1. Define the tools.

2. Create the LangChain agent (made up of three parts: an LLM, tools, and a prompt).

3. Define the graph state. The state of a traditional LangChain agent has several attributes:
(1) input: the main request from the user, passed in as an input string.
(2) chat_history: the previous messages in the conversation, also passed in as input.
(3) intermediate_steps: the list of actions the agent has taken so far together with the corresponding observations; it is updated on every iteration of the agent.
(4) agent_outcome: the agent's response, which is either an AgentAction or an AgentFinish. When it is an AgentFinish, the AgentExecutor should finish; otherwise it should call the requested tool.

4. Define the nodes. We now need to define several nodes in the graph. In langgraph, a node can be either a function or a runnable. We need two main nodes:
(1) The agent: decides what action (if any) to take.
(2) A function that invokes tools: if the agent decides to take an action, this node executes it.

5. Define the edges. Some of these edges may be conditional. They are conditional because, based on a node's output, one of several paths may be taken; which path is taken is not known until the node runs (the LLM decides).
(1) Conditional edge: after the agent is called,
a. if the agent says to take an action, the function that invokes the tools should be called;
b. if the agent says it is finished, the run should finish.
(2) Normal edge: after a tool is invoked, control always returns to the agent to decide what to do next.

6. Compile the graph.
The code implementation is as follows:
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain_openai.chat_models import ChatOpenAI
import os
os.environ["OPENAI_API_KEY"] = "sk-XXXXXXXXXX"
os.environ["SERPAPI_API_KEY"] = "XXXXXXXXXXXXXXXXXXXXX"
from langchain.agents.tools import Tool
from langchain_community.utilities import SerpAPIWrapper
search = SerpAPIWrapper()
search_tool = Tool(
    name="Search",
    func=search.run,
    description="useful for when you need to answer questions about current events",
)
tools = [search_tool]

#### Create the LangChain agent
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", streaming=True)
# Construct the OpenAI Functions agent
agent_runnable = create_openai_functions_agent(llm, tools, prompt)
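# Note (illustrative, not part of the original example): `agent_runnable` is a runnable
# that, given the current state ("input", optional "chat_history", and the
# "intermediate_steps" taken so far), returns either an AgentAction (a tool call to make)
# or an AgentFinish (the final answer). The graph below relies on this, e.g.:
# step = agent_runnable.invoke({"input": "hi", "chat_history": [], "intermediate_steps": []})
# isinstance(step, (AgentAction, AgentFinish))  # expected to be True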
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(agent=agent_runnable, tools=tools)
response = agent_executor.invoke({"input": "weather in San Francisco"})
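# `response` is a plain dict returned by the traditional AgentExecutor; it normally
# carries the original "input" plus an "output" key with the agent's final answer.
# Printed here (added for illustration) to sanity-check the agent before building the graph:
print(response)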
#### Define the graph state
from typing import TypedDict, Annotated, List, Union
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
import operator
class AgentState(TypedDict):
    # The input string
    input: str
    # The list of previous messages in the conversation
    chat_history: list[BaseMessage]
    # The outcome of a given call to the agent
    # Needs None as a valid type, since this is what this will start as
    agent_outcome: Union[AgentAction, AgentFinish, None]
    # List of actions and corresponding observations
    # Here we annotate this with operator.add to indicate that operations to
    # this state should be ADDED to the existing values (not overwrite it)
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]

#### Define the nodes
from langchain_core.agents import AgentFinish
from langgraph.prebuilt.tool_executor import ToolExecutor

# It takes in an agent action and calls that tool and returns the result
tool_executor = ToolExecutor(tools)

# Define the agent
def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    return {"agent_outcome": agent_outcome}

# Define the function to execute tools
def execute_tools(data):
    # Get the most recent agent_outcome - this is the key added in the agent node above
    agent_action = data["agent_outcome"]
    output = tool_executor.invoke(agent_action)
    return {"intermediate_steps": [(agent_action, str(output))]}

# Define logic that will be used to determine which conditional edge to go down
def should_continue(data):
    if isinstance(data["agent_outcome"], AgentFinish):
        return "end"
    else:
        return "continue"

#### Define the graph
from langgraph.graph import END, StateGraph

workflow = StateGraph(AgentState)
workflow.add_node("agent", run_agent)
workflow.add_node("action", execute_tools)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        # If tools, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)
workflow.add_edge("action", "agent")

# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

inputs = {"input": "what is the weather in sf", "chat_history": []}
for s in app.stream(inputs):
    print(list(s.values())[0])
    print("----")
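Because workflow.compile() returns an ordinary LangChain runnable, you can also call the graph synchronously instead of streaming it and read the final state directly. The sketch below is illustrative and assumes the finished agent_outcome is an AgentFinish whose return_values dict carries an "output" key, which is the usual shape for the OpenAI Functions agent:

final_state = app.invoke(inputs)
# final_state is the AgentState dict produced by the graph run
print(final_state["agent_outcome"].return_values["output"])  # the final answer
print(final_state["intermediate_steps"])  # accumulated (AgentAction, observation) pairs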