This article is translated and organized from: Build an Agent
https://python.langchain.com/v0.2/docs/tutorials/agents/

Contents
1. Overview (Concepts)
2. Defining Tools (2.1 Tavily, 2.2 Retriever, 2.3 Tools)
3. Using Language Models
4. Creating the Agent
5. Running the Agent
6. Streaming Messages
7. Streaming Tokens
8. Adding Memory
9. Summary

1. Overview
Language models by themselves cannot take actions; they only output text. A big use case for LangChain is creating agents.
Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.
The results of those actions can then be fed back into the agent, which determines whether more actions are needed or whether it is okay to finish.
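This action loop can be sketched in plain Python. The following is a toy illustration only, not LangChain code; the model, tool, and message format are all hypothetical stand-ins:

```python
def toy_model(messages):
    # Hypothetical stand-in for an LLM: it requests a tool call for
    # weather questions, and answers directly once it has an observation.
    last = messages[-1]
    if "weather" in last and "observation:" not in last:
        return {"tool": "search", "args": "weather in SF"}
    return {"answer": "It is sunny."}

def toy_search(query):
    # Hypothetical stand-in for a search tool.
    return f"observation: sunny, 64F ({query})"

tools = {"search": toy_search}

def run_agent(question):
    messages = [question]
    while True:
        step = toy_model(messages)
        if "answer" in step:                         # no more actions needed
            return step["answer"]
        result = tools[step["tool"]](step["args"])   # execute the requested action
        messages.append(result)                      # feed the result back to the model

print(run_agent("whats the weather in sf?"))  # It is sunny.
```

The rest of this tutorial builds the real version of this loop, with an actual LLM deciding on tool calls and LangGraph driving the loop.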
In this tutorial we will build an agent that can interact with multiple different tools: one is a local index we create (a retriever), the other is a search engine.
You will be able to ask this agent questions, watch it call tools, and have conversations with it.

Concepts
The concepts we will cover are:

- Using language models, in particular their tool-calling ability
- Creating a retriever to expose specific information to our agent
- Using a search tool to look things up online
- Using LangGraph agents, which use an LLM to think about what to do and then execute it
- Using LangSmith to debug and trace your application

For project setup, see https://blog.csdn.net/lovechris00/article/details/139130091#_33

2. Defining Tools
We first need to create the tools we want to use. We will use two tools: Tavily (to search online) and a retriever over a local index that we will create.

2.1 Tavily
https://python.langchain.com/v0.2/docs/integrations/tools/tavily_search/
LangChain has a built-in tool that makes it easy to use the Tavily search engine as a tool.
Note that this requires an API key; Tavily has a free tier, but if you don't have a key or don't want to create one, you can skip this step.
Once you have created the API key, export it:

export TAVILY_API_KEY="..."

from langchain_community.tools.tavily_search import TavilySearchResults

API reference: TavilySearchResults

search = TavilySearchResults(max_results=2)
search.invoke("what is the weather in SF")

Output:

[{'url': 'https://weather.com/weather/tenday/l/San Francisco CA USCA0987:1:US',
  'content': "Comfy Cozy\nThat's Not What Was Expected\nOutside\nNo-Name Storms In Florida\nGifts From On High\nWhat To Do For Wheezing\nSurviving The Season\nStay Safe\nAir Quality Index\nAir quality is considered satisfactory, and air pollution poses little or no risk.\n Health & Activities\nSeasonal Allergies and Pollen Count Forecast\nNo pollen detected in your area\nCold & Flu Forecast\nFlu risk is low in your area\nWe recognize our responsibility to use data and technology for good. recents\nSpecialty Forecasts\n10 Day Weather-San Francisco, CA\nToday\nMon 18 | Day\nConsiderable cloudiness. Tue 19 | Day\nLight rain early...then remaining cloudy with showers in the afternoon. Wed 27 | Day\nOvercast with rain showers at times."},
 {'url': 'https://www.accuweather.com/en/us/san-francisco/94103/hourly-weather-forecast/347629',
  'content': 'Hourly weather forecast in San Francisco, CA. Check current conditions in San Francisco, CA with radar, hourly, and more.'}]

2.2 Retriever
We will also create a retriever over some data of our own. For a more detailed explanation of each step here, see this tutorial.
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://docs.smith.langchain.com/overview")
docs = loader.load()
documents = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)
vector = FAISS.from_documents(documents, OpenAIEmbeddings())
retriever = vector.as_retriever()

API reference: WebBaseLoader, FAISS, OpenAIEmbeddings, RecursiveCharacterTextSplitter

retriever.invoke("how to upload a dataset")[0]

Output (abbreviated):

Document(page_content='import Client\nfrom langsmith.evaluation import evaluate\n\nclient = Client()\n# Define dataset: these are your test cases\ndataset_name = "Sample Dataset"\ndataset = client.create_dataset(dataset_name, description="A sample dataset in LangSmith.")\nclient.create_examples(...)\n# Define your evaluator\ndef exact_match(run, example): ...\nexperiment_results = evaluate(...)', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'Getting started with LangSmith | LangSmith', 'description': 'Introduction', 'language': 'en'})

Now that we have populated the index we will retrieve over, we can easily turn it into a tool, the format an agent needs in order to use it correctly:
from langchain.tools.retriever import create_retriever_tool

API reference: create_retriever_tool

retriever_tool = create_retriever_tool(
    retriever,
    "langsmith_search",
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
)

2.3 Tools
Now that we have created both, we can create the list of tools we will use downstream:

tools = [search, retriever_tool]

3. Using Language Models
Next, let's learn how to use a language model to call tools. LangChain supports many different language models: OpenAI, Anthropic, Google, Cohere, FireworksAI, MistralAI, TogetherAI. We use OpenAI as the example here; for calling other models, see https://python.langchain.com/v0.2/docs/tutorials/agents/#using-language-models
pip install -qU langchain-openai

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4")

You can call the language model by passing in a list of messages. By default, the response is a content string.

from langchain_core.messages import HumanMessage

response = model.invoke([HumanMessage(content="hi!")])
response.content

API reference: HumanMessage

'Hello! How can I assist you today?'

Now let's see what it looks like to enable this model to do tool calling. We use .bind_tools to give the language model knowledge of these tools:
model_with_tools = model.bind_tools(tools)

We can now call the model. First let's call it with a normal message and see how it responds; we can look at both the content field and the tool_calls field.
response = model_with_tools.invoke([HumanMessage(content="Hi!")])
print(f"ContentString: {response.content}")
print(f"ToolCalls: {response.tool_calls}")

ContentString: Hello! How can I assist you today?
ToolCalls: []

Now let's try calling it with input that would be expected to trigger a tool call.
response = model_with_tools.invoke([HumanMessage(content="What's the weather in SF?")])
print(f"ContentString: {response.content}")
print(f"ToolCalls: {response.tool_calls}")

ContentString:
ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in SF'}, 'id': 'call_nfE1XbCqZ8eJsB8rNdn4MQZQ'}]

We can see there is now no content, but there is a tool call: it wants us to call the Tavily Search tool.
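A tool call like the one above is only a request; nothing has run yet. To make that concrete, here is a minimal dispatch sketch over a plain dict of tool functions, mirroring the tool_call structure shown above. The tool function and registry are hypothetical, not LangChain APIs:

```python
# Hypothetical stand-in for the real Tavily search tool.
def fake_tavily_search(query):
    return f"search results for: {query}"

# Registry mapping tool names to callables.
tool_registry = {"tavily_search_results_json": fake_tavily_search}

# A tool call in the same shape the model returned above.
tool_call = {
    "name": "tavily_search_results_json",
    "args": {"query": "current weather in SF"},
    "id": "call_123",
}

# Dispatch: look up the tool by name and invoke it with the requested args.
result = tool_registry[tool_call["name"]](**tool_call["args"])
print(result)  # search results for: current weather in SF
```

An agent executor automates exactly this dispatch step, plus feeding the result back to the model.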
This isn't calling the tool yet; the model is only telling us to do so. To actually call it, we need to create our agent.

4. Creating the Agent
Now that we have defined the tools and the LLM, we can create the agent.
We will use LangGraph to construct the agent.
For now we use a high-level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API, in case you want to modify the agent logic.
Now we can initialize the agent with the LLM and the tools.
Note that we pass in model, not model_with_tools. That is because create_tool_calling_executor will call .bind_tools for us under the hood.
from langgraph.prebuilt import chat_agent_executor

agent_executor = chat_agent_executor.create_tool_calling_executor(model, tools)

5. Running the Agent
We can now run the agent on some queries.
Note that for now these are all stateless queries (the agent won't remember previous interactions).
Note that the agent returns the final state at the end of the interaction, which includes any inputs; later we'll see how to get only the outputs.

First, let's see how it responds when no tool call is needed:

response = agent_executor.invoke({"messages": [HumanMessage(content="hi!")]})
response["messages"]

Output:

[HumanMessage(content='hi!', id='1535b889-10a5-45d0-a1e1-dd2e60d4bc04'),
 AIMessage(content='Hello! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 129, 'total_tokens': 139}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-2c94c074-bdc9-4f01-8fd7-71cfc4777d55-0')]

To see exactly what is happening under the hood (and to make sure no tool is being called), we can look at the LangSmith trace.
Now let's try an example where the retriever should be called:
response = agent_executor.invoke(
    {"messages": [HumanMessage(content="how can langsmith help with testing?")]}
)
response["messages"]

Output (tool content abbreviated):

[HumanMessage(content='how can langsmith help with testing?', id='04f4fe8f-391a-427c-88af-1fa064db304c'),
 AIMessage(content='', ..., tool_calls=[{'name': 'langsmith_search', 'args': {'query': 'how can LangSmith help with testing'}, 'id': 'call_FNIgdO97wo51sKx3XZOGLHqT'}]),
 ToolMessage(content='Getting started with LangSmith ... LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own! ...', name='langsmith_search', id='f286c7e7-6514-4621-ac60-e4079b37ebe2', tool_call_id='call_FNIgdO97wo51sKx3XZOGLHqT'),
 AIMessage(content='LangSmith is a platform that can significantly aid in testing by offering several features:\n\n1. **Tracing**: LangSmith provides robust tracing capabilities that enable you to monitor your application closely. This feature is particularly useful for tracking the behavior of your application and identifying any potential issues.\n\n2. **Evaluation**: LangSmith allows you to perform comprehensive evaluations of your application. This can help you assess the performance of your application under various conditions and make necessary adjustments to enhance its functionality.\n\n3. **Production Monitoring & Automations**: With LangSmith, you can keep a close eye on your application when it\'s in active use. The platform provides tools for automatic monitoring and managing routine tasks, helping to ensure your application runs smoothly.\n\n4. **Prompt Hub**: It\'s a prompt management tool built into LangSmith. This feature can be instrumental when testing various prompts in your application.\n\nOverall, LangSmith helps you build production-grade LLM applications with confidence, providing necessary tools for monitoring, evaluation, and automation.', ...)]

Let's take a look at the LangSmith trace to see what is going on under the hood.
Note that the state we get back at the end also contains the tool call and the tool response message.

Now let's try an example where the search tool needs to be called:
response = agent_executor.invoke(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}
)
response["messages"]

Output (tool content abbreviated):

[HumanMessage(content='whats the weather in sf?', id='e6b716e6-da57-41de-a227-fee281fda588'),
 AIMessage(content='', ..., tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_TGDKm0saxuGKJD5OYOXWRvLe'}]),
 ToolMessage(content='[{"url": "https://www.weatherapi.com/", "content": "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', ...}, \'current\': {\'temp_c\': 17.8, \'temp_f\': 64.0, \'condition\': {\'text\': \'Sunny\', ...}, \'wind_mph\': 23.0, \'humidity\': 50, ...}}"}, ...]', name='tavily_search_results_json', id='aa0d8c3d-23b5-425a-ad05-3c174fc04892', tool_call_id='call_TGDKm0saxuGKJD5OYOXWRvLe'),
 AIMessage(content='The current weather in San Francisco, California is sunny with a temperature of 64.0°F (17.8°C). The wind is coming from the WNW at a speed of 23.0 mph. The humidity level is at 50%. There is no precipitation and the cloud cover is 0%. The visibility is 16.0 km. The UV index is 5.0. Please note that this information is as of 14:30 on April 29, 2024, according to [Weather API](https://www.weatherapi.com/).', ...)]

We can check the LangSmith trace to make sure the search tool is being called effectively.

6. Streaming Messages
We've seen how the agent can be called with .invoke to get a final response.
If the agent is executing multiple steps, that may take a while. To show intermediate progress, we can stream messages back as they occur.
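Conceptually, streaming yields one chunk per agent step as soon as the step finishes, instead of returning only the final state. A plain-Python sketch of that pattern (toy steps and chunk shapes, not the LangGraph API):

```python
# Toy sketch: a generator yields each agent step as it completes,
# so callers can display progress instead of waiting for the end.
def stream_steps(question):
    yield {"agent": {"tool_call": f"search({question})"}}   # model requests a tool
    yield {"action": {"tool_output": "sunny, 64F"}}         # tool result comes back
    yield {"agent": {"final_answer": "The weather in SF is sunny."}}

chunks = []
for chunk in stream_steps("weather in sf"):
    chunks.append(chunk)   # in the real tutorial: print(chunk); print("----")
```

The real .stream call below follows the same shape: an 'agent' chunk with a tool call, an 'action' chunk with the tool output, then a final 'agent' chunk with the answer.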
for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}
):
    print(chunk)
    print("----")

Output (abbreviated):

{'agent': {'messages': [AIMessage(content='', ..., tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_50Kb8zHmFqPYavQwF5TgcOH8'}])]}}
----
{'action': {'messages': [ToolMessage(content='[{"url": "https://www.weatherapi.com/", "content": "{\'location\': {\'name\': \'San Francisco\', ...}, \'current\': {\'temp_c\': 17.8, \'temp_f\': 64.0, \'condition\': {\'text\': \'Sunny\', ...}, ...}}"}, ...]', name='tavily_search_results_json', tool_call_id='call_50Kb8zHmFqPYavQwF5TgcOH8')]}}
----
{'agent': {'messages': [AIMessage(content='The current weather in San Francisco, California is sunny with a temperature of 17.8°C (64.0°F). The wind is coming from the WNW at 23.0 mph. The humidity is at 50%. [source](https://www.weatherapi.com/)', ...)]}}
----

7. Streaming Tokens
In addition to streaming back messages, it is also useful to stream back tokens. We can do this with the .astream_events method.

Note: the .astream_events method only works with Python 3.11 or higher.
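Independent of LangChain, token streaming is just iterating an async generator. A minimal asyncio sketch, with a hypothetical token source standing in for the model's stream:

```python
import asyncio

# Hypothetical token source standing in for the model's token stream.
async def fake_token_stream():
    for token in ["The", " weather", " is", " sunny", "."]:
        await asyncio.sleep(0)   # yield control, as a real network stream would
        yield token

async def main():
    pieces = []
    async for token in fake_token_stream():
        pieces.append(token)     # the tutorial prints each token: print(token, end="|")
    return "".join(pieces)

print(asyncio.run(main()))  # The weather is sunny.
```

The .astream_events call below does the same iteration, but the events carry metadata (event kind, run name, tool inputs/outputs) alongside the token chunks.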
async for event in agent_executor.astream_events(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}, version="v1"
):
    kind = event["event"]
    if kind == "on_chain_start":
        if event["name"] == "Agent":
            # Was assigned when creating the agent with .with_config({"run_name": "Agent"})
            print(f"Starting agent: {event['name']} with input: {event['data'].get('input')}")
    elif kind == "on_chain_end":
        if event["name"] == "Agent":
            print()
            print("--")
            print(f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}")
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")
    elif kind == "on_tool_start":
        print("--")
        print(f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}")
    elif kind == "on_tool_end":
        print(f"Done tool: {event['name']}")
        print(f"Tool output was: {event['data'].get('output')}")
        print("--")

Output (tool output abbreviated):

--
Starting tool: tavily_search_results_json with inputs: {'query': 'current weather in San Francisco'}
Done tool: tavily_search_results_json
Tool output was: [{'url': 'https://www.weatherapi.com/', 'content': '{"location": {"name": "San Francisco", ...}, "current": {"temp_c": 17.8, "temp_f": 64.0, "condition": {"text": "Sunny", ...}, ...}}'}, ...]
--
The| current| weather| in| San| Francisco|,| California|,| USA| is| sunny| with| a| temperature| of| |17|.|8|°C| (|64|.|0|°F|).| The| wind| is| blowing| from| the| W|NW| at| a| speed| of| |37|.|1| k|ph| (|23|.|0| mph|).| The| humidity| level| is| at| |50|%.| [|Source|](|https|://|www|.weather|api|.com|/)|

8. Adding Memory
As mentioned earlier, this agent is stateless: it does not remember previous interactions.
To give it memory, we need to pass in a checkpointer.
When passing in a checkpointer, we also have to pass in a thread_id when invoking the agent, so it knows which thread/conversation to resume.
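The checkpointer idea can be sketched in plain Python: a store keyed by thread_id that accumulates the message history for each conversation. This is a toy illustration, not the SqliteSaver API:

```python
# Toy checkpointer: message history is saved per thread_id, so a second
# call with the same thread_id sees the earlier messages.
class ToyCheckpointer:
    def __init__(self):
        self.threads = {}

    def load(self, thread_id):
        return self.threads.get(thread_id, [])

    def save(self, thread_id, messages):
        self.threads[thread_id] = messages

def invoke_with_memory(checkpointer, thread_id, user_message):
    history = checkpointer.load(thread_id)        # resume this thread's state
    history = history + [user_message]
    checkpointer.save(thread_id, history)         # persist for the next call
    return history                                # the state the agent reasons over

cp = ToyCheckpointer()
invoke_with_memory(cp, "abc123", "hi im bob!")
state = invoke_with_memory(cp, "abc123", "whats my name?")
print(state)  # ['hi im bob!', 'whats my name?']
```

A different thread_id would start from an empty history, which is why the real agent below can answer "whats my name?" only when called with the same thread_id as the earlier message.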
from langgraph.checkpoint.sqlite import SqliteSaver

memory = SqliteSaver.from_conn_string(":memory:")

agent_executor = chat_agent_executor.create_tool_calling_executor(
    model, tools, checkpointer=memory
)

config = {"configurable": {"thread_id": "abc123"}}

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="hi im bob!")]}, config
):
    print(chunk)
    print("----")

Output:

{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 131, 'total_tokens': 142}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-607733e3-4b8d-4137-ae66-8a4b8ccc8d40-0')]}}
----

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats my name?")]}, config
):
    print(chunk)
    print("----")

Output:

{'agent': {'messages': [AIMessage(content='Your name is Bob. How can I assist you further?', response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 154, 'total_tokens': 167}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-e1181ba6-732d-4564-b479-9f1ab6bf01f6-0')]}}
----

See the example LangSmith trace.

9. Summary
That's a wrap! In this quick start we covered how to create a simple agent.
We then showed how to stream back a response, not only the intermediate steps but also the tokens.
We also added memory so that you can hold a conversation with the agent. Agents are a complex topic, and there is a lot to learn!
For more information on agents, check out the LangGraph documentation. It has its own set of concepts, tutorials, and how-to guides.

2024-05-22