Basic Usage of DeepSeek with LangChain

Contents:
- Prompt completion with DeepSeek via the OpenAI-compatible API
- DeepSeek with LangChain
- Structured output from DeepSeek via LangChain

Prompt completion with DeepSeek via the OpenAI-compatible API
First, install the dependency with pip:
pip install openai

Let's first get familiar with basic question answering against DeepSeek through the OpenAI-compatible interface. The code is as follows:

from openai import OpenAI

client = OpenAI(api_key=api_key, base_url="https://api.deepseek.com")

def get_completion(prompt, model="deepseek-chat"):
    # messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": prompt},
        ],
        stream=False,
    )
    return response

resp = get_completion("What is 1+1?")
print(resp)
print(resp.choices[0].message.content)
Here we ask the model what 1 + 1 equals, and it answers correctly. Often, to reuse a capability, we want to design a template for a whole class of problems, so that different concrete questions can be substituted into the same skeleton. How do we use templates? In the example below we want to transform a piece of text into a new expression style, using the LLM to rewrite the content.
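The templating idea itself is independent of any LLM library: one prompt skeleton, different concrete inputs substituted in. A minimal plain-Python sketch of the pattern (the function name is illustrative, not part of any API):

```python
def build_prompt(style: str, text: str) -> str:
    """Fill a fixed translation skeleton with a target style and source text."""
    return (
        "Translate the text that is delimited by triple backticks "
        f"into a style that is {style}. "
        f"text: ```{text}```"
    )

prompt = build_prompt(
    "American English in a calm and respectful tone",
    "Arrr, I be fuming that me blender lid flew off!",
)
print(prompt)
```

The LangChain version below does exactly this substitution, just wrapped in a reusable prompt-template object.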
# Template development
customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""

style = """American English \
in a calm and respectful tone"""

prompt = f"""Translate the text \
that is delimited by triple backticks
into a style that is {style}.
text: ```{customer_email}```
"""

response = get_completion(prompt)
print(response)
print("------------")
print(response.choices[0].message.content)

DeepSeek with LangChain
First, install the dependencies with pip:
pip install langchain_openai langchain

We now implement similar logic, using the LLM to transform the same piece of text:

from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chat = ChatOpenAI(
    model="deepseek-chat",
    openai_api_key=api_key,
    openai_api_base="https://api.deepseek.com",
    max_tokens=1024,
)

template_string = """Translate the text \
that is delimited by triple backticks \
into a style that is {style}. \
text: ```{text}```
"""

prompt_template = ChatPromptTemplate.from_template(template_string)
customer_style = """American English \
in a calm and respectful tone"""

customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""

customer_messages = prompt_template.format_messages(
    style=customer_style,
    text=customer_email,
)

# Call the LLM to translate to the style of the customer message
# Reference: chat = ChatOpenAI(temperature=0.0)
customer_response = chat.invoke(customer_messages, temperature=0)
print(customer_response.content)

service_reply = """Hey there customer, \
the warranty does not cover \
cleaning expenses for your kitchen \
because it's your fault that \
you misused your blender \
by forgetting to put the lid on before \
starting the blender. \
Tough luck! See ya!
"""

service_style_pirate = """\
a polite tone \
that speaks in English Pirate\
"""

service_messages = prompt_template.format_messages(
    style=service_style_pirate,
    text=service_reply,
)

service_response = chat.invoke(service_messages, temperature=0)
print(service_response.content)

Structured output from DeepSeek via LangChain
How do we get the LLM to return information in a specific structure, for example as JSON? Let's adapt the example above. First, what should the returned data look like? We need to design the output schema:

gift_schema = ResponseSchema(
    name="gift",
    description="Was the item purchased \
as a gift for someone else? \
Answer True if yes, \
False if not or unknown.",
)

delivery_days_schema = ResponseSchema(
    name="delivery_days",
    description="How many days \
did it take for the product \
to arrive? If this \
information is not found, \
output -1.",
)

response_schemas = [gift_schema, delivery_days_schema]

We have defined the returned data structure: gift is True or False, and delivery_days is the delivery time, defaulting to -1.
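Concretely, the parser will expect the model to answer with a JSON object carrying those two keys. A small illustrative example of that shape (the values here are made up for demonstration):

```python
import json

# Illustrative reply shape matching the two ResponseSchema fields above
example_reply = '{"gift": true, "delivery_days": 2}'
parsed = json.loads(example_reply)
print(parsed["gift"], parsed["delivery_days"])  # → True 2
```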
from langchain_openai import ChatOpenAI
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser
from langchain.prompts import ChatPromptTemplate

chat = ChatOpenAI(
    model="deepseek-chat",
    openai_api_key=api_key,
    openai_api_base="https://api.deepseek.com",
    max_tokens=1024,
)

gift_schema = ResponseSchema(
    name="gift",
    description="Was the item purchased \
as a gift for someone else? \
Answer True if yes, \
False if not or unknown.",
)

delivery_days_schema = ResponseSchema(
    name="delivery_days",
    description="How many days \
did it take for the product \
to arrive? If this \
information is not found, \
output -1.",
)

response_schemas = [gift_schema, delivery_days_schema]

output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
print(output_parser)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)

customer_review = """\
This leaf blower is pretty amazing. It has four settings: \
candle blower, gentle breeze, windy city, and tornado. \
It arrived in two days, just in time for my wife's \
anniversary present. \
I think my wife liked it so much she was speechless. \
So far I've been the only one using it, and I've been \
using it every other morning to clear the leaves on our lawn. \
It's slightly more expensive than the other leaf blowers \
out there, but I think it's worth it for the extra features.
"""
review_template = """\
For the following text, extract the following information:

gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.

delivery_days: How many days did it take for the product \
to arrive? If this information is not found, output -1.

Format the output as JSON with the following keys:
gift
delivery_days

text: {text}

{format_instructions}
"""

prompt = ChatPromptTemplate.from_template(template=review_template)
messages = prompt.format_messages(
    text=customer_review,
    format_instructions=format_instructions,
)

response = chat.invoke(messages, temperature=0)
output_dict = output_parser.parse(response.content)
print(output_dict)
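What does output_parser.parse do with the reply? Essentially, it pulls the fenced JSON block out of the model's text and loads it into a dict. A minimal stdlib sketch of that behavior (not the actual LangChain implementation, and it assumes the model followed the format instructions and the JSON has no nested objects):

```python
import json
import re

def parse_structured(reply: str) -> dict:
    """Extract a ```json ... ``` fenced block from an LLM reply and load it."""
    match = re.search(r"```json\s*(\{.*?\})\s*```", reply, re.DOTALL)
    if match is None:
        raise ValueError("no fenced JSON block found in reply")
    return json.loads(match.group(1))

reply = """Here is the extraction:
```json
{"gift": true, "delivery_days": 2}
```"""
print(parse_structured(reply))  # → {'gift': True, 'delivery_days': 2}
```

If the model's reply does not contain a well-formed JSON block, the real parser raises an exception, so production code should wrap the parse step in error handling or retry logic.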