import os
import openai
from dotenv import load_dotenv, find_dotenv
# Load environment variables from the local/project .env file.
# find_dotenv() locates the path of the .env file.
# load_dotenv() reads that .env file and loads its variables into the current environment.
# If you have already set the variable globally, this line has no effect.
_ = load_dotenv(find_dotenv())
# Read the environment variable OPENAI_API_KEY
openai.api_key = os.environ['OPENAI_API_KEY']
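For reference, a minimal .env file in the project root could look like the following (the value is a placeholder; use your own key):
# .env
OPENAI_API_KEY=sk-your-api-key-here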
2. Using OpenAI Directly
We start by calling the OpenAI API directly.
get_completion is a thin wrapper around the openai client: given a prompt, it returns the model's answer. It takes two parameters:
- prompt: required. The prompt you give the model; it can be a question or a task you want the model to perform (rewriting text in a different style, translating, drafting a reply, and so on).
- model: optional. Defaults to gpt-3.5-turbo; you can choose another model.
The prompt here corresponds to the question you would type into ChatGPT, and the function's output corresponds to ChatGPT's answer.
from openai import OpenAI
def get_completion(prompt, model="gpt-3.5-turbo"):
    # Wrap the prompt as a single user message
    messages = [{"role": "user", "content": prompt}]
    client = OpenAI()
    # temperature=0 makes the output as deterministic as possible
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content
2.1 Computing 1+1
Let's start with a simple example and ask the model the same question in Chinese and in English:
- Prompt in Chinese: 1+1是什么?
- Prompt in English: What is 1+1?
# Chinese
get_completion("1+1是什么?")
'1+1等于2。'
# English
get_completion("What is 1+1?")
'1+1 equals 2.'
2.2 Rewriting a Pirate's Email in American English
In the simple example above, gpt-3.5-turbo answered our question about 1+1.
Now let's look at a slightly more complex example:
Suppose we work for an e-commerce company. One of our customers, a pirate, bought a blender on our site to make smoothies. While making one, the lid flew off and splattered smoothie all over the kitchen walls. The pirate then sent the following email to our customer service center: customer_email
customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse,\
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""
Our customer service staff find the pirate's wording a bit hard to follow. We now have two small goals:
- Have the model translate the pirate's email into American English so the staff can understand it more easily. (Think of the pirate's English as a dialect; its relation to American English is like that of Sichuanese to Standard Mandarin.)
- Have the model use a calm and respectful tone in the translation, which will also make the staff's day a little better.
Based on these two goals, we define the target writing style: style
# American English + calm, respectful tone
style = """American English \
in a calm and respectful tone
"""
The next step is to combine customer_email and style into our prompt: prompt
# Ask the model to convert the text into the given tone
prompt = f"""Translate the text \
that is delimited by triple backticks
into a style that is {style}.
text: ```{customer_email}```
"""
print(prompt)
Translate the text that is delimited by triple backticks into a style that is American English in a calm and respectful tone . text: ``` Arrr, I be fuming that me blender lid flew off and splattered me kitchen walls with smoothie! And to make matters worse,the warranty don't cover the cost of cleaning up me kitchen. I need yer help right now, matey! ```
With the prompt constructed, we can call get_completion to get the result we want: the pirate's email expressed in American English with a calm and respectful tone.
response = get_completion(prompt)
response
"Ah, I'm really frustrated that my blender lid flew off and splattered my kitchen walls with smoothie! And to make matters worse, the warranty doesn't cover the cost of cleaning up my kitchen. I could really use your help right now, friend."
Comparing the email before and after the style conversion: the wording is more formal, the extreme emotional expressions are gone, and the request for help is phrased politely.
✨ Try modifying the prompt and see what different results you get 😉
2.3 Chinese Version
# Mandarin + calm, respectful tone
style = """正式普通话 \
用一个平静、尊敬的语调
"""
# Informal wording
customer_email = """
阿,我很生气,\
因为我的搅拌机盖掉了,\
把奶昔溅到了厨房的墙上!\
更糟糕的是,保修不包括打扫厨房的费用。\
我现在需要你的帮助,伙计!
"""
# Ask the model to convert the text into the given tone
prompt = f"""把由三个反引号分隔的文本\
翻译成一种{style}风格。
文本: ```{customer_email}```
"""
print(prompt)
把由三个反引号分隔的文本翻译成一种正式普通话 用一个平静、尊敬的语调 风格。 文本: ``` 阿,我很生气,因为我的搅拌机盖掉了,把奶昔溅到了厨房的墙上!更糟糕的是,保修不包括打扫厨房的费用。我现在需要你的帮助,伙计! ```
response = get_completion(prompt)
response
'尊敬的先生/女士,\n\n我感到非常不安,因为我的搅拌机盖子掉了,导致奶昔溅到了厨房的墙上!更糟糕的是,保修不包括清洁厨房的费用。我现在需要您的帮助,朋友!感谢您的理解和支持。'
3. Using OpenAI via LangChain
In the previous part, we called OpenAI directly through the wrapper function get_completion to translate the dialect email, obtaining an email written in formal Mandarin with a calm and respectful tone.
Let's now implement the same functionality with LangChain.
3.1 Models
Import OpenAI's chat model ChatOpenAI from langchain_community.chat_models.openai. Besides OpenAI, langchain_community.chat_models integrates other chat models as well; see the LangChain documentation for details.
from langchain_community.chat_models.openai import ChatOpenAI
# Here we set temperature to 0.0 to reduce the randomness of the generated answers.
# If you want a different, more creative answer each time, try adjusting this parameter.
chat = ChatOpenAI(temperature=0)
chat
The output above shows that ChatOpenAI's default model is gpt-3.5-turbo.
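The model and temperature can be overridden when constructing the client; a minimal sketch (the model name below is only an example, pick one your account has access to):
# Sketch: explicitly choosing a model and allowing some randomness
creative_chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)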
3.2.1 Using a LangChain Prompt Template
1️⃣ Construct the prompt template string
We construct a prompt template string: template_string
template_string = """Translate the text \
that is delimited by triple backticks \
into a style that is {style}. \
text: ```{text}```
"""
2️⃣ Construct the LangChain prompt template
We call ChatPromptTemplate.from_template() to convert the prompt template string template_string above into the prompt template prompt_template.
from langchain.prompts.chat import ChatPromptTemplate
prompt_template = ChatPromptTemplate.from_template(template_string)
print(prompt_template.messages[0].prompt)
input_variables=['style', 'text'] template='Translate the text that is delimited by triple backticks into a style that is {style}. text: ```{text}```\n'
As the output above shows, prompt_template has two input variables: style and text.
print(prompt_template.messages[0].prompt.input_variables)
['style', 'text']
3️⃣ Use the template to build the customer message
The LangChain prompt template prompt_template takes two input variables, style and text, which here correspond to:
- customer_style: the style we want for the customer's email
- customer_email: the customer's original email text
customer_style = """American English \
in a calm and respectful tone
"""
customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""
Given customer_style and customer_email, we can use the template's format_messages method to generate the customer message customer_messages we want.
customer_messages = prompt_template.format_messages(
style=customer_style,
text=customer_email)
print(type(customer_messages))
print(type(customer_messages[0]))
<class 'list'> <class 'langchain_core.messages.human.HumanMessage'>
As we can see, customer_messages is a list, and its elements are LangChain's own message type (langchain_core.messages.human.HumanMessage).
Printing the first element gives the following:
print(customer_messages[0])
content="Translate the text that is delimited by triple backticks into a style that is American English in a calm and respectful tone\n. text: ```\nArrr, I be fuming that me blender lid flew off and splattered me kitchen walls with smoothie! And to make matters worse, the warranty don't cover the cost of cleaning up me kitchen. I need yer help right now, matey!\n```\n"
# Chinese prompt
from langchain.prompts import ChatPromptTemplate
template_string = """把由三个反引号分隔的文本\
翻译成一种{style}风格。\
文本: ```{text}```
"""
prompt_template = ChatPromptTemplate.from_template(template_string)
customer_style = """正式普通话 \
用一个平静、尊敬的语气
"""
customer_email = """
阿,我很生气,\
因为我的搅拌机盖掉了,\
把奶昔溅到了厨房的墙上!\
更糟糕的是,保修不包括打扫厨房的费用。\
我现在需要你的帮助,伙计!
"""
customer_messages = prompt_template.format_messages(
style=customer_style,
text=customer_email)
print(customer_messages[0])
content='把由三个反引号分隔的文本翻译成一种正式普通话 用一个平静、尊敬的语气\n风格。文本: ```\n阿,我很生气,因为我的搅拌机盖掉了,把奶昔溅到了厨房的墙上!更糟糕的是,保修不包括打扫厨房的费用。我现在需要你的帮助,伙计!\n```\n'
4️⃣ Call the chat model to translate into the desired style
customer_response = chat.invoke(customer_messages, temperature=0.0)
print(customer_response.content)
非常抱歉听到您的困扰。我理解您的不快,因为搅拌机盖掉了,导致奶昔溅到了厨房的墙上。更令人不安的是,保修不包括清洁厨房的费用。现在我需要您的帮助,朋友。让我们一起解决这个问题。感谢您的理解和支持。
5️⃣ Use the template to build the reply message
Next, we go one step further and convert the customer service reply into the pirate's style of English, while making sure the message stays polite.
Here we can reuse the LangChain prompt template built in step 2️⃣ to get the reply message prompt.
service_reply = """Hey there customer, \
the warranty does not cover \
cleaning expenses for your kitchen \
because it's your fault that \
you misused your blender \
by forgetting to put the lid on before \
starting the blender. \
Tough luck! See ya!
"""
service_style_pirate = """\
a polite tone \
that speaks in English Pirate\
"""
service_messages = prompt_template.format_messages(
style=service_style_pirate,
text=service_reply)
print(service_messages[0].content)
把由三个反引号分隔的文本翻译成一种a polite tone that speaks in English Pirate风格。文本: ```Hey there customer, the warranty does not cover cleaning expenses for your kitchen because it's your fault that you misused your blender by forgetting to put the lid on before starting the blender. Tough luck! See ya! ```
service_response = chat(service_messages)
print(service_response.content)
/Users/lta/anaconda3/envs/cookbook/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The method `BaseChatModel.__call__` was deprecated in langchain-core 0.1.7 and will be removed in 0.3.0. Use invoke instead. warn_deprecated(
Ahoy there, me hearty customer! The warranty be not coverin' the cleanin' expenses for yer galley because 'tis yer own fault for misusin' yer blender by forgettin' to put the lid on afore startin' the blender. Tough luck, matey! Fare thee well!
3.2.2 Chinese Version
service_reply = """嘿,顾客, \
保修不包括厨房的清洁费用, \
因为您在启动搅拌机之前 \
忘记盖上盖子而误用搅拌机, \
这是您的错。 \
倒霉! 再见!
"""
service_style_pirate = """\
一个有礼貌的语气 \
使用正式的普通话 \
"""
service_messages = prompt_template.format_messages(
style=service_style_pirate,
text=service_reply)
print(service_messages[0].content)
把由三个反引号分隔的文本翻译成一种一个有礼貌的语气 使用正式的普通话 风格。文本: ```嘿,顾客, 保修不包括厨房的清洁费用, 因为您在启动搅拌机之前 忘记盖上盖子而误用搅拌机, 这是您的错。 倒霉! 再见! ```
service_response = chat(service_messages)
print(service_response.content)
尊敬的顾客,根据我们的保修政策,很遗憾告知您,厨房清洁费用不在保修范围内。这是因为在使用搅拌机之前,您忘记盖上盖子而导致搅拌机误用,这属于您的责任范围。非常抱歉给您带来不便。祝您好运,再见。
3.2.3 Why Prompt Templates?
In more complex scenarios, prompts can get very long and contain many details. Prompt templates let us conveniently reuse prompts we have carefully designed.
Below is an example of a fairly long prompt template. Students submit homework online, and the following prompt is used to grade their submissions (English and Chinese versions below; a sketch of wrapping such a prompt into a reusable template follows them).
# English version
prompt = """ Your task is to determine if the student's solution is correct or not
To solve the problem do the following:
- First, workout your own solution to the problem
- Then compare your solution to the student's solution
and evaluate if the sudtent's solution is correct or not.
...
Use the following format:
Question:
```
question here
```
Student's solution:
```
student's solution here
```
Actual solution:
```
...
steps to work out the solution and your solution here
```
Is the student's solution the same as the actual solution \
just calculated:
```
yes or no
```
Student grade
```
correct or incorrect
```
Question:
```
{question}
```
Student's solution:
```
{student's solution}
```
Actual solution:
"""
# Chinese version
prompt = """ 你的任务是判断学生的解决方案是正确的还是不正确的
要解决该问题,请执行以下操作:
- 首先,制定自己的问题解决方案
- 然后将您的解决方案与学生的解决方案进行比较
并评估学生的解决方案是否正确。
...
使用下面的格式:
问题:
```
问题文本
```
学生的解决方案:
```
学生的解决方案文本
```
实际解决方案:
```
...
制定解决方案的步骤以及您的解决方案请参见此处
```
学生的解决方案是否与 \
刚刚计算出的实际解决方案相同:
```
是或者不是
```
学生的成绩
```
正确或者不正确
```
问题:
```
{question}
```
学生的解决方案:
```
{student's solution}
```
实际解决方案:
"""
In addition, LangChain provides prompt templates for common scenarios such as summarization, question answering, connecting to SQL databases, and connecting to different APIs. By using LangChain's built-in prompt templates, you can quickly build your own LLM application without spending time designing and constructing prompts yourself.
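For example, the built-in summarization chain ships with its own prompt, so we only supply the documents. A minimal sketch, assuming the same langchain / langchain_community versions used in this notebook (the sample document text is made up):
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
# The "stuff" chain inserts all documents into LangChain's built-in summarization prompt.
docs = [Document(page_content="LangChain is a framework for building LLM applications. "
                              "It provides prompt templates, chains, agents and output parsers.")]
summary_chain = load_summarize_chain(chat, chain_type="stuff")
print(summary_chain.run(docs))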
Finally, when building LLM applications we usually want the model's output in a given format, for example using specific keywords to structure the output. Below is an example of chain-of-thought reasoning with an LLM for the question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?
Using LangChain, the output is structured around the keywords "Thought", "Action", and "Observation" as chain-of-thought reasoning steps.
Thought: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of the area.
Action: Search[Colorado orogeny]
Observation: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.
Thought: It does not mention the eastern sector. So I need to look up eastern sector.
Action: Lookup[eastern sector]
Observation: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.
Thought: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.
Action: Search[High Plains]
Observation: High Plains refers to one of two distinct land regions
Thought: I need to instead search High Plains (United States).
Action: Search[High Plains (United States)]
Observation: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]
Thought: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.
Action: Finish[1,800 to 7,000 ft]
Another code example of chain-of-thought reasoning with LangChain and OpenAI is given in the supplementary material.
3.3 Output Parsers
3.3.1 Without an Output Parser
For a given review customer_review, we want to extract information and output it in the following format:
{
"gift": False,
"delivery_days": 5,
"price_value": "pretty affordable!"
}
customer_review = """\
This leaf blower is pretty amazing. It has four settings:\
candle blower, gentle breeze, windy city, and tornado. \
It arrived in two days, just in time for my wife's \
anniversary present. \
I think my wife liked it so much she was speechless. \
So far I've been the only one using it, and I've been \
using it every other morning to clear the leaves on our lawn. \
It's slightly more expensive than the other leaf blowers \
out there, but I think it's worth it for the extra features.
"""
1️⃣ Construct the prompt template string
review_template = """\
For the following text, extract the following information:
gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.
delivery_days: How many days did it take for the product \
to arrive? If this information is not found, output -1.
price_value: Extract any sentences about the value or price,\
and output them as a comma separated Python list.
Format the output as JSON with the following keys:
gift
delivery_days
price_value
text: {text}
"""
2️⃣ Construct the LangChain prompt template
from langchain.prompts import ChatPromptTemplate
prompt_template = ChatPromptTemplate.from_template(review_template)
print(prompt_template)
input_variables=['text'] messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], template='For the following text, extract the following information:\n\ngift: Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.\n\ndelivery_days: How many days did it take for the product to arrive? If this information is not found, output -1.\n\nprice_value: Extract any sentences about the value or price,and output them as a comma separated Python list.\n\nFormat the output as JSON with the following keys:\ngift\ndelivery_days\nprice_value\n\ntext: {text}\n'))]
3️⃣ Use the template to build the prompt messages
messages = prompt_template.format_messages(text=customer_review)
4️⃣ Call the chat model to extract the information
chat = ChatOpenAI(temperature=0.0)
response = chat(messages)
print(response.content)
{
"gift": true,
"delivery_days": 2,
"price_value": ["It's slightly more expensive than the other leaf blowers out there"]
}
📝 Analysis and Summary
response.content is a string (str), not a dictionary (dict), so calling the get method on it directly raises an error, as shown below. This is why we need an output parser (see also the sketch after the error output).
type(response.content)
str
response.content.get('gift')
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[43], line 1
----> 1 response.content.get('gift')
AttributeError: 'str' object has no attribute 'get'
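In this particular run the reply happens to be bare JSON, so json.loads would work, but nothing in the prompt guarantees that shape: the model could wrap the JSON in a ```json code fence or use Python-style literals. A small defensive sketch of the manual approach:
import json
try:
    parsed = json.loads(response.content)  # only works if the reply is bare, valid JSON
except json.JSONDecodeError:
    parsed = None  # e.g. the reply was wrapped in a ```json ... ``` code fence
print(type(parsed))
The output parser in section 3.3.3 avoids this guesswork by telling the model exactly which format to use and then parsing that format.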
3.3.2 Chinese Version
from langchain.prompts import ChatPromptTemplate
customer_review = """\
这款吹叶机非常神奇。 它有四个设置:\
吹蜡烛、微风、风城、龙卷风。 \
两天后就到了,正好赶上我妻子的\
周年纪念礼物。 \
我想我的妻子会喜欢它到说不出话来。 \
到目前为止,我是唯一一个使用它的人,而且我一直\
每隔一天早上用它来清理草坪上的叶子。 \
它比其他吹叶机稍微贵一点,\
但我认为它的额外功能是值得的。
"""
review_template = """\
对于以下文本,请从中提取以下信息:
礼物:该商品是作为礼物送给别人的吗? \
如果是,则回答 是的;如果否或未知,则回答 不是。
交货天数:产品需要多少天\
到达? 如果没有找到该信息,则输出-1。
价钱:提取有关价值或价格的任何句子,\
并将它们输出为逗号分隔的 Python 列表。
使用以下键将输出格式化为 JSON:
礼物
交货天数
价钱
文本: {text}
"""
prompt_template = ChatPromptTemplate.from_template(review_template)
print(prompt_template)
input_variables=['text'] messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['text'], template='对于以下文本,请从中提取以下信息:\n\n礼物:该商品是作为礼物送给别人的吗? 如果是,则回答 是的;如果否或未知,则回答 不是。\n\n交货天数:产品需要多少天到达? 如果没有找到该信息,则输出-1。\n\n价钱:提取有关价值或价格的任何句子,并将它们输出为逗号分隔的 Python 列表。\n\n使用以下键将输出格式化为 JSON:\n礼物\n交货天数\n价钱\n\n文本: {text}\n'))]
messages = prompt_template.format_messages(text=customer_review)
chat = ChatOpenAI(temperature=0.0)
response = chat(messages)
print(response.content)
{
"礼物": "是的",
"交货天数": 2,
"价钱": ["它比其他吹叶机稍微贵一点"]
}
3.3.3 The LangChain Output Parser
1️⃣ Construct the prompt template string
review_template_2 = """\
For the following text, extract the following information:
gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.
delivery_days: How many days did it take for the product\
to arrive? If this information is not found, output -1.
price_value: Extract any sentences about the value or price,\
and output them as a comma separated Python list.
text: {text}
{format_instructions}
"""
2️⃣ Construct the LangChain prompt template
prompt = ChatPromptTemplate.from_template(template=review_template_2)
🔥 Construct the output parser
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser
gift_schema = ResponseSchema(
    name="gift",
    description="Was the item purchased as a gift for someone else? "
                "Answer True if yes, False if not or unknown.",
)
delivery_days_schema = ResponseSchema(
    name="delivery_days",
    description="How many days did it take for the product to arrive? "
                "If this information is not found, output -1.",
)
price_value_schema = ResponseSchema(
    name="price_value",
    description="Extract any sentences about the value or price, "
                "and output them as a comma separated Python list.",
)
response_schemas = [gift_schema, delivery_days_schema, price_value_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":
```json
{
"gift": string // Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.
"delivery_days": string // How many days did it take for the product to arrive? If this information is not found, output -1.
"price_value": string // Extract any sentences about the value or price, and output them as a comma separated Python list.
}
```
3️⃣ Use the template to build the prompt messages
messages = prompt.format_messages(text=customer_review, format_instructions=format_instructions)
print(messages[0].content)
For the following text, extract the following information:
gift: Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.
delivery_days: How many days did it take for the productto arrive? If this information is not found, output -1.
price_value: Extract any sentences about the value or price,and output them as a comma separated Python list.
text: 这款吹叶机非常神奇。 它有四个设置:吹蜡烛、微风、风城、龙卷风。 两天后就到了,正好赶上我妻子的周年纪念礼物。 我想我的妻子会喜欢它到说不出话来。 到目前为止,我是唯一一个使用它的人,而且我一直每隔一天早上用它来清理草坪上的叶子。 它比其他吹叶机稍微贵一点,但我认为它的额外功能是值得的。
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":
```json
{
"gift": string // Was the item purchased as a gift for someone else? Answer True if yes, False if not or unknown.
"delivery_days": string // How many days did it take for the product to arrive? If this information is not found, output -1.
"price_value": string // Extract any sentences about the value or price, and output them as a comma separated Python list.
}
```
4️⃣ Call the chat model to extract the information
response = chat(messages)
print(response.content)
```json
{
"gift": true,
"delivery_days": 2,
"price_value": "它比其他吹叶机稍微贵一点"
}
```
5️⃣ Parse the output with the output parser
output_dict = output_parser.parse(response.content)
output_dict
{'gift': True, 'delivery_days': 2, 'price_value': '它比其他吹叶机稍微贵一点'}
📝 Analysis and Summary
output_dict is a dictionary (dict), so the get method can be used directly. This kind of output is much easier for downstream tasks to work with (see the sketch after the example below).
type(output_dict)
dict
output_dict.get('delivery_days')
2
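With a plain dict, downstream logic becomes straightforward; a purely illustrative sketch (the routing rule below is made up):
# Illustrative only: route reviews based on the parsed fields
days = output_dict.get('delivery_days')
if output_dict.get('gift') and days is not None and int(days) <= 3:
    print("Gift arrived quickly - consider featuring this review.")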
3.3.4 Chinese Version
# Chinese
review_template_2 = """\
对于以下文本,请从中提取以下信息::
礼物:该商品是作为礼物送给别人的吗?
如果是,则回答 是的;如果否或未知,则回答 不是。
交货天数:产品到达需要多少天? 如果没有找到该信息,则输出-1。
价钱:提取有关价值或价格的任何句子,并将它们输出为逗号分隔的 Python 列表。
文本: {text}
{format_instructions}
"""
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser
gift_schema = ResponseSchema(
    name="礼物",
    description="这件物品是作为礼物送给别人的吗? 如果是,则回答 是的, 如果否或未知,则回答 不是。",
)
delivery_days_schema = ResponseSchema(
    name="交货天数",
    description="产品需要多少天才能到达? 如果没有找到该信息,则输出-1。",
)
price_value_schema = ResponseSchema(
    name="价钱",
    description="提取有关价值或价格的任何句子, 并将它们输出为逗号分隔的 Python 列表",
)
response_schemas = [gift_schema, delivery_days_schema, price_value_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":
```json
{
"礼物": string // 这件物品是作为礼物送给别人的吗? 如果是,则回答 是的, 如果否或未知,则回答 不是。
"交货天数": string // 产品需要多少天才能到达? 如果没有找到该信息,则输出-1。
"价钱": string // 提取有关价值或价格的任何句子, 并将它们输出为逗号分隔的 Python 列表
}
```
messages = prompt.format_messages(text=customer_review, format_instructions=format_instructions)
print(messages[0].content)
对于以下文本,请从中提取以下信息::
礼物:该商品是作为礼物送给别人的吗?
如果是,则回答 是的;如果否或未知,则回答 不是。
交货天数:产品到达需要多少天? 如果没有找到该信息,则输出-1。
价钱:提取有关价值或价格的任何句子,并将它们输出为逗号分隔的 Python 列表。
文本: 这款吹叶机非常神奇。 它有四个设置:吹蜡烛、微风、风城、龙卷风。 两天后就到了,正好赶上我妻子的周年纪念礼物。 我想我的妻子会喜欢它到说不出话来。 到目前为止,我是唯一一个使用它的人,而且我一直每隔一天早上用它来清理草坪上的叶子。 它比其他吹叶机稍微贵一点,但我认为它的额外功能是值得的。
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":
```json
{
"礼物": string // 这件物品是作为礼物送给别人的吗? 如果是,则回答 是的, 如果否或未知,则回答 不是。
"交货天数": string // 产品需要多少天才能到达? 如果没有找到该信息,则输出-1。
"价钱": string // 提取有关价值或价格的任何句子, 并将它们输出为逗号分隔的 Python 列表
}
```
response = chat(messages)
print(response.content)
```json
{
"礼物": "是的",
"交货天数": "两天",
"价钱": "它比其他吹叶机稍微贵一点"
}
```
output_dict = output_parser.parse(response.content)
output_dict
{'礼物': '是的', '交货天数': '两天', '价钱': '它比其他吹叶机稍微贵一点'}
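As in the English version, the parsed result is a plain dict, so get works directly:
output_dict.get('交货天数')
'两天'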
4. Supplementary Material
4.1 Chain-of-Thought Reasoning (ReAct)
Reference: ReAct (Reason+Act) prompting in OpenAI GPT and LangChain
from langchain.docstore.wikipedia import Wikipedia
from langchain_community.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, Tool, AgentExecutor
from langchain.agents.react.base import DocstoreExplorer
docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description="Search for a term in the docstore.",
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description="Lookup a term in the docstore.",
    ),
]
# Use the chat LLM
llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
)
# Initialize the ReAct agent
react = initialize_agent(tools, llm, agent="react-docstore", verbose=True)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=react.agent,
    tools=tools,
    verbose=True,
)
question = "Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?"
agent_executor.run(question)
/Users/lta/anaconda3/envs/cookbook/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The class `DocstoreExplorer` was deprecated in LangChain 0.1.0 and will be removed in 0.3.0
  warn_deprecated(
/Users/lta/anaconda3/envs/cookbook/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The function `initialize_agent` was deprecated in LangChain 0.1.0 and will be removed in 0.3.0. Use Use new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. instead.
  warn_deprecated(
/Users/lta/anaconda3/envs/cookbook/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning: The method `Chain.run` was deprecated in langchain 0.1.0 and will be removed in 0.3.0. Use invoke instead.
  warn_deprecated(
> Entering new AgentExecutor chain...
Thought: I need to search David Chanoff, find the U.S. Navy admiral he collaborated with, and then find out which President the admiral served as the ambassador to the United Kingdom under.
Action: Search[David Chanoff]
Observation: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought: David Chanoff has collaborated with U.S. Navy admiral William J. Crowe. I need to search William J. Crowe and find out which President he served as the ambassador to the United Kingdom under.
Action: Search[William J. Crowe]
Observation: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Thought: William J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton. So the answer is Bill Clinton.
Action: Finish[Bill Clinton]
> Finished chain.
'Bill Clinton'