Running ChatGPT programmatically - how can I continue a conversation without resubmitting all the previous messages?
You can get ChatGPT's response to a prompt along these lines:
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
    ]
)

print(completion.choices[0].message.content)
So how do I continue that conversation? I've seen examples saying you simply append the new message to the message list and resubmit everything:
# Continue the conversation by including the initial messages and adding a new one
continued_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."},
        {"role": "assistant", "content": completion.choices[0].message.content},  # Include the initial response
        {"role": "user", "content": "Can you elaborate more on how recursion can lead to infinite loops if not properly handled?"}  # New follow-up prompt
    ]
)
But as far as I can tell, that means every previous message gets reprocessed with each new prompt, which seems wasteful. Is that really the only way? Is there some way to keep a "session" so that ChatGPT remembers the earlier state and only has to process the newly supplied prompt?
1 Answer
Conversation summarization

Now let's look at a slightly more complex kind of memory: conversation summary memory. This type of memory summarizes the conversation as it goes along, condensing the information that has been exchanged so far. It keeps an up-to-date summary in memory, and that summary can then be injected into a prompt or chain to stand in for the conversation to date. It is particularly useful for longer conversations, where keeping the full message history verbatim in the prompt would take up too much space.
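A minimal sketch of that idea, reusing the same openai client as in the question (the summarize_history and ask helpers, the summarization prompt, and the choice to keep the last four turns verbatim are all illustrative assumptions, not part of any library):

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."

def summarize_history(messages):
    # Ask the model to condense earlier turns into a short summary.
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize this conversation in a few sentences, keeping any facts the assistant will need later."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def ask(history, summary, new_user_message):
    # Send the running summary plus only the most recent turns instead of the full history.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if summary:
        messages.append({"role": "system", "content": "Summary of the conversation so far: " + summary})
    messages.extend(history[-4:])  # keep only the last few turns verbatim
    messages.append({"role": "user", "content": new_user_message})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return reply.choices[0].message.content

This still resubmits something on every call - the chat completions endpoint has no server-side session - but the summary keeps the prompt size roughly constant instead of growing with every turn.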
In a conversational setting, if you want coherence and you want to give the OpenAI model context, you have to send some form of conversation history. That history is what allows the model to understand the ongoing conversation and generate contextually relevant replies.
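At its simplest, without any summarization, that just means keeping one list, appending each user message and each assistant reply, and resubmitting the whole list on every call - exactly the pattern in the question. A minimal sketch, reusing the client from above (the send helper is only illustrative):

history = [
    {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
]

def send(user_message):
    # Append the user message, call the API with the whole history, and record the reply.
    history.append({"role": "user", "content": user_message})
    completion = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    reply = completion.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(send("Compose a poem that explains the concept of recursion in programming."))
print(send("Can you elaborate more on how recursion can lead to infinite loops if not properly handled?"))

Once the history grows past a comfortable length, you can swap in the summarization step sketched above so that only the summary plus the most recent turns are sent each time.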