LangChain RAG - ChatOpenAI responses are incoherent
I'm building a very simple RAG application with LangChain. The problem I'm running into is that when I ask a question with ChatOpenAI, the model doesn't generate any complete sentences; it doesn't behave like a "chatbot" the way llama2 does (see the image below). When I switch between ChatOpenAI and llama2, I only comment out the model line in the code and leave everything else unchanged.
My data is based on Open Food Facts, so my questions ask about specific ingredients.
How can I fix this so that ChatOpenAI's results look like llama2's?
代码:
import os

import pandas as pd
from fastapi import FastAPI
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langserve import add_routes

os.environ["OPENAI_API_KEY"] = "SECRET"

# model = Ollama(model="llama2")
model = ChatOpenAI(temperature=0.1)

products = pd.read_csv('./data/products.csv')
vectorstore = FAISS.from_texts(
    products['text'], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="Spin up a simple api server using Langchain's Runnable interfaces",
)
ANSWER_TEMPLATE = """Answer the question based on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(ANSWER_TEMPLATE)
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
# Adds routes to the app for using the retriever under:
# /invoke
# /batch
# /stream
add_routes(app, chain)
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
1 Answer
Setting the temperature to 0.7 and switching to the default rlm/rag-prompt template solved my problem.
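For reference, the rlm/rag-prompt template on the LangChain hub (pullable with `from langchain import hub; prompt = hub.pull("rlm/rag-prompt")`) is roughly the instruction text sketched below. The exact wording is an assumption here and the hub copy may differ slightly; the point is that its extra instructions steer the model toward short, complete sentences, unlike the bare ANSWER_TEMPLATE in the question. A minimal sketch using plain string formatting to show what the chain would send to the model:

```python
# Approximate wording of the hub's rlm/rag-prompt template (assumption: the
# hub copy may differ slightly). The added instructions are what push the
# model toward concise, complete sentences.
RAG_TEMPLATE = (
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer the question. "
    "If you don't know the answer, just say that you don't know. "
    "Use three sentences maximum and keep the answer concise.\n"
    "Question: {question} \n"
    "Context: {context} \n"
    "Answer:"
)

# Fill the placeholders the same way the chain does before calling the model.
filled = RAG_TEMPLATE.format(
    question="Which products contain palm oil?",
    context="Product A: palm oil, sugar. Product B: sunflower oil.",
)
print(filled)
```

Swapping ANSWER_TEMPLATE for a template with instructions like these (or the hub version itself), together with `ChatOpenAI(temperature=0.7)`, is the fix the answer describes.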