Conversational Retrieval QA

The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.

It first combines the chat history (either passed in explicitly or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question-answering chain to return a response.

To create one, you will need a retriever. In the example below, we will create one from a vector store, which can be created from embeddings.

import { OpenAI } from "langchain/llms/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { BufferMemory } from "langchain/memory";
import * as fs from "fs";

export const run = async () => {
  /* Initialize the LLM to use to answer the question */
  const model = new OpenAI({});
  /* Load in the file we want to do question answering over */
  const text = fs.readFileSync("state_of_the_union.txt", "utf8");
  /* Split the text into chunks */
  const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
  const docs = await textSplitter.createDocuments([text]);
  /* Create the vectorstore */
  const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());
  /* Create the chain */
  const chain = ConversationalRetrievalQAChain.fromLLM(
    model,
    vectorStore.asRetriever(),
    {
      memory: new BufferMemory({
        memoryKey: "chat_history", // Must be set to "chat_history"
      }),
    }
  );
  /* Ask it a question */
  const question = "What did the president say about Justice Breyer?";
  const res = await chain.call({ question });
  console.log(res);
  /* Ask it a follow up question */
  const followUpRes = await chain.call({
    question: "Was that nice?",
  });
  console.log(followUpRes);
};

In the snippet above, the fromLLM method of the ConversationalRetrievalQAChain class has the following signature:

static fromLLM(
  llm: BaseLanguageModel,
  retriever: BaseRetriever,
  options?: {
    questionGeneratorChainOptions?: {
      llm?: BaseLanguageModel;
      template?: string;
    };
    qaChainOptions?: QAChainParams;
    returnSourceDocuments?: boolean;
  }
): ConversationalRetrievalQAChain

Here is an explanation of each of the properties of the options object (see the sketch after this list for one way to pass them):

  • questionGeneratorChainOptions: An object that allows you to pass a custom template and LLM to the underlying question generation chain.

    • If a template is provided, the ConversationalRetrievalQAChain will use it to generate a question from the conversation context instead of using the question provided in the question parameter.

      This can be useful if the original question does not contain enough information to retrieve a suitable answer.

    • Passing a separate LLM here allows you to use a cheaper/faster model to create the condensed question while keeping a more powerful model for the final response, which can reduce unnecessary latency.

  • qaChainOptions: Options that allow you to customize the specific QA chain used in the final step. The default is the StuffDocumentsChain, but you can customize which chain is used by passing in a type parameter.

    Passing specific options here is completely optional, but can be useful if you want to customize the way the response is presented to the end user, or if you have too many documents for the default StuffDocumentsChain.

    You can see the documentation for the usable fields here.

  • returnSourceDocuments: A boolean indicating whether the ConversationalRetrievalQAChain should return the source documents that were used to retrieve the answer. If set to true, the documents will be included in the result returned by the call() method. If not set, the default is false. This is useful for allowing users to see the sources used to generate the answer.

    • If you use this option and also pass in a memory instance, set the memory instance's inputKey and outputKey to the same values as the chain input and the final conversational chain output. These default to "question" and "text" respectively, and specify which values the memory should store.
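As a rough sketch of how these options fit together (reusing the model and vectorStore variables from the example above; the template text is a hypothetical condense-question prompt, not the library's built-in one):

// A minimal sketch of passing all three options together.
const customChain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  {
    returnSourceDocuments: true,
    questionGeneratorChainOptions: {
      // The condense-question template receives {chat_history} and {question}.
      // This wording is only an illustrative example.
      template: `Given the conversation below and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`,
    },
    qaChainOptions: {
      // Use a map_reduce chain instead of the default "stuff" chain when the
      // retrieved documents may not all fit into a single prompt.
      type: "map_reduce",
    },
  }
);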

Built-in Memory

Here is a customization example that uses a faster LLM to generate questions and a more comprehensive LLM for the final answer. It uses a built-in memory object and returns the referenced source documents.

import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { BufferMemory } from "langchain/memory";
import * as fs from "fs";

export const run = async () => {
  const text = fs.readFileSync("state_of_the_union.txt", "utf8");
  const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
  const docs = await textSplitter.createDocuments([text]);
  const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());
  const fasterModel = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
  });
  const slowerModel = new ChatOpenAI({
    modelName: "gpt-4",
  });
  const chain = ConversationalRetrievalQAChain.fromLLM(
    slowerModel,
    vectorStore.asRetriever(),
    {
      returnSourceDocuments: true,
      memory: new BufferMemory({
        memoryKey: "chat_history",
        inputKey: "question", // The key for the input to the chain
        outputKey: "text", // The key for the final conversational output of the chain
        returnMessages: true, // If using with a chat model
      }),
      questionGeneratorChainOptions: {
        llm: fasterModel,
      },
    }
  );
  /* Ask it a question */
  const question = "What did the president say about Justice Breyer?";
  const res = await chain.call({ question });
  console.log(res);

  const followUpRes = await chain.call({ question: "Was that nice?" });
  console.log(followUpRes);
};
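Because returnSourceDocuments is set to true above, the result object contains the retrieved documents alongside the answer text. As a small usage sketch (placed inside the run function above, after the first call):

// Inspect the answer and the documents the chain retrieved for it.
console.log(res.text);
console.log(res.sourceDocuments.length);
console.log(res.sourceDocuments[0].pageContent.slice(0, 100));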

Streaming

You can also use the two-LLM approach above to stream only the final response from the chain, without streaming the output of the intermediate standalone question generation step. Here's an example:

import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { BufferMemory } from "langchain/memory";
import * as fs from "fs";

export const run = async () => {
  const text = fs.readFileSync("state_of_the_union.txt", "utf8");
  const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
  const docs = await textSplitter.createDocuments([text]);
  const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());
  let streamedResponse = "";
  const streamingModel = new ChatOpenAI({
    streaming: true,
    callbacks: [
      {
        handleLLMNewToken(token) {
          streamedResponse += token;
        },
      },
    ],
  });
  const nonStreamingModel = new ChatOpenAI({});
  const chain = ConversationalRetrievalQAChain.fromLLM(
    streamingModel,
    vectorStore.asRetriever(),
    {
      returnSourceDocuments: true,
      memory: new BufferMemory({
        memoryKey: "chat_history",
        inputKey: "question", // The key for the input to the chain
        outputKey: "text", // The key for the final conversational output of the chain
        returnMessages: true, // If using with a chat model
      }),
      questionGeneratorChainOptions: {
        llm: nonStreamingModel,
      },
    }
  );
  /* Ask it a question */
  const question = "What did the president say about Justice Breyer?";
  const res = await chain.call({ question });
  console.log({ streamedResponse });
  /*
    {
      streamedResponse: 'President Biden thanked Justice Breyer for his service, and honored him as an Army veteran, Constitutional scholar and retiring Justice of the United States Supreme Court.'
    }
  */
};
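If you would rather print tokens as they arrive instead of accumulating them into a string, a minimal variation (assuming a Node.js environment, using the same handleLLMNewToken callback) could write directly to standard output:

// A sketch of a streaming callback that prints each token as it arrives.
const streamingModel = new ChatOpenAI({
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});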

Externally-Managed Memory

If you'd like to format the chat history in a specific way, you can also pass the chat history in explicitly by omitting the memory option and supplying a chat_history string directly to the chain.call method:

import { OpenAI } from "langchain/llms/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";

export const run = async () => {
  /* Initialize the LLM to use to answer the question */
  const model = new OpenAI({});
  /* Load in the file we want to do question answering over */
  const text = fs.readFileSync("state_of_the_union.txt", "utf8");
  /* Split the text into chunks */
  const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
  const docs = await textSplitter.createDocuments([text]);
  /* Create the vectorstore */
  const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());
  /* Create the chain */
  const chain = ConversationalRetrievalQAChain.fromLLM(
    model,
    vectorStore.asRetriever()
  );
  /* Ask it a question */
  const question = "What did the president say about Justice Breyer?";
  const res = await chain.call({ question, chat_history: [] });
  console.log(res);
  /* Ask it a follow up question */
  const chatHistory = question + res.text;
  const followUpRes = await chain.call({
    question: "Was that nice?",
    chat_history: chatHistory,
  });
  console.log(followUpRes);
};
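Since you control the chat_history string yourself in this mode, you can format it however you like. As a hypothetical sketch of a more readable format for the follow-up call above (the speaker labels are an arbitrary choice, not something the chain requires):

// Hypothetical formatting: label each turn so the condense-question step
// can clearly see who said what.
const chatHistory = `Human: ${question}\nAssistant: ${res.text}`;
const followUpRes = await chain.call({
  question: "Was that nice?",
  chat_history: chatHistory,
});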