
Quickstart, using Chat Models

Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than exposing a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

The chat model API is fairly new, so we are still figuring out the right abstractions.

Installation and Setup

To get started, follow the installation instructions to install LangChain.

Getting Started

This section covers how to get started with chat models. The interface is based around messages rather than raw text.

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanChatMessage, SystemChatMessage } from "langchain/schema";

const chat = new ChatOpenAI({ temperature: 0 });
```

Here we create a chat model using the API key stored in the environment variable OPENAI_API_KEY (or AZURE_OPENAI_API_KEY). We'll be calling this chat model throughout this section.

Note that if you are using Azure OpenAI, make sure to also set the environment variables AZURE_OPENAI_API_INSTANCE_NAME, AZURE_OPENAI_API_DEPLOYMENT_NAME and AZURE_OPENAI_API_VERSION.
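If you prefer not to rely on environment variables, the key can also be passed to the constructor directly. A minimal sketch, assuming the openAIApiKey constructor field (the field name may differ across versions):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

// A minimal sketch: pass the API key explicitly instead of relying on
// the OPENAI_API_KEY environment variable. The `openAIApiKey` field name
// is an assumption and may differ across versions.
const chatWithExplicitKey = new ChatOpenAI({
  openAIApiKey: process.env.MY_OPENAI_KEY, // hypothetical variable name
  temperature: 0,
});
```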

Chat Models: Messages In, Messages Out

You can get chat completions by passing one or more messages to the chat model. The response will also be a message. The types of messages currently supported in LangChain are AIChatMessage, HumanChatMessage, SystemChatMessage and a generic ChatMessage -- ChatMessage takes in an arbitrary role parameter, which we won't be using here. Most of the time, you'll just be dealing with HumanChatMessage, AIChatMessage and SystemChatMessage.

```typescript
const response = await chat.call([
  new HumanChatMessage(
    "Translate this sentence from English to French. I love programming."
  ),
]);

console.log(response);
```

```
AIChatMessage { text: "J'aime programmer." }
```
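Since the response is itself a message, you can read the generated text straight from its text field:

```typescript
// The response is an AIChatMessage, so the generated text
// is available on its `text` field.
console.log(response.text);
// "J'aime programmer."
```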

Multiple Messages

OpenAI's chat model (currently gpt-3.5-turbo and gpt-4, plus gpt-4-32k via Azure OpenAI) supports multiple messages as input. See [here](https://platform.openai.com/docs/guides/chat/chat-vs-completions) for more information. Here is an example of sending a system and user message to the chat model:

Note that if you're using Azure OpenAI, make sure to change the deployment name to the deployment for the model you've chosen.

```typescript
const responseB = await chat.call([
  new SystemChatMessage(
    "You are a helpful assistant that translates English to French."
  ),
  new HumanChatMessage("Translate: I love programming."),
]);

console.log(responseB);
```

```
AIChatMessage { text: "J'aime programmer." }
```

Multiple Completions

You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter.

```typescript
const responseC = await chat.generate([
  [
    new SystemChatMessage(
      "You are a helpful assistant that translates English to French."
    ),
    new HumanChatMessage(
      "Translate this sentence from English to French. I love programming."
    ),
  ],
  [
    new SystemChatMessage(
      "You are a helpful assistant that translates English to French."
    ),
    new HumanChatMessage(
      "Translate this sentence from English to French. I love artificial intelligence."
    ),
  ],
]);

console.log(responseC);
```

```
{
  generations: [
    [
      {
        text: "J'aime programmer.",
        message: AIChatMessage { text: "J'aime programmer." },
      }
    ],
    [
      {
        text: "J'aime l'intelligence artificielle.",
        message: AIChatMessage { text: "J'aime l'intelligence artificielle." }
      }
    ]
  ]
}
```
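The LLMResult may also carry provider-specific metadata in its llmOutput field; for the OpenAI integration this is where token usage is reported. A hedged sketch (the exact field names may vary by version):

```typescript
// llmOutput is optional and provider-specific. For OpenAI it is
// expected to contain token usage counts; the field names here
// are assumptions and may vary by version.
console.log(responseC.llmOutput?.tokenUsage);
// e.g. { completionTokens: ..., promptTokens: ..., totalTokens: ... }
```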

Chat Prompt Templates: Manage Prompts for Chat Models

You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's formatPromptValue -- this returns a PromptValue.

You can convert the PromptValue to a string or to a list of message objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.

Continuing with the previous example:

```typescript
import {
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
  ChatPromptTemplate,
} from "langchain/prompts";
```



First, we create a reusable template:

```typescript
const translationPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);
```

Then, we can use the template to generate a response:

```typescript
const responseA = await chat.generatePrompt([
  await translationPrompt.formatPromptValue({
    input_language: "English",
    output_language: "French",
    text: "I love programming.",
  }),
]);

console.log(responseA);
```

```
{
  generations: [
    [
      {
        text: "J'aime programmer.",
        message: AIChatMessage { text: "J'aime programmer." }
      }
    ]
  ]
}
```
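As mentioned above, the PromptValue itself can be converted into either input form. A minimal sketch, assuming toString and toChatMessages are the conversion methods exposed by PromptValue:

```typescript
// Format the prompt once, then convert it both ways.
// `toString()` and `toChatMessages()` are assumed conversion methods.
const promptValue = await translationPrompt.formatPromptValue({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});

const asString = promptValue.toString(); // input for a plain LLM
const asMessages = promptValue.toChatMessages(); // input for a chat model
```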

Model + Prompt = LLMChain

This pattern of asking the model to complete a formatted prompt is so common that we introduce the next piece of the puzzle: LLMChain.

```typescript
import { LLMChain } from "langchain/chains";

const chain = new LLMChain({
  prompt: translationPrompt,
  llm: chat,
});
```

Then you can call the chain:

```typescript
const responseB = await chain.call({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});

console.log(responseB);
```

```
{ text: "J'aime programmer." }
```

Agents: Dynamically Run Chains Based on User Input

Finally, we introduce Tools and Agents, which extend the model with other abilities, such as search or a calculator.

A tool is a function that takes a string (such as a search query) and returns a string (such as a search result). Tools also have a name and a description, which are used by the chat model to identify which tool it should call.

```typescript
// a simplified interface
class Tool {
  name: string;
  description: string;
  call(arg: string): Promise<string>;
}
```
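As an illustration, one way to define your own tool is with DynamicTool, which wraps a plain async function. A minimal sketch, assuming DynamicTool is available from langchain/tools in your version:

```typescript
import { DynamicTool } from "langchain/tools";

// A minimal sketch of a custom tool: a name, a description, and an
// async string-to-string function for the agent to call.
const lengthTool = new DynamicTool({
  name: "string-length",
  description: "Returns the length of the input string.",
  func: async (input: string) => String(input.length),
});
```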

An agent is a stateless wrapper around an agent prompt chain (such as MRKL), which takes care of formatting the tools into the prompt and parsing the responses obtained from the chat model.

```typescript
interface AgentStep {
  action: AgentAction;
  observation: string;
}

interface AgentAction {
  tool: string; // Tool.name
  toolInput: string; // Tool.call argument
}

interface AgentFinish {
  returnValues: object;
}

class Agent {
  plan(steps: AgentStep[], inputs: object): Promise<AgentAction | AgentFinish>;
}
```

To make agents more powerful we need to make them iterative, i.e. call the model multiple times until they arrive at the final answer. That's the job of the AgentExecutor.

```typescript
class AgentExecutor {
  // a simplified implementation; `run` must be async because it awaits `plan`
  async run(inputs: object) {
    const steps = [];
    while (true) {
      const step = await this.agent.plan(steps, inputs);
      if (step instanceof AgentFinish) {
        return step.returnValues;
      }
      steps.push(step);
    }
  }
}
```

And finally, we can use the AgentExecutor to run an agent:

```typescript
import { ChatAgent, AgentExecutor } from "langchain/agents";
import { SerpAPI } from "langchain/tools";

// Define the list of tools the agent can use
const tools = [
  new SerpAPI(process.env.SERPAPI_API_KEY, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
];
// Create the agent from the chat model and the tools
const agent = ChatAgent.fromLLMAndTools(new ChatOpenAI(), tools);
// Create an executor, which calls the agent until an answer is found
const executor = AgentExecutor.fromAgentAndTools({ agent, tools });
```



```typescript
const responseG = await executor.run(
  "How many people live in canada as of 2023?"
);

console.log(responseG);
```

```
38,626,704.
```

Memory: Add State to Chains and Agents

You can also use chains to store state. This is useful for applications like chatbots, where you want to keep track of the conversation history. MessagesPlaceholder is a special prompt template that gets replaced with the messages passed in on each call.

```typescript
import { ConversationChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";
import { MessagesPlaceholder } from "langchain/prompts";

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."
  ),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);

const chain = new ConversationChain({
  memory: new BufferMemory({ returnMessages: true, memoryKey: "history" }),
  prompt: chatPrompt,
  llm: chat,
});
```

The chain will internally accumulate the messages sent to the model and the ones received as output. Then it will inject those messages into the prompt on the next call, so you can call the chain several times and it will remember the previous messages.

```typescript
const responseH = await chain.call({
  input: "hi from London, how are you doing today",
});

console.log(responseH);
```

```
{
  response: "Hello! As an AI language model, I don't have feelings, but I'm functioning properly and ready to assist you with any questions or tasks you may have. How can I help you today?"
}
```

```typescript
const responseI = await chain.call({
  input: "Do you know where I am?",
});

console.log(responseI);
```

```
{
  response: "Yes, you mentioned that you are from London. However, as an AI language model, I don't have access to your current location unless you provide me with that information."
}
```
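If you want to inspect what the chain has accumulated so far, here is a sketch assuming BufferMemory's loadMemoryVariables method (the exact shape of the returned value may vary by version):

```typescript
// Read back the accumulated conversation history from memory.
// `loadMemoryVariables` and the `history` key are assumptions here.
const memoryVariables = await chain.memory.loadMemoryVariables({});
console.log(memoryVariables.history); // the stored messages so far
```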

Streaming

You can also use the streaming API to get words streamed back to you as they are generated. This is useful for e.g. chatbots, where you want to show the user what is being generated as it is generated. Note that OpenAI does not currently support tokenUsage reporting while streaming is in progress.

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanChatMessage } from "langchain/schema";

const chat = new ChatOpenAI({
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});

await chat.call([
  new HumanChatMessage("Write me a song about sparkling water."),
]);
/*
Verse 1:
Bubbles rise, crisp and clear
Refreshing taste that brings us cheer
Sparkling water, so light and pure
Quenches our thirst, it's always secure

Chorus:
Sparkling water, oh how we love
Its fizzy bubbles and grace above
It's the perfect drink, anytime, anyplace
Refreshing as it gives us a taste

Verse 2:
From morning brunch to evening feast
It's the perfect drink for a treat
A sip of it brings a smile so bright
Our thirst is quenched in just one sip so light
...
*/
```