
Getting Started: LLMChain

LLMChain is a simple chain that adds some functionality around a language model. It is used widely throughout LangChain, including in other chains and in agents.

An LLMChain consists of a PromptTemplate and a language model (either an LLM or a chat model).

Usage with LLMs

We can construct an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM:

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// We can construct an LLMChain from a PromptTemplate and an LLM.
const model = new OpenAI({ temperature: 0 });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chainA = new LLMChain({ llm: model, prompt });

// The result is an object with a `text` property.
const resA = await chainA.call({ product: "colorful socks" });
console.log({ resA });
// { resA: { text: '\n\nSocktastic!' } }

// Since the LLMChain is a single-input, single-output chain, we can also `run` it.
// This takes in a string and returns the `text` property.
const resA2 = await chainA.run("colorful socks");
console.log({ resA2 });
// { resA2: '\n\nSocktastic!' }
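
If the prompt takes more than one input variable, the chain is no longer single-input, so run no longer applies and call must be used with all of the variables. A minimal sketch (the {audience} variable is purely illustrative):

// A hypothetical two-variable prompt; with multiple inputs, use `call` rather than `run`.
const multiPrompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product} for {audience}?"
);
const multiChain = new LLMChain({ llm: model, prompt: multiPrompt });
const multiRes = await multiChain.call({
  product: "colorful socks",
  audience: "children",
});
console.log({ multiRes });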

Usage with Chat Models

We can also construct an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to a ChatModel:

import {
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
} from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";

// We can also construct an LLMChain from a ChatPromptTemplate and a chat model.
const chat = new ChatOpenAI({ temperature: 0 });
const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);
const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: chat,
});

const resB = await chainB.call({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});
console.log({ resB });
// { resB: { text: "J'adore la programmation." } }
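
Since the prompt variables are only filled in at call time, the same chain can be reused with different inputs. A brief sketch, translating the same text into another language:

// The same chain can be called again with different variable values.
const resB2 = await chainB.call({
  input_language: "English",
  output_language: "German",
  text: "I love programming.",
});
console.log({ resB2 });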

Usage in Streaming Mode

We can also construct an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM running in streaming mode, which streams back tokens as they are generated:

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.
const model = new OpenAI({ temperature: 0.9, streaming: true });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });

// Call the chain with the inputs and a callback for the streamed tokens
const res = await chain.call({ product: "colorful socks" }, [
  {
    handleLLMNewToken(token: string) {
      process.stdout.write(token);
    },
  },
]);
console.log({ res });
// { res: { text: '\n\nKaleidoscope Socks' } }
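
The callback does not have to write tokens to stdout; it can just as well accumulate them, for example into a string for later use. A minimal sketch using the same handleLLMNewToken callback (the buffer variable is purely illustrative):

// Collect the streamed tokens into a buffer instead of printing them.
let buffer = "";
const bufferedRes = await chain.call({ product: "colorful socks" }, [
  {
    handleLLMNewToken(token: string) {
      buffer += token;
    },
  },
]);
// `buffer` now holds the full generated text.
console.log({ buffer });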

Cancelling a Running LLMChain

We can also cancel a running LLMChain by passing an AbortSignal to the call method:

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.
const model = new OpenAI({ temperature: 0.9, streaming: true });
const prompt = PromptTemplate.fromTemplate(
  "Give me a long paragraph about {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
const controller = new AbortController();

// Call `controller.abort()` somewhere to cancel the request.
setTimeout(() => {
  controller.abort();
}, 3000);

try {
  // Call the chain with the inputs and a callback for the streamed tokens
  const res = await chain.call(
    { product: "colorful socks", signal: controller.signal },
    [
      {
        handleLLMNewToken(token: string) {
          process.stdout.write(token);
        },
      },
    ]
  );
} catch (e) {
  console.log(e);
  // Error: Cancel: canceled
}

This example demonstrates cancellation in streaming mode, but it works the same way in non-streaming mode.
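
As a brief sketch of the non-streaming case (assuming the signal is passed in the call values exactly as above):

// A non-streaming model; the AbortSignal is passed the same way.
const plainModel = new OpenAI({ temperature: 0.9 });
const plainChain = new LLMChain({ llm: plainModel, prompt });
const plainController = new AbortController();

// Abort after 3 seconds, as in the streaming example.
setTimeout(() => {
  plainController.abort();
}, 3000);

try {
  const plainRes = await plainChain.call({
    product: "colorful socks",
    signal: plainController.signal,
  });
  console.log({ plainRes });
} catch (e) {
  console.log(e);
}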