OpenAIModerationChain

You can use the OpenAIModerationChain, which takes care of evaluating the input and determining whether it violates OpenAI's Terms of Service.

If the input contains any content that violates the Terms of Service and throwError is set to true, an error is thrown (and can be caught). If throwError is set to false, the chain instead returns the string "Text was found that violates OpenAI's content policy.".

import { OpenAIModerationChain, LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";
import { OpenAI } from "langchain/llms/openai";

// Define an asynchronous function called run
export async function run() {
  // A string containing potentially offensive content from the user
  const badString = "Bad naughty words from user";

  try {
    // Create a new instance of the OpenAIModerationChain.
    // throwError is a constructor option: if set to true, the chain throws an
    // error when it detects violating content; if set to false (the default),
    // violating content makes the chain return
    // "Text was found that violates OpenAI's content policy." instead.
    const moderation = new OpenAIModerationChain({
      throwError: true,
    });

    // Send the user's input to the moderation chain and wait for the result
    const { output: badResult } = await moderation.call({
      input: badString,
    });

    // If the moderation chain does not detect violating content, it returns the
    // original input, which can then be passed on to another chain.
    const model = new OpenAI({ temperature: 0 });
    const template = "Hello, how are you today {person}?";
    const prompt = new PromptTemplate({ template, inputVariables: ["person"] });
    const chainA = new LLMChain({ llm: model, prompt });
    const resA = await chainA.call({ person: badResult });
    console.log({ resA });
  } catch (error) {
    // If an error is caught, the input contained content that violates OpenAI's Terms of Service
    console.error("Naughty words detected!");
  }
}
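
If you prefer not to rely on try/catch, a minimal sketch of the non-throwing path is shown below. It assumes throwError is left unset (treated here as defaulting to false), so the chain returns the policy string quoted above instead of throwing; the function name runNonThrowing is purely illustrative.

import { OpenAIModerationChain } from "langchain/chains";

export async function runNonThrowing() {
  // throwError is not set, so violating content is reported via the output string
  const moderation = new OpenAIModerationChain();

  const { output } = await moderation.call({
    input: "Bad naughty words from user",
  });

  if (output === "Text was found that violates OpenAI's content policy.") {
    // Handle flagged input without relying on an exception
    console.error("Naughty words detected!");
  } else {
    // Non-violating input is passed through unchanged and can be used in another chain
    console.log({ output });
  }
}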