LangChain.Chains.TextToTitleChain (LangChain v0.3.3)
A convenience chain for turning a user's prompt text into a summarized title for the anticipated conversation.
Basic Examples
A basic example that generates a title
llm = ChatOpenAI.new!(%{model: "gpt-3.5-turbo", stream: false, seed: 0})
user_text = "Let's start a new blog post about the magical properties of pineapple cookies."
%{
llm: llm,
input_text: user_text
}
|> TextToTitleChain.new!()
|> TextToTitleChain.evaluate()
#=> "Magical Properties of Pineapple Cookies Blog Post"
Examples using Title Examples
Want to get more consistent titles?
LLMs are pretty bad at following instructions about text length. However, we can provide example titles for the LLM to follow for format, style, and length. As an added benefit, we get more consistently formatted titles.
This is the same example, however now we provide other title examples to the LLM to follow for consistency.
llm = ChatOpenAI.new!(%{model: "gpt-3.5-turbo", stream: false, seed: 0})
user_text = "Let's start a new blog post about the magical properties of pineapple cookies."
%{
llm: llm,
input_text: user_text,
examples: [
"Blog Post: Making Delicious and Healthy Smoothies",
"System Email: Notifying Users of Planned Downtime"
]
}
|> TextToTitleChain.new!()
|> TextToTitleChain.evaluate()
#=> "Blog Post: Exploring the Magic of Pineapple Cookies"
Overriding the System Prompt
For more explicit control of how titles are generated, an override_system_prompt
can be provided.
%{
llm: llm,
input_text: user_text,
override_system_prompt: ~s|
You expertly summarize the User Text into a short 3 or 4 word title to represent a conversation in a positive way.|
}
|> TextToTitleChain.new!()
|> TextToTitleChain.evaluate()
Using a Fallback
If the primary LLM fails to respond successfully, one or more fallback LLMs can be specified.
%{
llm: primary_llm,
input_text: user_text
}
|> TextToTitleChain.new!()
|> TextToTitleChain.evaluate(with_fallbacks: [fallback_llm])
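The primary_llm and fallback_llm above are placeholders. A fuller sketch, assuming an OpenAI model backed up by an Anthropic one (ChatAnthropic is used here illustratively; any supported chat model works), might look like:

```elixir
# Sketch: a primary model with a different provider as a backup.
# Both models are assumed to be configured with valid API keys.
primary_llm = ChatOpenAI.new!(%{model: "gpt-3.5-turbo", stream: false})
fallback_llm = ChatAnthropic.new!(%{model: "claude-3-haiku-20240307", stream: false})

%{
  llm: primary_llm,
  input_text: user_text
}
|> TextToTitleChain.new!()
|> TextToTitleChain.evaluate(with_fallbacks: [fallback_llm])
```

The fallback is only invoked when the primary LLM fails to respond successfully, so in the common case it adds no latency.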
Summary
Functions
Runs the TextToTitleChain and evaluates the result to return the final answer.
Start a new TextToTitleChain configuration.
Start a new TextToTitleChain and return it or raise an error if invalid.
Run a simple LLMChain to summarize the user's prompt into a title for the conversation. Uses the provided model. A faster, simpler LLM without streaming is recommended.
Functions
Runs the TextToTitleChain and evaluates the result to return the final answer.
If unable to generate a title, the fallback_title is returned.
Options
:with_fallbacks - Supports the with_fallbacks: [fallback_llm] option, where one or more additional LLMs can be specified as a backup when the preferred LLM fails.
@spec new(attrs :: map()) :: {:ok, t()} | {:error, Ecto.Changeset.t()}
Start a new TextToTitleChain configuration.
{:ok, chain} = TextToTitleChain.new(%{
llm: %ChatOpenAI{model: "gpt-3.5-turbo", stream: false},
input_text: "Let's create a marketing blog post about our new product 'Fuzzy Furries'"
})
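When the attributes fail validation, new/1 returns the changeset instead of a chain. A sketch of handling both outcomes (assuming :llm is a required field, so omitting it triggers a validation error):

```elixir
# Sketch: new/1 returns {:ok, chain} or {:error, changeset}.
case TextToTitleChain.new(%{input_text: "No LLM was given"}) do
  {:ok, chain} ->
    chain

  {:error, %Ecto.Changeset{} = changeset} ->
    # Inspect the validation errors to see what was missing or invalid.
    changeset.errors
end
```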
Start a new TextToTitleChain and return it or raise an error if invalid.
chain = TextToTitleChain.new!(%{
llm: %ChatOpenAI{model: "gpt-3.5-turbo", stream: false},
input_text: "Let's create a marketing blog post about our new product 'Fuzzy Furries'"
})
@spec run(t(), Keyword.t()) :: {:ok, LangChain.Chains.LLMChain.t()} | {:error, LangChain.Chains.LLMChain.t(), LangChain.LangChainError.t()}
Run a simple LLMChain to summarize the user's prompt into a title for the conversation. Uses the provided model. A faster, simpler LLM without streaming is recommended.
If it fails to summarize to a title, it returns the default text.
new_title = TextToTitleChain.new!(%{
llm: %ChatOpenAI{model: "gpt-3.5-turbo", stream: false},
input_text: "Let's create a marketing blog post about our new product 'Fuzzy Furries'"
})
|> TextToTitleChain.run()
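Per the @spec, run/2 returns the full LLMChain in a result tuple rather than the title string itself, so the result should be pattern matched. A sketch, assuming the generated title is the content of the returned chain's last_message:

```elixir
# Sketch: extract the title from the returned chain, with a manual
# default when the model call fails. Requires `require Logger`.
case TextToTitleChain.run(title_chain) do
  {:ok, updated_chain} ->
    # The generated title is the assistant's final message content.
    updated_chain.last_message.content

  {:error, _chain, reason} ->
    Logger.error("Title generation failed: #{inspect(reason)}")
    "New Conversation"
end
```

For most uses, evaluate/2 is simpler since it unwraps the title (or the fallback_title) for you.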