Getting Started

Mix.install([
  {:langchain, "~> 0.3.0-rc.0"},
  {:kino, "~> 0.12.0"}
])

Section

After installing the dependencies, let's look at the simplest example to get started.

This is interactively available as a Livebook notebook named notebooks/getting_started.livemd.

Basic Example

Let's build the simplest full LLMChain example so we can see how to make a call to ChatGPT from our Elixir application.

NOTE: This assumes your OPENAI_API_KEY is already set as a secret for this notebook.

Application.put_env(:langchain, :openai_key, System.fetch_env!("LB_OPENAI_API_KEY"))
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

{:ok, _updated_chain, response} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Testing, testing!"))
  |> LLMChain.run()

response.content

Nice! We've just seen how easy it is to get access to ChatGPT from our Elixir application!

Let's build on that example and define some system context for our conversation.

Adding a System Message

When working with ChatGPT and other LLMs, the conversation works as a series of messages. The first message is the system message. This defines the context for the conversation. Here we can give the LLM some direction and impose limits on what it should do.

Let's create a system message followed by a user message.

{:ok, _updated_chain, response} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4"})}
  |> LLMChain.new!()
  |> LLMChain.add_messages([
    Message.new_system!(
      "You are an unhelpful assistant. Do not directly help or assist the user."
    ),
    Message.new_user!("What's the capital of the United States?")
  ])
  |> LLMChain.run()

response.content

Here's the answer it gave me when I ran it:

Why don't you try looking it up online? There's so much information readily available on the internet. You might even learn a few other interesting facts about the country.

What I love about this is that we can see the power of the system message. It completely changed how the LLM behaves by default.

Beyond the system message, a whole collection of messages is passed back and forth as the conversation continues. The updated_chain is part of the return value and includes the newly received response from the LLM as an assistant message.
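To see that in action, here's a minimal sketch (reusing the aliases and model setup from above) that runs the chain once, then feeds the returned chain a follow-up question so the LLM answers with the earlier exchange as context. The questions are made up for illustration.

{:ok, updated_chain, _response} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Name a planet in our solar system."))
  |> LLMChain.run()

# updated_chain now holds both the user message and the assistant's reply,
# so a follow-up question is answered in the context of the conversation.
{:ok, _chain, followup} =
  updated_chain
  |> LLMChain.add_message(Message.new_user!("Name another one."))
  |> LLMChain.run()

followup.content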

Streaming Responses

If we want to display the message as it is being generated, in the teletype-like way LLMs can, then we want to stream the response.

In this example, we'll output the responses as they are streamed back. That happens in a callback function that we provide.

The stream: true setting belongs to the %ChatOpenAI{} struct that sets up our configuration. We also pass the callbacks in with the llm to fire on_llm_new_delta as each delta arrives. We can pass the same callbacks to the chain as well to fire the on_message_processed callback after the chain assembles the deltas and processes the finished message.

alias LangChain.MessageDelta

handler = %{
  on_llm_new_delta: fn _model, %MessageDelta{} = data ->
    # we received a piece of data
    IO.write(data.content)
  end,
  on_message_processed: fn _chain, %Message{} = data ->
    # the message was assembled and is processed
    IO.puts("")
    IO.puts("")
    IO.inspect(data.content, label: "COMPLETED MESSAGE")
  end
}

{:ok, _updated_chain, response} =
  %{
    # llm config for streaming and the deltas callback
    llm: ChatOpenAI.new!(%{model: "gpt-4o", stream: true, callbacks: [handler]}),
    # chain callbacks
    callbacks: [handler]
  }
  |> LLMChain.new!()
  |> LLMChain.add_messages([
    Message.new_system!("You are a helpful assistant."),
    Message.new_user!("Write a haiku about the capital of the United States")
  ])
  |> LLMChain.run()

response.content
# streamed
# ==> Washington D.C. stands,
# ... Monuments reflect history,
# ... Power's heart expands.

# ==> COMPLETED MESSAGE: "Washington D.C. stands,\nMonuments reflect history,\nPower's heart expands."

As the delta messages are received, the on_llm_new_delta callback function fires and the received data is written out to the console.

Finally, once the full message is received, the chain's on_message_processed callback fires and the completed message is written out separately.
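If you're curious what that assembly involves, here's a hedged sketch using MessageDelta.merge_delta/2 and MessageDelta.to_message/1. The two hand-built deltas below are made up for illustration; in practice the chain accumulates the deltas streamed back from the LLM.

# Hand-built deltas standing in for what the LLM streams back (illustrative only)
deltas = [
  MessageDelta.new!(%{role: :assistant, content: "Washington "}),
  MessageDelta.new!(%{content: "D.C. stands,", status: :complete})
]

# Merge each incoming delta into the accumulated one...
merged = Enum.reduce(deltas, fn delta, acc -> MessageDelta.merge_delta(acc, delta) end)

# ...then convert the completed delta into a full %Message{}
{:ok, message} = MessageDelta.to_message(merged)
message.content
#=> "Washington D.C. stands,"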

Next Steps

With the basics covered, you're ready to start integrating an LLM into your Elixir application! Check out the other notebooks for more specific examples and other ways to use it.