Modules
Defines the structure of callbacks and provides utilities for executing them.
Defines the callbacks fired by an LLMChain and LLM module.
Defines an LLMChain for performing data extraction from a body of text.
Defines an LLMChain. This is the heart of the LangChain library.
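A minimal sketch of typical usage, in the style of the library's README (exact return shapes vary by version):

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# Build a chain around a chat model, add a user message, and run it.
{:ok, updated_chain} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Testing, testing!"))
  |> LLMChain.run()

# The assistant's reply is the chain's last message.
updated_chain.last_message
```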
Behaviour for LLMChain execution modes.
Pipe-friendly building blocks for composing custom execution modes.
Execution mode that runs a single step at a time.
Execution mode that loops until a successful result.
Execution mode that loops until a specific tool is called.
Execution mode that loops while the chain needs a response.
Runs a router based on a user's initial prompt to determine which of the given categories best matches it. If there is no good match, the value "DEFAULT" is returned.
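A sketch of how routing might look, assuming the RoutingChain/PromptRoute API; the route names and input text here are made up:

```elixir
alias LangChain.Chains.RoutingChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Routing.PromptRoute

# Pick the best-matching route for the user's prompt; falls back to DEFAULT.
selected_route =
  RoutingChain.new!(%{
    llm: ChatOpenAI.new!(%{model: "gpt-4o-mini", stream: false}),
    input_text: "Write an email to our customers about the new feature",
    routes: [
      PromptRoute.new!(%{name: "marketing_email", description: "Write a marketing email"}),
      PromptRoute.new!(%{name: "blog_post", description: "Write a blog post"})
    ],
    default_route: PromptRoute.new!(%{name: "DEFAULT"})
  })
  |> RoutingChain.evaluate()
```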
When an AI conversation has many back-and-forth messages (from user to assistant to user to assistant, etc.), the number of messages and the total token count can grow large. Large token counts increase cost and latency and can run up against a model's context window.
A convenience chain for turning a user's prompt text into a summarized title for the anticipated conversation.
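For example, a sketch assuming the chain's evaluate convenience, which returns the title string:

```elixir
alias LangChain.Chains.TextToTitleChain
alias LangChain.ChatModels.ChatOpenAI

# Summarize the user's opening prompt into a short conversation title.
title =
  TextToTitleChain.new!(%{
    llm: ChatOpenAI.new!(%{model: "gpt-4o-mini"}),
    input_text: "Can you help me plan a week of vegetarian dinners?"
  })
  |> TextToTitleChain.evaluate()
```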
Module for interacting with Anthropic models.
Represents a chat model hosted behind AWS Bedrock's Mantle endpoint, the OpenAI-compatible gateway AWS introduced for third-party models such as Moonshot AI's Kimi K2 family and OpenAI's gpt-oss series.
Represents a chat model hosted by Bumblebee and accessed through an
Nx.Serving.
Module for interacting with DeepSeek models.
Parses and validates inputs for making a request to the Google AI Chat API.
Module for interacting with xAI's Grok models.
Represents the Ollama AI Chat model.
Represents the OpenAI ChatModel.
Represents the OpenAI Responses API.
Chat adapter for orq.ai Deployments API.
Represents the Perplexity Chat model.
ChatModel adapter using the req_llm library as the HTTP/LLM backend.
Parses and validates inputs for making a request to the Google Vertex AI Chat API.
Embedded schema for OpenAI reasoning configuration options.
Utility that handles interaction with the application's configuration.
Behaviour for uploading files to LLM providers.
Uploads files to Anthropic's Files API.
Uploads files to Google Gemini's File API.
Uploads files to OpenAI's Files API.
Represents the result of a file upload to an LLM provider.
Defines a "function" that can be provided to an LLM for the LLM to optionally execute and pass argument data to.
Defines a function parameter as a struct. Used to generate the expected JSONSchema data describing one or more arguments passed to a LangChain.Function.
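A sketch of defining a callable function with one declared parameter. The get_weather name and its body are hypothetical, and the {:ok, result} return convention is assumed from recent versions:

```elixir
alias LangChain.Function
alias LangChain.FunctionParam

weather_fn =
  Function.new!(%{
    name: "get_weather",
    description: "Returns the current weather for a city.",
    parameters: [
      FunctionParam.new!(%{name: "city", type: :string, required: true})
    ],
    function: fn %{"city" => city} = _args, _context ->
      # The application does the real work; the returned text goes back
      # to the model as a ToolResult.
      {:ok, "It is currently sunny in #{city}."}
    end
  })
```

The function is then attached to a chain with LLMChain.add_tools/2, letting the model decide when to call it.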
A module providing Internationalization with a gettext-based API.
Functions for working with LangChain.GeneratedImage files.
Represents a generated image where we have either the base64 encoded contents or a temporary URL to it.
Represents the ModelsLab Images API for text-to-image generation using Flux, SDXL, Stable Diffusion, and 10,000+ community fine-tuned models.
Represents the OpenAI Images API endpoint for working with DALL-E-2 and DALL-E-3.
Exception used for raising LangChain specific errors.
Models a complete Message for a chat LLM.
Represents a citation linking a span of response text to a source.
Represents the source of a citation - where the cited information came from.
Models a ContentPart. ContentParts are now used for multi-modal support in
both messages and tool results. This enables richer responses, allowing text,
images, files, and thinking blocks to be combined in a single message or tool
result.
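For instance, a multi-modal user message might combine text and an image in one message (a sketch; base64_data stands in for real image data):

```elixir
alias LangChain.Message
alias LangChain.Message.ContentPart

# base64_data is assumed to hold a base64-encoded JPEG.
message =
  Message.new_user!([
    ContentPart.text!("What is in this image?"),
    ContentPart.image!(base64_data, media: :jpg)
  ])
```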
Represents an LLM's request to use a tool. It specifies the tool to execute and may provide arguments for the tool to use.
Represents the result of running a requested tool. The LLM requests a tool use through a ToolCall. A ToolResult returns the answer or result from the application back to the AI.
Models a "delta" message from a chat LLM. A delta is a small chunk, or piece of a much larger complete message. A series of deltas are used to construct the complete message.
A built-in Message processor that processes a received Message for JSON contents.
Represents built-in tools available from AI/LLM services that can be used within the LangChain framework.
Enables defining a prompt, optionally as a template, but delaying the final building of it until a later time when input values are substituted in.
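Templates use EEx-style substitution. For example:

```elixir
alias LangChain.PromptTemplate

template =
  PromptTemplate.from_template!("Suggest one name for a company that makes <%= @product %>.")

PromptTemplate.format(template, %{product: "colorful socks"})
#=> "Suggest one name for a company that makes colorful socks."
```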
Defines a route or direction a prompting interaction with an LLM can take.
Telemetry events for LangChain.
The CharacterTextSplitter is a length-based text splitter that divides text on a specified separator character. This splitter provides consistent chunk sizes. It operates by splitting on the separator and then merging the pieces back into chunks of the configured size, with optional overlap between chunks.
Lists of separators for programming and markdown languages. Useful with LangChain.TextSplitter.RecursiveCharacterTextSplitter.
The RecursiveCharacterTextSplitter is the recommended splitter for generic text. It splits the text based on a list of characters, trying each one in sequence until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""].
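A sketch of splitting a document, assuming the splitter's new!/1 and split_text/2 functions; long_text stands in for the document being chunked:

```elixir
alias LangChain.TextSplitter.RecursiveCharacterTextSplitter

# Target ~500-character chunks with 50 characters of overlap.
splitter = RecursiveCharacterTextSplitter.new!(%{chunk_size: 500, chunk_overlap: 50})

chunks = RecursiveCharacterTextSplitter.split_text(splitter, long_text)
```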
Contains token usage information returned from an LLM.
Defines a Calculator tool for performing basic math calculations.
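A sketch of wiring the tool into a chain, using the :while_needs_response run mode so the chain loops through the tool call and the final answer:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message
alias LangChain.Tools.Calculator

{:ok, updated_chain} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o-mini"})}
  |> LLMChain.new!()
  |> LLMChain.add_tools(Calculator.new!())
  |> LLMChain.add_message(Message.new_user!("What is 100 + 300 - 200?"))
  |> LLMChain.run(mode: :while_needs_response)
```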
Defines an OpenAI Deep Research tool for conducting comprehensive research on complex topics.
Represents a Deep Research request sent to the OpenAI API.
Represents the final result of a completed Deep Research request.
Represents the status of a Deep Research request.
HTTP client for OpenAI Deep Research API.
Captures the structured sequence of messages and tool calls produced during
an LLMChain run for inspection, serialization, and comparison.
ExUnit assertion helpers for trajectory comparison.
Collection of helpful utilities mostly for internal use.
Decodes AWS messages in the application/vnd.amazon.eventstream content-type. Ignores the headers because on Bedrock every message carries the same content-type, event-type, and message-type headers.
Configuration for AWS Bedrock.
Module to help when working with the results of a chain.
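For example, extracting the final assistant response as a string, assuming updated_chain came from a prior LLMChain.run/2 (a bang variant that raises also exists):

```elixir
alias LangChain.Utils.ChainResult

{:ok, answer} = ChainResult.to_string(updated_chain)
```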
Functions for converting messages into the various commonly used chat template formats.
A generic WebSocket client GenServer built on Mint.WebSocket.