API Reference langchainex v0.2.3

Modules

TheAccountant is responsible for storing and retrieving usage and pricing reports.

An Anchor is a point at the end of a chain where the AI confirms with (hopefully) a human that the result of the chain is 'in alignment' before moving on. AI programming differs from traditional programming in that it is inherently hard to predict what an AI will actually do at run-time. Best practice is therefore to anchor your chains so that at run-time a human (or at least a traditional hard-coded computer program) can confirm the AI isn't doing something harmful or WOPRish.

An anchor for getting confirmation at run-time from the command line.
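
To illustrate the pattern, here is a plain-Elixir sketch of a command-line anchor. The module and function names are hypothetical, not the library's actual API; it only shows the idea of halting until a human approves.

```elixir
# Illustrative only: MyApp.CliAnchor and confirm/1 are hypothetical names,
# not part of langchainex. The pattern: pause the chain until a human says yes.
defmodule MyApp.CliAnchor do
  def confirm(result) do
    answer = IO.gets("Chain result: #{inspect(result)}. Accept? [y/N] ")

    case answer |> to_string() |> String.trim() |> String.downcase() do
      "y" -> {:ok, result}
      _ -> {:error, :rejected_by_human}
    end
  end
end
```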

A chain of ChainLinks to be processed in order, usually ending in an anchor for user confirmation.

An individual chain_link in a language chain.

A Chat is a list of multiple PromptTemplates along with all their input variables.

An Effector is used by a daemon to impact the outside world. By default, an Effector should ask for confirmation before actually impacting anything. Daemons are AIs and should not be trusted to do the right thing without supervision.

An OpenAI implementation of the LangChain.EmbedderProtocol. Use this for embedding your docs for OpenAI models by specifying the model_name in your LLM.

Language Model GenServer

A "portal" is a function that transports knowledge between the semantic and BEAM knowledge domains A standard langhchain 'prompt' is a 'portal' into the semantic domain Likewise a template that an LLM can use to fill in Elixir code is a 'portal' into the BEAM knowledge domain

A PromptTemplate is just a normal string template: you can pass it a set of values and it will interpolate them. You can also partially evaluate the template by calling the partial/2 function; input_variables will contain the list of variables that still need to be specified to complete the template.
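
A minimal sketch of the idea. Only partial/2 and the input_variables field are named above, so the struct shape, template syntax, and argument shapes here are assumptions:

```elixir
# Sketch: struct fields and template syntax are assumptions; partial/2 and
# input_variables come from the summary above.
alias LangChain.PromptTemplate

template = %PromptTemplate{
  template: "Translate <text> into <language>",
  input_variables: [:text, :language]
}

# Partially evaluate: pin :language, leaving :text still to be supplied.
partial = PromptTemplate.partial(template, %{language: "French"})
# partial.input_variables would now be [:text]
```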

Input Processing for the Bumblebee models

Use this when you want to use the Bumblebee API to embed documents. Embedding transforms documents into vectors of numbers that you can then feed into a neural network. The embedding provider must match the input size of the model and use the same encoding scheme.

A module for interacting with Bumblebee language models. Unlike the other providers, Bumblebee runs models on your local hardware; see https://hexdocs.pm/bumblebee/Bumblebee.html
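
For context, this is roughly how recent Bumblebee versions serve a model on local hardware (standard Bumblebee API; how langchainex wires it in is not shown here):

```elixir
# Plain Bumblebee usage, no langchainex involved: load a model from the
# Hugging Face hub and run text generation locally via Nx.Serving.
{:ok, model_info} = Bumblebee.load_model({:hf, "gpt2"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "gpt2"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "gpt2"})

serving = Bumblebee.Text.generation(model_info, tokenizer, generation_config)
Nx.Serving.run(serving, "The BEAM is")
```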

Cohere is a paid provider of ML models: https://cohere.ai/docs

A module for interacting with Cohere's API. Cohere is a host for ML models that generate language based on given prompts.

Goose AI is a paid provider of ML models: https://goose.ai/docs/api/engines

A module for interacting with GooseAi's API. GooseAi is a host for ML models that take in any data and return any data; it can be used for LLMs, image generation, image parsing, sound, etc.

Shared configuration for Huggingface API calls.

Audio models with Huggingface.

Use this when you want to use the Huggingface API to embed documents. Embedding transforms documents into vectors of numbers that you can then feed into a neural network. The embedding provider must match the input size of the model and use the same encoding scheme. Use Sentence Transformer models for this.

Image models with Huggingface.

A module for interacting with Huggingface's API. Huggingface is a host for ML models that take in any data and return any data; it can be used for LLMs, image generation, image parsing, sound, etc.

NLP Cloud Provider (https://nlpcloud.com/). This module is predominantly used for internal API handling.

Language model implementation for NLP Cloud.

OpenAI results return a body that will contain: "usage": {"prompt_tokens": 56, "completion_tokens": 31, "total_tokens": 87}
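
A small sketch of reading that usage block from a response body. The keys come from the example above; decoding with Jason is an assumption about the HTTP client setup:

```elixir
# Keys taken from the example body above; Jason.decode!/1 is an assumed
# JSON-decoding step for the raw response body.
body = ~s({"usage": {"prompt_tokens": 56, "completion_tokens": 31, "total_tokens": 87}})

%{"usage" => %{"prompt_tokens" => prompt, "completion_tokens" => completion}} =
  Jason.decode!(body)

prompt + completion
# => 87
```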

A module for interacting with OpenAI's main language models

Replicate's pricing structure is based on what hardware you use and how long you use it; more expensive hardware runs faster.

A module for interacting with Replicate's API. Replicate is a host for ML models that take in any data and return any data; it can be used for LLMs, image generation, image parsing, sound, etc.

A filesystem implementation of the LangChain.Retriever protocol.

Gitex is a wrapper around the Elixir Git library.

ScrapeChain is a wrapper around a special type of Chain that requires 'input_schema' and 'input_text' in its input_variables and combines it with an output_parser. Once you define that chain, you can have the chain 'scrape' a text and return the formatted output in virtually any form.

A Scraper is a GenServer that scrapes natural language text and tries to turn it into some kind of structured data. It comes with a built-in "default_scraper" that can generally extract data from text according to the schema you give it.
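
A hypothetical usage sketch. Only the GenServer nature and the "default_scraper" name come from the summary above; the function names and argument shapes here are assumptions:

```elixir
# Hypothetical API: start_link/1 and scrape/3 are assumed names, not
# confirmed by the docs above.
{:ok, pid} = LangChain.Scraper.start_link([])

text = "Sarah is 31 years old and lives in Lisbon."
schema = ~s({"name": "string", "age": "number", "city": "string"})

{:ok, data} = LangChain.Scraper.scrape(pid, text, input_schema: schema)
```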

A simple splitter that splits strings on a special character, like ' ' or '.'.
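
The behavior is essentially that of String.split/2; a plain-Elixir equivalent (the module's own function names are not shown here):

```elixir
# Equivalent plain-Elixir behavior: split on a chosen separator character.
String.split("one two three", " ")
# => ["one", "two", "three"]

String.split("a.b.c", ".")
# => ["a", "b", "c"]
```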

VectorStore GenServer; provides all the services for storing and searching vectors.

A Pinecone implementation of the LangChain.VectorStore.Provider protocol. The 'config' argument just needs to be a struct with the config_name (i.e. :pinecone) for the specific db you want to use; this implementation will grab that config from config.exs for you. You can have multiple Pinecone configs in config.exs: just make multiple implementations of this module, each with a different config_name.
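
A config.exs sketch of the pattern described above. The :pinecone config_name comes from the summary, while the individual keys are illustrative assumptions:

```elixir
# config/config.exs -- the keys below (:api_key, :index_name, :environment)
# are assumptions; only the :pinecone config_name is from the docs above.
import Config

config :langchainex, :pinecone,
  api_key: System.get_env("PINECONE_API_KEY"),
  index_name: "my-index",
  environment: "us-east1-gcp"
```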

Documentation for LangchainEx.

A mock implementation of the LangChain.VectorStore.Provider protocol for testing purposes.