Instructor.Adapters.Llamacpp (Instructor v0.0.5)

Runs against the llama.cpp server. Note that this calls the llama.cpp-specific endpoints, not the OpenAI-compatible ones.

You can read more about it here: https://github.com/ggerganov/llama.cpp/tree/master/examples/server
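Before using this adapter you need a llama.cpp server running locally. A minimal sketch of starting one — the model path and port below are placeholders, and the binary name and flags have varied across llama.cpp versions (older builds ship `server` from examples/server, newer ones `llama-server`), so check the README linked above:

```shell
# Start llama.cpp's HTTP server (binary name, flags, and model path
# are illustrative and may differ by llama.cpp version).
./server -m models/mistral-7b-instruct.Q4_K_M.gguf --port 8080
```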

Summary

Functions

Run a completion against the llama.cpp server's native endpoint, not the OpenAI-compatible one. This gives you finer control over the grammar and lets you pass other parameters specific to the LLM invocation.

Functions


chat_completion(params, config \\ nil)


Run a completion against the llama.cpp server's native endpoint, not the OpenAI-compatible one. This gives you finer control over the grammar and lets you pass other parameters specific to the LLM invocation.

You can read more about the parameters here: https://github.com/ggerganov/llama.cpp/tree/master/examples/server

Examples

iex> Instructor.chat_completion(%{
...>   model: "mistral-7b-instruct",
...>   messages: [
...>     %{role: "user", content: "Classify the following text: Hello I am a Nigerian prince and I would like to send you money!"}
...>   ],
...>   response_model: response_model,
...>   temperature: 0.5
...> })
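The second argument of `chat_completion/2` accepts a config, which presumably lets you target a server that is not on the default host/port. A hedged sketch — the `:api_url` key and the inline `response_model` map below are assumptions for illustration, not confirmed adapter API; check the adapter source for the actual config shape:

```elixir
# Hypothetical config: the :api_url key is an assumption, not taken
# from the adapter's documented options.
config = [api_url: "http://localhost:8080"]

Instructor.Adapters.Llamacpp.chat_completion(
  %{
    model: "mistral-7b-instruct",
    messages: [
      %{role: "user", content: "Classify the following text: Hello I am a Nigerian prince and I would like to send you money!"}
    ],
    # Illustrative ecto-less response model; real usage typically
    # passes an Ecto schema or type map.
    response_model: %{class: :string},
    temperature: 0.5
  },
  config
)
```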