# InstructorLite.Adapters.Llamacpp (instructor_lite v0.3.0)

llama.cpp HTTP server adapter.

This adapter is implemented using the llama.cpp-specific completion endpoint.
## Params

The `params` argument should be shaped as a completion request body.
## Example

```elixir
InstructorLite.instruct(
  %{prompt: "John is 25yo", temperature: 0.8},
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.Llamacpp,
  adapter_context: [url: "http://localhost:8000/completion"]
)
#=> {:ok, %{name: "John", age: 25}}
```
## Summary

### Functions

- Updates `params` with prompt based on `json_schema` and `notes`.
- Parse API response.
- Updates `params` with prompt for retrying a request.
- Make request to llamacpp HTTP server.
## Functions

Updates `params` with prompt based on `json_schema` and `notes`.

It uses the `json_schema` and `system_prompt` request parameters.
Parse API response.

Can return:

- `{:ok, parsed_json}` on success.
- `{:error, :unexpected_response, response}` if the response is of unexpected shape.
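To illustrate the two return shapes, here is a minimal sketch of such a parsing step. It assumes the llama.cpp server replies with a JSON body whose `"content"` field holds the model's JSON output as a string, and that `Jason` is available for decoding; the module name `ParseSketch` and the response key are assumptions for illustration, not the adapter's actual internals.

```elixir
defmodule ParseSketch do
  # Assumed response shape: %{"content" => "...json string..."}
  def parse_response(%{"content" => json}) do
    case Jason.decode(json) do
      # Model produced valid JSON matching the schema
      {:ok, parsed} -> {:ok, parsed}
      # Model output was not valid JSON
      {:error, _} -> {:error, :unexpected_response, json}
    end
  end

  # Anything without a "content" key is an unexpected shape
  def parse_response(other), do: {:error, :unexpected_response, other}
end
```

A caller can then pattern match on the `{:ok, ...}` and `{:error, :unexpected_response, ...}` tuples to decide whether to retry.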
Updates `params` with prompt for retrying a request.
Make request to llamacpp HTTP server.

### Options

- `:http_client` (`atom/0`) - Any module that follows the `Req.post/2` interface. The default value is `Req`.
- `:http_options` (`keyword/0`) - Options passed to `http_client.post/2`. The default value is `[receive_timeout: 60000]`.
- `:url` (`String.t/0`) - Required. API endpoint to use, for example `http://localhost:8000/completion`.
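Putting the options together, the call below extends the earlier example by passing `:http_options` through `adapter_context` alongside the required `:url`. The longer `receive_timeout` value is an assumption for illustration (useful for slow local models); the option names themselves come from the list above.

```elixir
# Sketch: same call shape as the Example section, with an extended
# HTTP timeout forwarded to the underlying Req.post/2 call.
InstructorLite.instruct(
  %{prompt: "John is 25yo", temperature: 0.8},
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.Llamacpp,
  adapter_context: [
    url: "http://localhost:8000/completion",
    # assumed value; default is [receive_timeout: 60000]
    http_options: [receive_timeout: 120_000]
  ]
)
```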