Karn.AI (Karn v1.0.0)
Implementation for Karn
Summary
Functions
Requests AI to explain any specific module.
Sends a natural language query (query) to the AI server.
Reset context
Resets the model to the default.
Starts the AI server.
Terminates the server, printing usage before exit.
Switches the model used by the AI server.
Shows usage on a per-model basis.
View context
View state of the server
Functions
Requests AI to explain any specific module.
Parameters
module: The module to explain.
references (optional): A list of modules related to module. Defaults to [].
query (optional): A specific question about the module or its functions; if omitted, a brief explanation is given.
The user can ask follow-up questions using q/1.
NOTE: Currently the modules are not cached (on the client or the server).
NOTE: Feeding too many modules might bloat the context; you can reduce the context by calling reset_context.
Returns
The response from the AI server (content and format depend on the server implementation). The current (and default) implementation prints to IO, as this is meant to be used through IEx.
:done
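A minimal IEx sketch of asking for an explanation. The function name explain and its keyword options are assumptions inferred from the parameter list above (only q/1 and reset_context are named explicitly in these docs); the modules passed are arbitrary examples:

```elixir
# Hypothetical call shape: module first, then a keyword list of options.
iex> Karn.AI.explain(Enum, references: [Stream], query: "How does laziness differ between these?")
# The explanation is printed to IO; the call itself returns:
:done
```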
Sends a natural language query (query) to the AI server.
This is the primary function for asking the AI questions or giving it instructions. You can ask follow up questions on previous queries and explanations.
Parameters
query: The string query to send to the AI.
Returns
The response from the AI server (content and format depend on the server implementation). The default implementation prints to IO, as this is meant to be used through IEx.
:done
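A short IEx sketch of a query and a follow-up. q/1 is the arity named in these docs; the queries themselves are invented examples:

```elixir
iex> Karn.AI.q("What does GenServer.call/3 do?")
# The AI's answer is printed to IO; the call returns:
:done
iex> Karn.AI.q("Can you show a timeout example?")  # follow-up on the previous query
:done
```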
Reset context
Parameters
sys_prompt: Optional system prompt; if none is given, the default is used.
Returns
The response from the AI server (content and format depend on the server implementation). The current (and default) implementation prints to IO, as this is meant to be used through IEx.
:done
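A sketch of resetting the context, with and without a custom system prompt. The function name reset_context comes from these docs; passing sys_prompt as a positional argument is an assumption:

```elixir
iex> Karn.AI.reset_context()
:done
# Assumed: sys_prompt is the first (optional) positional argument.
iex> Karn.AI.reset_context("You are a terse Elixir expert.")
:done
```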
Resets the model to the default.
Returns
:ok if the model was reset successfully.
Starts the AI server.
Parameters
opts: A keyword list of options to pass to the server. See Karn.AI.Server.start_link/1 for more information.
Returns
{:ok, pid} if the server was started successfully; {:error, reason} otherwise.
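A sketch of starting the server. The top-level function name (start) and the option shown are assumptions; the real options belong to Karn.AI.Server.start_link/1:

```elixir
# Hypothetical option name `model`; consult Karn.AI.Server.start_link/1
# for the options the server actually accepts.
iex> {:ok, pid} = Karn.AI.start(model: "default")
```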
Terminates the server, printing usage before exit.
Returns
The response from the AI server (content and format depend on the server implementation). The current (and default) implementation prints to IO, as this is meant to be used through IEx.
Switches the model used by the AI server.
Parameters
model: The name of the model to switch to.
Returns
:ok if the model was switched successfully; {:error, :not_found} if the model is not available.
Shows usage on a per-model basis.
Returns
The response from the AI server (content and format depend on the server implementation). The current (and default) implementation prints to IO, as this is meant to be used through IEx.
:done
View context
Returns
The response from the AI server (content and format depend on the server implementation). The current (and default) implementation prints to IO, as this is meant to be used through IEx.
:done
View the state of the server.
Returns
The response from the AI server (content and format depend on the server implementation). The current (and default) implementation prints to IO, as this is meant to be used through IEx.
:done