# `GenAI`

# `chat`

Creates a new chat context.

# `execute`

Execute a command against the chat context.

## Notes
Used, for example, to retrieve the full report of a thread containing an optimization-loop or data-loop command.
Under normal processing, grid-search loops that are not final/accepted are omitted from the response and a linear thread is returned. Execute mode, however, returns a graph of all runs (or metadata, depending on options) together with the grid-search configuration.
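For illustration, retrieving a report might look like the following sketch. The function names come from this reference, but the pipeline style, arities, and the `:report` argument are assumptions, not the confirmed API:

```elixir
# Hypothetical usage sketch; option names and return shape are assumed.
report =
  GenAI.chat()
  |> GenAI.with_model(model_selector)
  |> GenAI.with_messages(thread_messages)
  |> GenAI.execute(:report)
```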

# `report`

Shorthand for executing the report command.

# `run`

Run inference, returning the chat completion and updated thread state.
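A minimal run might look like this sketch. The `{:ok, completion, thread}` return shape and the message map structure are assumptions based on the description above:

```elixir
# Hypothetical sketch; the return tuple and message shape are assumed.
{:ok, completion, thread} =
  GenAI.chat()
  |> GenAI.with_model(model)
  |> GenAI.with_message(%{role: :user, content: "Hello"})
  |> GenAI.run()
```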

# `stream`

Run inference in streaming mode. Interstitial messages (dynamics), if any, are sent to the stream handler via the interstitial handle.
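A streaming call might be wired up as follows. The handler module `MyApp.StreamHandler` is hypothetical, and the exact contract it must implement is not specified here:

```elixir
# Hypothetical sketch; the handler module and its callbacks are assumed.
GenAI.chat()
|> GenAI.with_model(model)
|> GenAI.with_message(%{role: :user, content: "Hello"})
|> GenAI.with_stream_handler(MyApp.StreamHandler)
|> GenAI.stream()
```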

# `with_api_key`

Set API Key or API Key constraint for inference.
@todo per-model keys will be needed for Ollama and Hugging Face.

# `with_api_org`

Set API Org or API Org constraint for inference.

# `with_message`

Append message to thread.
@note Message may be dynamic/generated.

# `with_messages`

Append messages to thread.
@note Messages may be dynamic/generated.

# `with_model`

Set model or model selector constraint for inference.

# `with_model_setting`

Set model setting for inference.

# `with_provider_setting`

Set provider setting for inference.

# `with_provider_settings`

Set provider settings for inference.

# `with_safety_setting`

Set safety setting for inference.
@note Only fully supported by Gemini; backwards compatibility can be enabled via prompting but is less reliable.
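A safety setting might be applied like this. The category and threshold atoms below are modeled on Gemini-style safety settings and are assumptions, not confirmed values from this library:

```elixir
# Hypothetical sketch; category and threshold names are assumed.
GenAI.chat()
|> GenAI.with_safety_setting(:harassment, :block_low_and_above)
|> GenAI.run()
```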

# `with_setting`

Set setting or setting selector constraint for inference. See `GenAI.Session`.

# `with_settings`

Set settings or setting selector constraints for inference.

# `with_stream_handler`

Override streaming handler module.

# `with_tool`

Set tool for inference.

# `with_tools`

Set tools for inference.
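Tools might be attached to a chat context as in the sketch below. The tool values passed in are placeholders; how a tool is actually defined (struct, map, or module) is not specified in this reference:

```elixir
# Hypothetical sketch; tool definitions are placeholders.
GenAI.chat()
|> GenAI.with_model(model)
|> GenAI.with_tool(weather_tool)
|> GenAI.with_tools([search_tool, calculator_tool])
|> GenAI.run()
```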

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
