# `Feline.Services.OpenAI.StreamingLLM`
[🔗](https://github.com/dimamik/feline/blob/main/lib/feline/services/openai/streaming_llm.ex#L1)

OpenAI LLM service with streaming support over server-sent events (SSE). It spawns a task per request that pushes `LLMTextFrame` tokens to the caller as they arrive, and supports interruption by killing the in-flight task.
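The task-per-request pattern described above can be sketched in plain Elixir. This is an illustrative sketch only: the module, function, and message names below are assumptions, not the library's actual API.

```elixir
defmodule StreamingSketch do
  # Hypothetical sketch of the pattern: spawn a task per request that
  # sends one message per token to the caller as tokens arrive, then a
  # done marker. In the real service the tokens would come from an SSE
  # response rather than a local list.
  def start_stream(tokens, caller \\ self()) do
    Task.async(fn ->
      Enum.each(tokens, fn token ->
        send(caller, {:llm_text_frame, token})
      end)

      send(caller, :llm_stream_done)
    end)
  end

  # Interruption: kill the in-flight task so no further tokens arrive.
  def interrupt(%Task{} = task), do: Task.shutdown(task, :brutal_kill)
end
```

A caller would match on `{:llm_text_frame, token}` messages in its `receive` loop and call `interrupt/1` when the user barges in mid-response.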

# `child_spec`

---

*Consult [api-reference.md](api-reference.md) for complete listing*
