OpenAI LLM service with streaming SSE support. Spawns a task per request that pushes `LLMTextFrame` tokens downstream as they arrive. Supports interruption (e.g. user barge-in) by cancelling the in-flight task.
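The task-per-request pattern with cancellation-based interruption can be sketched as below. This is a minimal, self-contained illustration: `LLMTextFrame` is a stand-in for the framework's frame type, and the token loop simulates an SSE stream locally (a real service would iterate the OpenAI streaming response instead); the class and method names are hypothetical.

```python
import asyncio


class LLMTextFrame:
    """Stand-in for the framework's frame type; carries one streamed token."""

    def __init__(self, text: str):
        self.text = text


class StreamingLLMService:
    """One asyncio task per request; cancelling the task interrupts the stream."""

    def __init__(self):
        self._task: asyncio.Task | None = None
        self.frames: list[LLMTextFrame] = []

    async def _stream(self, tokens):
        # A real implementation would iterate the OpenAI SSE response here;
        # we simulate tokens arriving over time.
        for tok in tokens:
            await asyncio.sleep(0.05)
            self.frames.append(LLMTextFrame(tok))

    def start(self, tokens) -> asyncio.Task:
        # Spawn the per-request task so the caller is not blocked.
        self._task = asyncio.create_task(self._stream(tokens))
        return self._task

    async def interrupt(self):
        # Interruption = cancel the in-flight task and await its teardown.
        if self._task and not self._task.done():
            self._task.cancel()
            try:
                await self._task
            except asyncio.CancelledError:
                pass


async def main():
    svc = StreamingLLMService()
    svc.start(["Hello", " ", "world", "!"])
    await asyncio.sleep(0.12)  # let a couple of tokens arrive
    await svc.interrupt()      # barge-in: remaining tokens are never pushed
    return [f.text for f in svc.frames]


if __name__ == "__main__":
    print(asyncio.run(main()))
```

Cancellation rather than a shared flag keeps interruption prompt: the task is stopped at its next await point, so no further frames are pushed after the user interrupts.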