PostHog.Integrations.LLMAnalytics.Req (posthog v2.4.0)
Req plugin that automatically captures `$ai_generation` events for LLMs.
It tries to extract as much information as possible from both requests and responses. Currently, it works best with the following APIs:
- OpenAI (Responses)
- OpenAI (Chat Completions)
- Anthropic (Create Message)
- Gemini (generateContent)
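For instance, an Anthropic Messages call can be instrumented the same way as the OpenAI examples below. This is a hedged sketch: the `x-api-key` and `anthropic-version` headers follow Anthropic's API requirements, while the model name and environment variable are placeholders you would substitute with your own:

```elixir
client =
  Req.new(
    base_url: "https://api.anthropic.com",
    headers: [
      # Anthropic authenticates with an x-api-key header and requires a version header
      {"x-api-key", System.fetch_env!("ANTHROPIC_API_KEY")},
      {"anthropic-version", "2023-06-01"}
    ]
  )
  |> PostHog.Integrations.LLMAnalytics.Req.attach()

# The plugin inspects the request and response to build the $ai_generation event
Req.post!(client,
  url: "/v1/messages",
  json: %{
    model: "claude-sonnet-4-5",
    max_tokens: 1024,
    messages: [%{role: "user", content: "Who are you?"}]
  }
)
```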
Usage
Just add it to your Req client before making a call:
```elixir
Req.new()
|> PostHog.Integrations.LLMAnalytics.Req.attach()
|> Req.post!(url: "https://api.openai.com/v1/responses", json: %{model: "gpt-5-mini", input: "Who are you?"})
```

Optionally, start a new span beforehand to add additional properties to the event:
```elixir
PostHog.LLMAnalytics.start_span(%{"$ai_span_name": "OpenAI Request"})
Req.post!(client, url: "https://api.openai.com/v1/responses", json: ...)
```

Integrating with InstructorLite
InstructorLite's built-in adapters allow customizing the HTTP client via the `http_client` option.
Define a wrapper module like this:
```elixir
defmodule ReqWithLLMAnalytics do
  def post(url, opts) do
    Req.new(url: url)
    |> PostHog.Integrations.LLMAnalytics.Req.attach()
    |> Req.post(opts)
  end
end
```

Then pass this module as the `http_client` option in `adapter_context`.
Optionally, start a span beforehand:
```elixir
PostHog.LLMAnalytics.start_span(%{"$ai_span_name": "LLM Call"})

InstructorLite.instruct(
  %{
    input: [%{role: "user", content: "John is 25yo"}],
    model: "gpt-4o-mini"
  },
  response_model: %{name: :string, age: :integer},
  adapter: InstructorLite.Adapters.OpenAI,
  adapter_context: [
    api_key: "my-secret-key",
    http_client: ReqWithLLMAnalytics
  ]
)
#=> {:ok, %{name: "John", age: 25}}
```
Summary
Functions
Attach plugin to a `Req.Request` struct.
Functions
Attach plugin to a `Req.Request` struct.

The plugin registers the `posthog_supervisor` option. Use it if you run a custom PostHog instance.
Examples
```elixir
iex> Req.new() |> PostHog.Integrations.LLMAnalytics.Req.attach()
iex> Req.new() |> PostHog.Integrations.LLMAnalytics.Req.attach(posthog_supervisor: MyPostHog)
```