# `OpenrouterSdk.Api.Speech`
[🔗](https://github.com/zmzlois/openrouter_sdk/blob/v0.1.0/lib/openrouter_sdk/api/speech.ex#L1)

`POST /audio/speech` — text-to-speech.

    {:ok, mp3_binary} = OpenrouterSdk.Api.Speech.create(%{
      model: "openai/tts-1",
      input: "hello there",
      voice: "alloy",
      response_format: "mp3"
    })

    File.write!("hello.mp3", mp3_binary)

The success value is the raw audio bytes; this endpoint does not decode JSON.

# `create`

```elixir
@spec create(
  map(),
  keyword()
) :: {:ok, binary()} | {:error, OpenrouterSdk.Error.t()}
```
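Because the error branch returns an `OpenrouterSdk.Error` struct rather than raising, callers typically pattern-match on both tuples. A minimal sketch (the struct fields shown are illustrative; check the `OpenrouterSdk.Error` docs for the actual shape):

```elixir
# Hedged example: handle both the success and error branches of create/2.
# Requires a configured API key; the error-struct fields are assumptions.
case OpenrouterSdk.Api.Speech.create(%{
       model: "openai/tts-1",
       input: "hello there",
       voice: "alloy",
       response_format: "mp3"
     }) do
  {:ok, audio} ->
    # audio is the raw MP3 binary, ready to write to disk
    File.write!("hello.mp3", audio)

  {:error, %OpenrouterSdk.Error{} = error} ->
    # surface the failure to the caller instead of crashing
    IO.warn("speech synthesis failed: #{inspect(error)}")
end
```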

# `create_stream`

```elixir
@spec create_stream(
  map(),
  keyword()
) ::
  {:ok, Enumerable.t() | reference() | term()}
  | {:error, OpenrouterSdk.Error.t()}
```

Streams the audio response as an `Enumerable` of byte chunks. Consumers
who want delivery to a pid or a fun pass `:into`, exactly as in the chat
streaming API.
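Since each element of the stream is a binary chunk, the audio can be piped straight to disk without buffering the whole file in memory. A minimal sketch, assuming a configured API key (the option list here is illustrative):

```elixir
# Hedged example: stream TTS audio to a file chunk by chunk.
{:ok, stream} =
  OpenrouterSdk.Api.Speech.create_stream(%{
    model: "openai/tts-1",
    input: "hello there",
    voice: "alloy",
    response_format: "mp3"
  })

# Each chunk is a binary, so Enum.into/2 with a File.stream!
# writes the audio incrementally instead of holding it all in RAM.
stream
|> Enum.into(File.stream!("hello.mp3"))
```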

---

*Consult [api-reference.md](api-reference.md) for the complete listing.*
