Finch (Finch v0.22.0)
An HTTP client with a focus on performance, built on top of Mint and NimblePool.
We attempt to achieve this goal by providing efficient connection pooling strategies and avoiding copying of memory wherever possible.
Most developers will likely prefer the excellent HTTP client Req, which takes advantage of Finch's pooling and provides an extremely friendly and pleasant-to-use API.
Usage
In order to use Finch, you must start it and provide a :name. Often in your
supervision tree:
children = [
  {Finch, name: MyFinch}
]

Or, in rare cases, dynamically:

Finch.start_link(name: MyFinch)

Once you have started your instance of Finch, you are ready to start making requests:

Finch.build(:get, "https://hex.pm") |> Finch.request(MyFinch)

When using HTTP/1, Finch will parse the passed-in URL into a {scheme, host, port}
tuple, and maintain one or more connection pools for each {scheme, host, port} you
interact with.
You can also configure a pool size and count to be used for specific URLs that are
known before starting Finch. The passed URLs will be parsed into {scheme, host, port},
and the corresponding pools will be started. See Finch.start_link/1 for configuration
options.
children = [
{Finch,
name: MyConfiguredFinch,
pools: %{
:default => [size: 10, count: 2],
"https://hex.pm" => [size: 32, count: 8]
}}
]

Pools will be started for each configured {scheme, host, port} when Finch is started.
For any unconfigured {scheme, host, port}, the pool will be started the first time
it is requested, using the :default configuration. Given the pool configuration
above, each origin ({scheme, host, port}) launches 2 (:count) pool processes, so
encountering 10 separate origins would yield 20 pool processes.
For how :size and :count interact on HTTP/1, and how shards are chosen when :count is
greater than 1 (including the :pool_strategy request option), see the "Pool Configuration
Options" and "Multiple shards" sections in Finch.start_link/1.
Pool Tagging
Finch supports pool tagging, which allows you to create separate pools for the same
{scheme, host, port} combination or Unix socket. This is useful when you need different
configurations or want to isolate traffic for different purposes (e.g., API vs web requests,
tenants, JWT tokens, etc).
You can configure tagged pools using Finch.Pool.new/2:
children = [
{Finch,
name: MyTaggedFinch,
pools: %{
Finch.Pool.new("https://api.example.com") => [size: 50, count: 4],
Finch.Pool.new("https://api.example.com", tag: :web) => [size: 20, count: 2],
Finch.Pool.new("http+unix:///tmp/api.sock", tag: :api) => [size: 30, count: 2],
Finch.Pool.new("http+unix:///tmp/api.sock", tag: :web) => [size: 10, count: 1],
:default => [size: 10, count: 1]
}}
]

When making requests, you can specify which pool to use by setting the :pool_tag option:
# Uses the :api tagged pool
request = Finch.build(:get, "https://api.example.com/users", [], nil, pool_tag: :api)
Finch.request(request, MyTaggedFinch)
# Uses the :web tagged pool
request = Finch.build(:get, "https://api.example.com/users", [], nil, pool_tag: :web)
Finch.request(request, MyTaggedFinch)
# Uses :default tag (or falls back to default config)
request = Finch.build(:get, "https://api.example.com/users")
Finch.request(request, MyTaggedFinch)
# Tagged Unix socket pool
request =
Finch.build(
:get,
"http://localhost/",
[],
nil,
unix_socket: "/tmp/api.sock",
pool_tag: :api
)
Finch.request(request, MyTaggedFinch)

When making a request with a specific :pool_tag, the tag should exist in your pool
configuration. If it does not, the request falls back to the :default configuration.
This allows you to have specific configurations for tagged pools while maintaining
sensible defaults for untagged requests.
Note that pools are not terminated automatically by default. If you need to
terminate them after some idle time, use the :pool_max_idle_time option (available only for HTTP/1 pools).
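As a sketch, an HTTP/1 pool that shuts itself down after a minute of inactivity might be configured like this (the URL is illustrative):

```elixir
# Hypothetical configuration: the HTTP/1 pool for this host is terminated
# after 60 seconds without activity, and started again on the next request.
children = [
  {Finch,
   name: MyFinch,
   pools: %{
     "https://api.example.com" => [size: 10, pool_max_idle_time: 60_000]
   }}
]
```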
User-managed pools
You can start pools under your own supervision tree using Finch.Pool.child_spec/1. The Finch
instance must be started first. User-managed pools integrate with Finch.request/2,
Finch.stop_pool/2, Finch.get_pool_status/2, and Finch.find_pool/2:
children = [
{Finch, name: MyFinch},
{Finch.Pool, finch: MyFinch, pool: Finch.Pool.new("https://api.internal", tag: :api), size: 10},
{Finch.Pool, finch: MyFinch, pool: Finch.Pool.new("https://node-2.internal", tag: :node2), size: 10}
]
Supervisor.start_link(children, strategy: :one_for_one)

To add pools dynamically under Finch's internal supervisor, use Finch.start_pool/3:
Finch.start_pool(MyFinch, Finch.Pool.new("https://api.example.com", tag: :api), size: 10)

Use Finch.find_pool/2 to check if a pool exists:
case Finch.find_pool(MyFinch, Finch.Pool.new("https://api.internal", tag: :api)) do
{:ok, _pid} -> # Pool exists
:error -> # Pool not found
end

Telemetry
Finch uses Telemetry to provide instrumentation. See the Finch.Telemetry
module for details on specific events.
Logging TLS Secrets
Finch supports logging TLS secrets to a file. These can be later used in a tool such as
Wireshark to decrypt HTTPS sessions. To use this feature you must specify the file to
which the secrets should be written. If you are using TLSv1.3 you must also add
keep_secrets: true to your pool :transport_opts. For example:
{Finch,
name: MyFinch,
pools: %{
default: [conn_opts: [transport_opts: [keep_secrets: true]]]
}}

There are two different ways to specify this file:

- The :ssl_key_log_file connection option in your pool configuration. For example:
{Finch,
name: MyFinch,
pools: %{
default: [
conn_opts: [
ssl_key_log_file: "/writable/path/to/the/sslkey.log"
]
]
}}

- Alternatively, you can set the SSLKEYLOGFILE environment variable.
Summary
Types
Pool metrics grouped by pool identifier when querying the :default configuration.
Errors returned by Finch request functions.
The :name provided to Finch in start_link/1.
Pool metrics returned by get_pool_status/2 for a single pool.
Pool strategy to choose a pool from a list of pools.
The request body function used with {:stream, req_body_fun} in build/5.
Options used by request functions.
The reference used to identify a request sent using async_request/3.
The stream function given to stream/5.
The stream function given to stream_while/5.
Functions
Sends an HTTP request asynchronously, returning a request reference.
Builds an HTTP request to be sent with request/3, async_request/3, stream/5,
or stream_while/5.
Cancels a request sent with async_request/3.
Returns a specification to start this module under a supervisor.
Finds a pool by its configuration and returns the pool pid.
Returns the current worker count for the given pool.
Get pool metrics.
Returns true if the term is any Finch error struct (Finch.error()).
A guard that returns true if ref is a valid request reference from async_request/3.
Sends an HTTP/2 PING frame and waits for PONG.
Sends an HTTP request and returns a Finch.Response struct.
Sends an HTTP request and returns a Finch.Response struct
or raises an exception in case of failure.
Dynamically changes the number of pool workers for the given pool.
Start an instance of Finch.
Starts a pool dynamically under Finch's internal supervision tree.
Stops the pool of processes associated with the given pool identifier.
Streams an HTTP request and returns the accumulator.
Streams an HTTP request until it finishes or is cancelled.
Types
@type default_pool_metrics() :: %{required(Finch.Pool.t()) => pool_metrics()}
Pool metrics grouped by pool identifier when querying the :default configuration.
@type error() :: Finch.Error.t() | Finch.HTTPError.t() | Finch.TransportError.t()
Errors returned by Finch request functions.
@type name() :: atom()
The :name provided to Finch in start_link/1.
@type pool_identifier() :: url :: String.t() | scheme_host_port() | Finch.Pool.t()
@type pool_metrics() :: [Finch.HTTP1.PoolMetrics.t()] | [Finch.HTTP2.PoolMetrics.t()]
Pool metrics returned by get_pool_status/2 for a single pool.
@type pool_strategy() :: pool_strategy_fun() | pool_strategy_fun_with_state() | pool_strategy_module_with_state() | pool_strategy_module()
@type pool_strategy_fun_with_state() :: {([term(), ...], pool_strategy_state() -> term()), pool_strategy_state()}
@type pool_strategy_module() :: module()
@type pool_strategy_module_with_state() :: {module(), pool_strategy_state()}
@type pool_strategy_state() :: term()
Pool strategy to choose a pool from a list of pools.
- {module, state} - a module implementing Finch.Pool.Strategy (e.g. {Finch.Pool.Strategy.RoundRobin, counter})
- {&module.select/2, state} - same as above, but avoids dynamic dispatch; use for performance-critical paths
- module - a module implementing Finch.Pool.Strategy that needs no state (e.g. Finch.Pool.Strategy.Random); nil will be passed as a default state
- a 1-arity function fn entries -> chosen end, where entries is a nonempty_list(term())
@type req_body_fun(acc) :: (acc -> {:data, binary(), acc} | {:done, acc} | {:halt, acc})
The request body function used with {:stream, req_body_fun} in build/5.
@type request_opt() :: {:pool_timeout, timeout()} | {:receive_timeout, timeout()} | {:request_timeout, timeout()} | {:pool_strategy, pool_strategy()}
@type request_opts() :: [request_opt()]
Options used by request functions.
@opaque request_ref()
The reference used to identify a request sent using async_request/3.
Use the is_request_ref/1 guard when matching on async response messages in
GenServer.handle_info/2 or similar callbacks to ensure your code keeps
working if the internal structure of the reference changes.
@type scheme() :: :http | :https
@type scheme_host_port() :: {scheme(), host :: String.t(), port :: :inet.port_number()}
@type stream(acc) :: ({:status, integer()} | {:headers, Mint.Types.headers()} | {:data, binary()} | {:trailers, Mint.Types.headers()}, acc -> acc)
The stream function given to stream/5.
@type stream_while(acc) :: ({:status, integer()} | {:headers, Mint.Types.headers()} | {:data, binary()} | {:trailers, Mint.Types.headers()}, acc -> {:cont, acc} | {:halt, acc})
The stream function given to stream_while/5.
Functions
@spec async_request(Finch.Request.t(), name(), request_opts()) :: request_ref()
Sends an HTTP request asynchronously, returning a request reference.
If the request is sent using HTTP1, an extra process is spawned to
consume messages from the underlying socket. The messages are sent
to the current process as soon as they arrive, as a firehose. If
you wish to maximize request rate or have more control over how
messages are streamed, a strategy using request/3 or stream/5
should be used instead.
Receiving the response
Response information is sent to the calling process as it is received
in {ref, response} tuples.
If the calling process exits before the request has completed, the request will be canceled.
Responses include:
- {:status, status} - HTTP response status
- {:headers, headers} - HTTP response headers
- {:data, data} - section of the HTTP response body
- {:error, exception} - an error occurred during the request
- :done - request has completed successfully
On a successful request, a single :status message will be followed
by a single :headers message, after which one or more :data
messages may be sent. If trailing headers are present, a final
:headers message may be sent. A :done or :error message
indicates that the request has succeeded or failed, and no further
messages are expected.
Example
iex> req = Finch.build(:get, "https://httpbin.org/stream/5")
iex> ref = Finch.async_request(req, MyFinch)
iex> flush()
{ref, {:status, 200}}
{ref, {:headers, [...]}}
{ref, {:data, "..."}}
{ref, :done}

Connection draining
Unlike request/3 and stream/5, async requests are not automatically retried when a
pool is draining (see http2: [max_connection_age: ...]). If the caller receives
{ref, {:error, %Finch.Error{reason: :read_only}}}, it should retry by calling
async_request/3 again.
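A minimal retry sketch; the module name, the single-retry policy, and the accumulation of response parts are assumptions, not part of Finch:

```elixir
defmodule AsyncDrainRetry do
  # Sketch: wrap Finch.async_request/3 so that a :read_only error
  # (pool draining) triggers one retry on a fresh pool.
  def request(req, finch), do: do_request(req, finch, _retries_left = 1)

  defp do_request(req, finch, retries_left) do
    ref = Finch.async_request(req, finch)
    collect(ref, req, finch, retries_left, [])
  end

  defp collect(ref, req, finch, retries_left, acc) do
    receive do
      {^ref, {:error, %Finch.Error{reason: :read_only}}} when retries_left > 0 ->
        # The pool was draining; retry on a fresh pool.
        do_request(req, finch, retries_left - 1)

      {^ref, {:error, _} = error} ->
        error

      {^ref, :done} ->
        {:ok, Enum.reverse(acc)}

      {^ref, part} ->
        collect(ref, req, finch, retries_left, [part | acc])
    end
  end
end
```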
Options
Shares options with request/3.
@spec build( Finch.Request.method(), Finch.Request.url(), Finch.Request.headers(), Finch.Request.body(), Finch.Request.build_opts() ) :: Finch.Request.t()
Builds an HTTP request to be sent with request/3, async_request/3, stream/5,
or stream_while/5.
Request body can be one of:
- nil - no body is sent with the request.
- iodata - the body to send for the request.
- {:stream, enumerable} - stream request body chunks emitted by an Enumerable.
- {:stream, req_body_fun} - stream request body chunks emitted by req_body_fun. Can only be used with Finch.stream_while/5 on HTTP/1 pools. See Finch.stream_while/5 for more information.
Options
- :unix_socket - Path to a Unix domain socket to connect to instead of the URL host/port. The URL scheme still determines whether HTTP or HTTPS is used.
- :pool_tag - The tag to use when selecting which pool to use for this request. Defaults to :default.
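For instance, a request body can be streamed from a file without loading it into memory using {:stream, enumerable}; the upload URL here is illustrative:

```elixir
# Stream the request body chunk-by-chunk from a file.
body = {:stream, File.stream!("/tmp/archive.zip", [], 4096)}

request =
  Finch.build(:post, "https://example.com/upload", [{"content-type", "application/zip"}], body)

{:ok, response} = Finch.request(request, MyFinch)
```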
@spec cancel_async_request(request_ref()) :: :ok
Cancels a request sent with async_request/3.
Returns a specification to start this module under a supervisor.
See Supervisor.
@spec find_pool(name(), Finch.Pool.t()) :: {:ok, pid()} | :error
Finds a pool by its configuration and returns the pool pid.
Returns {:ok, pid} if the pool exists, :error otherwise.
This is useful for checking if a pool is available before making requests, or for advanced use cases where you need direct access to the pool process.
Example
case Finch.find_pool(MyFinch, Finch.Pool.new("https://api.internal", tag: :api)) do
{:ok, pid} -> # Pool exists
:error -> # Pool not found
end
@spec get_pool_count(name(), pool_identifier()) :: {:ok, pos_integer()} | {:error, :not_found}
Returns the current worker count for the given pool.
Returns {:ok, count} if the pool exists, {:error, :not_found} otherwise.
Examples
{:ok, count} = Finch.get_pool_count(MyFinch, "https://example.com")
@spec get_pool_status(name(), :default | pool_identifier()) :: {:ok, pool_metrics()} | {:ok, default_pool_metrics()} | {:error, :not_found}
Get pool metrics.
When given a URL or pool identifier tuple, this returns the metrics list for that specific
pool. The number of items in the metrics list depends on the configured
:count option and each entry will have a pool_index going from 1 to
:count.
When :default is provided, Finch returns the metrics for all pools started
from the :default configuration. In this case the return value is a map
keyed by each pool's {scheme, host, port} tuple with the corresponding
metrics list as the value.
The metrics struct depends on the pool scheme defined in the :protocols
option: Finch.HTTP1.PoolMetrics for :http1 and Finch.HTTP2.PoolMetrics
for :http2. See the documentation for those modules for more details.
{:error, :not_found} is returned in the following scenarios:
- There is no pool registered for the given Finch instance and pool identifier.
- The pool has start_pool_metrics?: false (the default).
- :default is provided but no pools have been started from the :default configuration (or none have metrics enabled).
Examples
iex> Finch.get_pool_status(MyFinch, "https://httpbin.org")
{:ok, [
%Finch.HTTP1.PoolMetrics{
pool_index: 1,
pool_size: 50,
available_connections: 43,
in_use_connections: 7
},
%Finch.HTTP1.PoolMetrics{
pool_index: 2,
pool_size: 50,
available_connections: 37,
in_use_connections: 13
}]
}
iex> Finch.get_pool_status(MyFinch, :default)
{:ok,
%{
%Finch.Pool{host: "httpbin.com", port: 443, scheme: :https, tag: :default} => [
%Finch.HTTP1.PoolMetrics{
pool_index: 1,
pool_size: 50,
available_connections: 43,
in_use_connections: 7
}
]
}}
Returns true if the term is any Finch error struct (Finch.error()).
A guard that returns true if ref is a valid request reference from async_request/3.
Use this guard when matching on async response messages in GenServer.handle_info/2
so your code remains valid if the internal structure of the reference changes.
Example
require Finch
def handle_info({ref, response}, state) when Finch.is_request_ref(ref) do
# handle async response from Finch.async_request/3
end
@spec ping(name(), pool_identifier()) :: {:ok, integer()} | {:error, term()}
Sends an HTTP/2 PING frame and waits for PONG.
Returns {:ok, rtt_ms} where rtt_ms is the round-trip time in milliseconds,
or {:error, reason} if the ping fails.
This is only supported for HTTP/2 pools. Returns {:error, :not_http2} for
HTTP/1 pools.
Examples
{:ok, rtt} = Finch.ping(MyFinch, "https://example.com")
IO.puts("RTT: #{rtt}ms")
@spec request(Finch.Request.t(), name(), request_opts()) :: {:ok, Finch.Response.t()} | {:error, error()}
Sends an HTTP request and returns a Finch.Response struct.
It can still raise exceptions if it was not possible to check out a connection in the given :pool_timeout.
See also stream/5.
Connection draining
If the HTTP/2 pool this request is dispatched to is currently draining (see
http2: [max_connection_age: ...]), the request is automatically retried on a fresh
pool. The retry is transparent to the caller. See async_request/3 for the async
variant, which does not retry automatically.
Options
- :pool_timeout - This timeout is applied when we check out a connection from the pool. Default value is 5_000.
- :receive_timeout - The maximum time to wait for each chunk to be received before returning an error. Default value is 15_000.
- :request_timeout - The amount of time to wait for a complete response before returning an error. This timeout only applies to HTTP/1, and its current implementation is a best-effort timeout; it does not guarantee the call will return precisely when the time has elapsed. Default value is :infinity.
- :pool_strategy - When the pool has multiple shards (count: N), selects which shard handles the request. Default is random selection. See pool_strategy/0 for details.
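Putting a few of these together, a request with tighter timeouts might look like this (the values are illustrative):

```elixir
request = Finch.build(:get, "https://hex.pm")

# Fail fast: wait at most 1s for a connection checkout and 5s per response chunk.
case Finch.request(request, MyFinch, pool_timeout: 1_000, receive_timeout: 5_000) do
  {:ok, %Finch.Response{status: status}} -> {:ok, status}
  {:error, reason} -> {:error, reason}
end
```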
@spec request!(Finch.Request.t(), name(), request_opts()) :: Finch.Response.t()
Sends an HTTP request and returns a Finch.Response struct
or raises an exception in case of failure.
See request/3 for more detailed information.
@spec set_pool_count(name(), pool_identifier(), pos_integer()) :: :ok | {:error, term()}
Dynamically changes the number of pool workers for the given pool.
Returns :ok on success, {:error, :not_found} if the pool doesn't exist.
Works with all kinds of pools, but note that :default pools must have
been materialized by at least one request before they can be resized.
Examples
:ok = Finch.set_pool_count(MyFinch, "https://example.com", 4)
Start an instance of Finch.
Options
- :name - The name of your Finch instance. Required.
- :pools - A map of pool identifiers to configuration options. See the ":pools" subsection below.
:name
The name of your Finch instance. It is used to identify the instance when making requests
and when calling other functions like Finch.start_pool/3 or Finch.get_pool_status/2.
Examples
Finch.start_link(name: MyFinch)

:pools
A map where each key identifies a pool and each value is a keyword list of pool configuration
options (see "Pool Configuration Options" below).
Default is %{default: [size: 50, count: 1]}.
Pool keys may be:
- URL string - A binary URL. Pools created from URLs use the :default tag unless you use a Finch.Pool.t/0 struct as the key instead.
- Finch.Pool.t/0 struct - Created with Finch.Pool.new/2. Use this when you need tagged pools (e.g. to run multiple pools for the same host with different configs).
- URL string with http+unix:// or https+unix:// - For Unix domain sockets (e.g. "http+unix:///tmp/socket").
- :default - Catch-all. Any request whose pool is not in the map will use this config when its pool is started.
When making a request with a :pool_tag option, that tag must exist in your pool configuration.
If it does not, the request uses the :default configuration.
Examples
# URL keys (pool uses :default tag)
Finch.start_link(
name: MyFinch,
pools: %{
"https://api.example.com" => [size: 10, count: 2]
}
)
# Tagged pools via Finch.Pool.new/2
Finch.start_link(
name: MyFinch,
pools: %{
Finch.Pool.new("https://api.example.com", tag: :bulk) => [size: 100, count: 1],
Finch.Pool.new("https://api.example.com", tag: :realtime) => [size: 10, count: 2]
}
)
# Unix socket
Finch.start_link(
name: MyFinch,
pools: %{
"http+unix:///tmp/socket" => [size: 5]
}
)
# Custom default configuration
Finch.start_link(
name: MyFinch,
pools: %{
:default => [size: 25, count: 2]
}
)

Multiple shards (count > 1) and :pool_strategy
When :count is greater than 1, Finch starts that many shards for the same pool. Each request
must pick one shard. By default Finch picks uniformly at random, which matches
Finch.Pool.Strategy.Random.
You can override selection per request with the :pool_strategy option (see pool_strategy/0
and Finch.Pool.Strategy). Built-in modules include Finch.Pool.Strategy.RoundRobin and
Finch.Pool.Strategy.Hash (stable mapping from a key to a shard, useful for affinity).
You can also pass a custom module or function.
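A custom strategy can be as small as a 1-arity function; this sketch always picks the first shard, which is purely illustrative:

```elixir
# A 1-arity strategy function receives the non-empty list of shard entries
# and must return the chosen entry. Here: always pick the first shard.
first_shard = fn [entry | _rest] -> entry end

request = Finch.build(:get, "https://example.com")
{:ok, _response} = Finch.request(request, MyFinch, pool_strategy: first_shard)
```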
Pool Configuration Options
- :protocols - The type of connections to support.
  If using :http1 only, an HTTP/1 pool without multiplexing is used. If using :http2 only, an HTTP/2 pool with multiplexing is used. If both are listed, both HTTP/1 and HTTP/2 connections are supported (via ALPN), but there is no multiplexing.
  The default value is [:http1].
- :count (pos_integer/0) - How many shards to start for this pool key.
  HTTP/1: Each shard is a NimblePool. HTTP/1 shards can re-use connections within the same shard and establish new ones only when necessary. A higher :count under moderate traffic scatters work so connections sit idle per shard, which reduces HTTP/1 connection reuse. Prefer the lowest :count that still meets latency and throughput; raise it when you see checkout queue timeouts or heavy load on one shard. Use pool_metrics/0, get_pool_status/2, and Finch telemetry to inspect connections per shard.
  HTTP/2: Each shard is a single connection process able to multiplex requests. Shards register under the same registry key, so increasing :count spreads concurrent load across more processes and can relieve pressure when a single pool process (message handling, socket operations) becomes the bottleneck. Prefer the lowest :count unless one shard is the limit; raise it when telemetry or get_pool_status/2 shows a shard consistently hot (e.g. high in_flight_requests).
  When :count > 1, :pool_strategy selects the shard per request; see "Multiple shards (count > 1) and :pool_strategy".
  The default value is 1.
- :size (pos_integer/0) - The maximum number of HTTP/1 connections per pool shard. Connections are opened lazily up to this cap; the value is an upper bound, not a reservation. When every connection in a shard is busy, further requests wait in the checkout queue until one is returned or :pool_timeout (see request/3) is exceeded.
  This applies only to HTTP/1 pools. For HTTP/2, this setting is ignored: a single connection multiplexes streams per pool process; use :count for more HTTP/2 connections in parallel.
  Combined with :count, the upper bound on concurrent HTTP/1 connections to one origin is roughly count * size. Actual open connections may be lower.
  The default value is 50.
- :conn_opts (keyword/0) - These options are passed to Mint.HTTP.connect/4 whenever a new connection is established. :mode is not configurable, as Finch must control this setting. Typically these options are used to configure proxying, HTTPS settings, or connect timeouts. The default value is [].
- :pool_max_idle_time (timeout/0) - The maximum number of milliseconds that a pool can be idle before being terminated; used only by HTTP/1 pools. This option is forwarded to NimblePool and starts an idle verification cycle that may impact performance if misused; for instance, setting a very low timeout may lead to pool restarts. For more information see NimblePool's handle_ping/2 documentation. The default value is :infinity.
- :conn_max_idle_time (timeout/0) - The maximum number of milliseconds an HTTP/1 connection is allowed to be idle before being closed during a checkout attempt. The default value is :infinity.
- :start_pool_metrics? (boolean/0) - When true, pool metrics are collected and available through get_pool_status/2. The default value is false.
- :http2 (keyword/0) - HTTP/2-specific options. Only relevant when :protocols includes :http2. The default value is [wait_for_server_settings?: false, ping_interval: :infinity, max_connection_age: :infinity, max_connection_age_jitter: 0].
  - :wait_for_server_settings? (boolean/0) - When true, the pool does not send any request until the server's SETTINGS frame has been received and applied. If a request arrives before that, it fails with a Finch.Error with reason :connection_not_ready (callers should retry). When false, behaviour is unchanged and requests may be sent before SETTINGS. The default value is false.
  - :ping_interval (timeout/0) - Interval in milliseconds between HTTP/2 PING frames sent to keep the connection alive. The timer resets on any connection activity, so PINGs are only sent after the connection has been idle for this duration. When set to :infinity, no PINGs are sent. The default value is :infinity.
  - :max_connection_age (timeout/0) - Maximum lifetime in milliseconds for an HTTP/2 connection before it is gracefully drained and replaced with a fresh one. When the timer expires, the pool unregisters from the Registry (so new requests go to a fresh connection), finishes any in-flight requests, then terminates normally; the supervisor restarts it with a new DNS lookup. Useful for Kubernetes headless-service load balancing where DNS entries rotate. The default value is :infinity (no age limit).
  - :max_connection_age_jitter (non_neg_integer/0) - Random jitter in milliseconds added to :max_connection_age. Prevents multiple pool shards from draining simultaneously (thundering herd). The actual age used is max_connection_age + :rand.uniform(max_connection_age_jitter). The default value is 0 (no jitter).
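For example, a sketch of an HTTP/2 pool that recycles its connections roughly every five minutes, with jitter so shards do not drain at once (the host is illustrative):

```elixir
children = [
  {Finch,
   name: MyFinch,
   pools: %{
     "https://grpc.internal" => [
       protocols: [:http2],
       count: 4,
       http2: [
         # Drain and replace each connection after 5 minutes (+ up to 30s jitter),
         # forcing a fresh DNS lookup - useful behind rotating DNS entries.
         max_connection_age: 300_000,
         max_connection_age_jitter: 30_000
       ]
     ]
   }}
]
```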
@spec start_pool(name(), Finch.Pool.t(), keyword()) :: :ok
Starts a pool dynamically under Finch's internal supervision tree.
Returns :ok if the pool was started or already exists.
Options
Same pool configuration options as Finch.start_link/1:
:size, :count, :protocols, :conn_opts, etc.
Example
Finch.start_pool(MyFinch, Finch.Pool.new("https://api.example.com", tag: :api), size: 10)
@spec stop_pool(name(), pool_identifier()) :: :ok | {:error, :not_found}
Stops the pool of processes associated with the given pool identifier.
This function can be invoked to manually stop the pool for the given identifier when you know it's not going to be used anymore.
Note that this function is not safe with respect to concurrent requests. Invoking it while another request to the same pool is taking place might result in the failure of that request. It is the responsibility of the client to ensure that no request to the same pool is taking place while this function is being invoked.
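For example, after finishing a batch of requests to a host you no longer need (the URL is illustrative):

```elixir
# Stop the pool once no requests to this host are in flight.
:ok = Finch.stop_pool(MyFinch, "https://one-off.example.com")
```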
@spec stream(Finch.Request.t(), name(), acc, stream(acc), request_opts()) :: {:ok, acc} | {:error, error(), acc} when acc: term()
Streams an HTTP request and returns the accumulator.
resp_fun receives a response entry and the accumulator acc, and must return
the updated accumulator.
Response entries are:
- {:status, status} - the HTTP response status
- {:headers, headers} - the HTTP response headers
- {:data, data} - the HTTP response body chunk
- {:trailers, trailers} - the HTTP response trailers
See also request/3, stream_while/5.
HTTP2 streaming and back-pressure
At the moment, streaming over HTTP/2 connections does not provide any back-pressure mechanism: the response is sent to the client as quickly as possible. Therefore, you must not use streaming over HTTP/2 for non-terminating responses, or when streaming large responses that you do not intend to keep in memory.
Connection draining
If the HTTP/2 pool this request is dispatched to is currently draining (see
http2: [max_connection_age: ...]), the request is automatically retried on a fresh
pool. The retry is transparent to the caller. See async_request/3 for the async
variant, which does not retry automatically.
Options
Shares options with request/3.
Examples
path = "/tmp/archive.zip"
file = File.open!(path, [:write, :exclusive])
url = "https://example.com/archive.zip"
request = Finch.build(:get, url)
Finch.stream(request, MyFinch, nil, fn
{:status, status}, _acc ->
IO.inspect(status)
{:headers, headers}, _acc ->
IO.inspect(headers)
{:data, data}, _acc ->
IO.binwrite(file, data)
end)
File.close(file)
@spec stream_while(Finch.Request.t(), name(), acc, stream_while(acc), request_opts()) :: {:ok, acc} | {:error, error(), acc} when acc: term()
Streams an HTTP request until it finishes or is cancelled.
Request body streaming
When the request body is set to {:stream, req_body_fun} (see build/5), req_body_fun
receives the accumulator acc and must return one of:
- {:data, chunk, acc} - emit request body chunk and continue streaming
- {:done, acc} - request body is done; acc is passed to resp_fun
- {:halt, acc} - cancel the request and close the connection
{:stream, req_body_fun} is currently only supported on HTTP/1 pools.
Response streaming
resp_fun receives a response entry and the accumulator acc, and must return one of:
- {:cont, acc} - continue streaming
- {:halt, acc} - cancel the request. On HTTP/1, this also closes the connection.
Response entries are:
- {:status, status} - the HTTP response status
- {:headers, headers} - the HTTP response headers
- {:data, data} - the HTTP response body chunk
- {:trailers, trailers} - the HTTP response trailers
HTTP2 streaming and back-pressure
At the moment, streaming over HTTP/2 connections does not provide any back-pressure mechanism: the response is sent to the client as quickly as possible. Therefore, you must not use streaming over HTTP/2 for non-terminating responses, or when streaming large responses that you do not intend to keep in memory.
Connection draining
If the HTTP/2 pool this request is dispatched to is currently draining (see
http2: [max_connection_age: ...]), the request is automatically retried on a fresh
pool. The retry is transparent to the caller. See async_request/3 for the async
variant, which does not retry automatically.
Options
Shares options with request/3.
Examples
path = "/tmp/archive.zip"
file = File.open!(path, [:write, :exclusive])
request = Finch.build(:get, "https://example.com/archive.zip")
Finch.stream_while(request, MyFinch, nil, fn
{:status, status}, acc ->
IO.inspect(status)
{:cont, acc}
{:headers, headers}, acc ->
IO.inspect(headers)
{:cont, acc}
{:data, data}, acc ->
IO.binwrite(file, data)
{:cont, acc}
end)
File.close(file)

Uploading a file using req_body_fun:
file = File.open!("/tmp/archive.zip", [:read])
req_body_fun = fn file ->
case IO.binread(file, 4096) do
:eof -> {:done, file}
data -> {:data, data, file}
end
end
request = Finch.build(:post, "https://example.com/upload", [], {:stream, req_body_fun})
resp_fun = fn
{:status, status}, acc ->
IO.inspect(status)
{:cont, acc}
{:headers, headers}, acc ->
IO.inspect(headers)
{:cont, acc}
{:data, data}, acc ->
IO.inspect(data)
{:cont, acc}
end
{:ok, file} = Finch.stream_while(request, MyFinch, file, resp_fun)
File.close(file)