CHANGELOG
v0.5.6 (2024-08-01)
- Fix compatibility with Elixir v1.13
v0.5.5 (2024-08-01)
- put_aws_sigv4: Fix detecting service
- put_aws_sigv4: Raise on missing :access_key_id/:secret_access_key/:service
- put_aws_sigv4: Fix handling ?name (no value)
- handle_http_errors: Should run before verify_checksum
- encode_body: Support %File.Stream{} in :form_multipart
- encode_body: Support %File.Stream{} from other nodes in :form_multipart
v0.5.4 (2024-07-18)
- run_finch, Req.parse_message/2: Gracefully handle process messages not meant for the asynchronous response. In that case, Req.parse_message/2 returns :unknown.
v0.5.3 (2024-07-18)
- Req.Test: Fix using shared mode
- encode_body: Add :form_multipart option
- put_aws_sigv4: Try detecting the service
- run_finch: Fix setting :finch option
v0.5.2 (2024-07-08)
- put_aws_sigv4: Fix bug when using custom headers
- put_aws_sigv4: Add :token option
- redirect: Cancel async request before redirecting
- decode_body: Support application/zstd and .zst
v0.5.1 (2024-06-24)
- retry: Default :retry_log_level to :warning
- put_path_params: Add :path_params_style option
- put_aws_sigv4: Fix path encoding
- decode_body: Improve tar detection
- run_finch: Fix defaulting to using just HTTP/1
v0.5.0 (2024-05-28)
Req v0.5.0 brings testing enhancements, error standardization, %Req.Response.Async{}, and more improvements and bug fixes.
Testing Enhancements
In previous releases, we could only create test stubs (using Req.Test.stub/2), that is, fake
HTTP servers with predefined behaviour. Let's say we're integrating with a third-party
weather service; we might create a stub for it like below:
Req.Test.stub(MyApp.Weather, fn conn ->
Req.Test.json(conn, %{"celsius" => 25.0})
end)
Anytime we hit this fake we'll get the same result. This works extremely well for simple integrations; however, it's not quite enough for more complicated ones. Imagine we're using something like AWS S3 and we test uploading some data and reading it back again. While we could do this:
Req.Test.stub(MyApp.S3, fn
conn when conn.method == "PUT" ->
# ...
conn when conn.method == "GET" ->
# ...
end)
making the test just a little bit more thorough makes it MUCH more complicated. For example:
the first GET request should return a 404, we then make a PUT, and now GET should return a 200.
We could solve this by adding some state to our test (e.g. an agent), but there is a simpler way:
set request expectations using the new Req.Test.expect/3 function:
Req.Test.expect(MyApp.S3, fn conn when conn.method == "GET" ->
Plug.Conn.send_resp(conn, 404, "not found")
end)
Req.Test.expect(MyApp.S3, fn conn when conn.method == "PUT" ->
{:ok, body, conn} = Plug.Conn.read_body(conn)
assert body == "foo"
Plug.Conn.send_resp(conn, 200, "")
end)
Req.Test.expect(MyApp.S3, fn conn when conn.method == "GET" ->
Plug.Conn.send_resp(conn, 200, "foo")
end)
The important part is that the request expectations are meant to run in order (and the test fails if they don't).
In this release we're also adding Req.Test.transport_error/2
, a way to simulate network
errors.
Here is another example using both of the new features. Let's simulate a server that is
having issues: on the first request it is not responding, and on the following two requests it
returns an HTTP 500. Only on the fourth request does it return an HTTP 200. By default, Req
automatically retries transient errors (using the retry step), so it will make multiple
requests, exercising all of our request expectations:
iex> Req.Test.expect(MyApp.S3, &Req.Test.transport_error(&1, :econnrefused))
iex> Req.Test.expect(MyApp.S3, 2, &Plug.Conn.send_resp(&1, 500, "internal server error"))
iex> Req.Test.expect(MyApp.S3, &Plug.Conn.send_resp(&1, 200, "ok"))
iex> Req.get!(plug: {Req.Test, MyApp.S3}).body
# 15:57:06.309 [error] retry: got exception, will retry in 1000ms, 3 attempts left
# 15:57:06.309 [error] ** (Req.TransportError) connection refused
# 15:57:07.310 [error] retry: got response with status 500, will retry in 2000ms, 2 attempts left
# 15:57:09.311 [error] retry: got response with status 500, will retry in 4000ms, 1 attempt left
"ok"
Finally, for parity with Mox, we add functions for setting the ownership mode:
Req.Test.set_req_test_from_context/1
Req.Test.set_req_test_to_private/1
Req.Test.set_req_test_to_shared/1
And for verifying expectations:
Req.Test.verify!/0
Req.Test.verify!/1
Req.Test.verify_on_exit!/1
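A sketch of how these are typically wired into an ExUnit case, mirroring the Mox conventions (MyApp.S3 is just the stub name used above; adapt to your own test setup):
defmodule MyApp.S3Test do
  use ExUnit.Case, async: true

  # make the ownership and verification helpers available as setup callbacks
  import Req.Test

  setup :set_req_test_from_context
  setup :verify_on_exit!

  test "returns what the stub sends" do
    Req.Test.expect(MyApp.S3, fn conn -> Plug.Conn.send_resp(conn, 200, "ok") end)
    assert Req.get!(plug: {Req.Test, MyApp.S3}).body == "ok"
  end
end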
Thanks to Andrea Leopardi for driving the testing improvements.
Standardized Errors
In previous releases, when using the default adapter, Finch, Req could return these exceptions on
network/protocol errors: Mint.TransportError, Mint.HTTPError, and Finch.Error. They have
now been standardized into Req.TransportError and Req.HTTPError for a more consistent
experience. In fact, this standardization was the prerequisite for adding
Req.Test.transport_error/2!
Two additional exception structs have been added: Req.ArchiveError and Req.DecompressError,
for zip/tar/etc. errors in decode_body and gzip/br/zstd/etc. errors in decompress_body
respectively. Additionally, decode_body now returns Jason.DecodeError instead of raising it.
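With the error-returning functions such as Req.get/2, these exceptions can be matched on directly. A minimal sketch (the URL is just a placeholder and retries are disabled to keep it short):
case Req.get("http://localhost:9999/health", retry: false) do
  {:ok, response} ->
    response.status

  {:error, %Req.TransportError{reason: :econnrefused}} ->
    # the server is down or unreachable
    :unavailable

  {:error, exception} ->
    # any other transport or HTTP error
    raise exception
end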
%Req.Response.Async{}
In previous releases we added the ability to stream response body chunks into the current process
mailbox using the into: :self option. When it is used, response.body is now set to a
Req.Response.Async struct which implements the Enumerable protocol.
Here's a quick example:
resp = Req.get!("http://httpbin.org/stream/2", into: :self)
resp.body
#=> #Req.Response.Async<...>
Enum.each(resp.body, &IO.puts/1)
# {"url": "http://httpbin.org/stream/2", ..., "id": 0}
# {"url": "http://httpbin.org/stream/2", ..., "id": 1}
Here is another example where we use Req to talk to two different servers. The first server
produces some test data, the strings "foo", "bar" and "baz". The second one is an "echo" server: it simply
responds with the request body it received. We then stream data from one server, transform it, and
stream it to the other one:
Mix.install([
{:req, "~> 0.5"},
{:bandit, "~> 1.0"}
])
{:ok, _} =
Bandit.start_link(
scheme: :http,
port: 4000,
plug: fn conn, _ ->
conn = Plug.Conn.send_chunked(conn, 200)
{:ok, conn} = Plug.Conn.chunk(conn, "foo")
{:ok, conn} = Plug.Conn.chunk(conn, "bar")
{:ok, conn} = Plug.Conn.chunk(conn, "baz")
conn
end
)
{:ok, _} =
Bandit.start_link(
scheme: :http,
port: 4001,
plug: fn conn, _ ->
{:ok, body, conn} = Plug.Conn.read_body(conn)
Plug.Conn.send_resp(conn, 200, body)
end
)
resp = Req.get!("http://localhost:4000", into: :self)
stream = resp.body |> Stream.with_index() |> Stream.map(fn {data, idx} -> "[#{idx}]#{data}" end)
Req.put!("http://localhost:4001", body: stream).body
#=> "[0]foo[1]bar[2]baz"
Req.Response.Async is an experimental feature which may change in the future.
The existing caveats to into: :self still apply, that is:
- If the request is sent using HTTP/1, an extra process is spawned to consume messages from the underlying socket.
- On both HTTP/1 and HTTP/2 the messages are sent to the current process as soon as they arrive, as a firehose with no back-pressure.
If you wish to maximize request rate or have more control over how messages are streamed, use
into: fun or into: collectable instead.
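For example, an into: fun callback can halt the stream after the first chunk, giving explicit control over how much of the response is consumed. A minimal sketch (the httpbin URL is just a placeholder):
resp =
  Req.get!("http://httpbin.org/stream/5",
    into: fn {:data, data}, {req, resp} ->
      # handle the first chunk, then stop streaming
      IO.puts(data)
      {:halt, {req, resp}}
    end
  )

resp.status
#=> 200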
Full v0.5.0 CHANGELOG
- Req: Deprecate setting :headers to values other than string/integer/DateTime. This is to potentially allow special handling of atom values in the future.
- Req: Add Req.run/2 and Req.run!/2 (see the sketch after this list).
- Req: into: :self now sets response.body to Req.Response.Async, which implements Enumerable.
- Req.Request: Deprecate setting :redact_auth. It now has no effect. Instead of allowing to opt out of redaction, we give an idea of what the secret was without revealing it fully:

  iex> Req.new(auth: {:basic, "foobar:baz"})
  %Req.Request{
    options: %{auth: {:basic, "foo*******"}},
    ...
  }

  iex> Req.new(headers: [authorization: "bearer foobarbaz"])
  %Req.Request{
    headers: %{"authorization" => ["bearer foo******"]},
    ...
  }

- Req.Request: Deprecate halt/1 in favour of Req.Request.halt/2.
- Req.Test: Add Req.Test.transport_error/2 to simulate transport errors.
- Req.Test: Add Req.Test.expect/3.
- Req.Test: Add functions for setting ownership mode: Req.Test.set_req_test_from_context/1, Req.Test.set_req_test_to_private/1, Req.Test.set_req_test_to_shared/1, and for verifying expectations: Req.Test.verify!/0, Req.Test.verify!/1, and Req.Test.verify_on_exit!/1.
- Req.Test: Add Req.Test.html/2.
- Req.Test: Add Req.Test.text/2.
- Req.Test: Drop :nimble_ownership dependency.
- Req.Test: Deprecate Req.Test.stub/1, i.e. the intended use case is to only work with plug stubs/mocks.
- decode_body: Return Jason.DecodeError on JSON errors instead of raising it.
- decode_body: Return Req.ArchiveError on tar/zip errors.
- decompress_body: Return Req.DecompressError.
- put_aws_sigv4: Drop :aws_signature dependency.
- retry: (BREAKING CHANGE) Consider %Req.TransportError{reason: :closed | :econnrefused | :timeout} as transient. Previously, any exceptions with those reason values were considered as such.
- retry: (BREAKING CHANGE) Consider %Req.HTTPError{protocol: :http2, reason: :unprocessed} as transient.
- run_finch: (BREAKING CHANGE) Return Req.HTTPError instead of Mint.HTTPError.
- run_finch: (BREAKING CHANGE) Return Req.TransportError instead of Mint.TransportError.
- run_finch: Set inet6: true if the URL looks like an IPv6 address.
- run_plug: Make public.
- run_plug: Add support for simulating network issues using Req.Test.transport_error/2.
- run_plug: Support passing 2-arity functions as plugs.
- run_plug: Automatically fetch query params.
- verify_checksum: Fix handling compressed responses.
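A quick sketch of the new Req.run/2 mentioned above: unlike Req.request/2, it returns the request struct alongside the response or exception, which is handy for inspecting the final request (the URL is just a placeholder):
{req, resp_or_exception} = Req.run(Req.new(url: "https://httpbin.org/status/200"))

case resp_or_exception do
  %Req.Response{status: status} ->
    # req is the final request after all request steps ran
    {req.url.host, status}

  exception ->
    raise exception
end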
v0.4.14 (2024-03-15)
- redirect: Return Req.TooManyRedirectsError exception. Previously we always raised a
  RuntimeError. Besides changing the exception struct, the error is now returned instead of raised:

  iex> Req.get("https://httpbin.org/redirect/4", max_redirects: 3)
  # 07:08:06.868 [debug] redirecting to /relative-redirect/3
  # 07:08:06.988 [debug] redirecting to /relative-redirect/2
  # 07:08:07.109 [debug] redirecting to /relative-redirect/1
  {:error, %Req.TooManyRedirectsError{max_redirects: 3}}

  When using bang functions like Req.get!, the exception will of course still be raised.
- Relax nimble_ownership version requirement
- Req.Test: Allow plug stub to be a module or {module, options}
- Req.Test: Document stubbing with Broadway
v0.4.13 (2024-03-07)
- run_finch: Default to connect_options: [protocols: [:http1]] due to a regression with HTTP/2 requests over HTTP/1 connections (protocols: [:http1, :http2]) with request body size exceeding 64 KiB.
v0.4.12 (2024-03-06)
- Req: Add response body streaming via into: :self, Req.parse_message/2, and Req.cancel_async_response/1.
- Req: Deprecate Req.update/2 in favour of Req.merge/2
- Req.Test: Add Req.Test.allow/3
- compressed: Default compressed: false when streaming response body
- put_base_url: Allow :base_url to be a 0-arity function or MFArgs
v0.4.11 (2024-02-19)
- Req.Test.json/2: Don't crash compilation when Plug is not available
v0.4.10 (2024-02-19)
- run_finch: Default to connect_options: [protocols: [:http1, :http2]].
- run_finch: Change version requirement to ~> 0.17, that is, all versions up to 1.0.
- put_aws_sigv4: Support streaming request body.
- auth: Always update authorization header.
- decode_body: Gracefully handle multiple content-type values.
- Req.Request.new/1: Use URI.parse for now.
v0.4.9 (2024-02-14)
- retry: Raise on invalid return from the :retry_delay function
- run_finch: Update to Finch 0.17
- run_finch: Deprecate connect_options: [protocol: ...] in favour of connect_options: [protocols: ...], which defaults to [:http1, :http2], that is, make the request using HTTP/1 but, if negotiated, switch to HTTP/2 over the HTTP/1 connection.
- New step: put_aws_sigv4 - signs the request with AWS Signature Version 4 (see the sketch below).
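A minimal sketch of the new step, assuming the credentials live in environment variables; the bucket URL is a placeholder and the option names are as I understand them from the put_aws_sigv4 documentation:
req =
  Req.new(
    base_url: "https://my-bucket.s3.amazonaws.com",
    aws_sigv4: [
      access_key_id: System.fetch_env!("AWS_ACCESS_KEY_ID"),
      secret_access_key: System.fetch_env!("AWS_SECRET_ACCESS_KEY"),
      service: :s3,
      region: "us-east-1"
    ]
  )

# the request is signed with AWS Signature Version 4 before being sent
Req.get!(req, url: "/some-object-key")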
v0.4.8 (2023-12-11)
- put_plug: Fix response streaming. Previously we were relying on unreleased Plug features (which may never get released). Now, the Plug adapter will emit the entire response body as one chunk. Thus, plug: plug, into: fn ... -> {:halt, acc} end is not yet supported as it requires Plug changes that are still being discussed. On the flip side, we should have a much more stable Plug integration regardless of this small limitation.
v0.4.7 (2023-12-11)
- put_plug: Don't crash if Plug is not installed and :plug is not used
v0.4.6 (2023-12-11)
- New step: checksum
- put_plug: Fix response streaming when plug uses send_resp or send_file
- retry: Retry on :closed
v0.4.5 (2023-10-27)
- decompress_body: Remove content-length header
- auth: Deprecate auth: {user, pass} in favour of auth: {:basic, "user:pass"}
- Req.Request: Allow steps to be {mod, fun, args}
v0.4.4 (2023-10-05)
- compressed: Check for optional dependencies brotli and ezstd only at compile-time. (Backported from v0.3.12.)
- decode_body: Check for optional dependency nimble_csv at compile-time. (Backported from v0.3.12.)
- run_finch: Add :finch_private option
v0.4.3 (2023-09-13)
- Req.new/1: Fix setting :redact_auth
v0.4.2 (2023-09-04)
- put_plug: Handle response streaming on Plug 1.15+.
- Don't warn on mixed-case header names
v0.4.1 (2023-09-01)
- Fix Req.Request Inspect regression
v0.4.0 (2023-09-01)
Req v0.4.0 changes headers to be maps, adds request & response streaming, and improves steps.
Change Headers to be Maps
Previously headers were lists of name/value tuples, e.g.:
[{"content-type", "text/html"}]
This is a standard across the ecosystem (with the minor difference that some Erlang libraries use charlists instead of binaries).
There are some problems with this particular choice though:
- We cannot use headers[name]
- We cannot use pattern matching
In short, this representation isn't very ergonomic to use.
Now headers are maps of string names and lists of values, e.g.:
%{"content-type" => ["text/html"]}
This allows headers[name] usage:
response.headers["content-type"]
#=> ["text/html"]
and pattern matching:
case Req.request!(req) do
%{headers: %{"content-type" => ["application/json" <> _]}} ->
# handle JSON response
end
This is a major breaking change. If you cannot easily update your app or your dependencies, do:
# config/config.exs
config :req, legacy_headers_as_lists: true
This legacy fallback will be removed on Req 1.0.
There are two other changes to headers in this release.
Header names are now case-insensitive in functions like Req.Response.get_header/2.
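A quick sketch (Req.Response.json/1 is used here only to build a response with a known header):
resp = Req.Response.json(%{hello: 42})

# the given name is downcased before the lookup, so any casing works
Req.Response.get_header(resp, "Content-Type")
#=> ["application/json"]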
Trailer headers, or more precisely trailer fields, or simply trailers, are now stored
in a separate trailers field on the %Req.Response{} struct, as long as you use Finch 0.17+.
Add Request Body Streaming
Req v0.4 adds official support for request body streaming by setting the request body to an
enumerable. Here's an example:
iex> stream = Stream.duplicate("foo", 3)
iex> Req.post!("https://httpbin.org/post", body: stream).body["data"]
"foofoofoo"
The enumerable is passed through request steps, which may change it. For example,
the compress_body step gzips the request body on the fly.
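A minimal sketch, assuming the step is enabled with the compress_body: true option (httpbin.org is just a placeholder endpoint):
stream = Stream.duplicate("foo", 3)

# the chunks are gzipped as they are emitted and a
# content-encoding: gzip request header is added
Req.post!("https://httpbin.org/post", body: stream, compress_body: true)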
Add Response Body Streaming
Req v0.4 also adds response body streaming, via the :into option.
Here's an example where we download the first 20 kB of an Elixir release zip (by making a range
request, via the put_range step). We stream the response body into a function
and can handle each body chunk. The function receives {:data, data}, {req, resp} and returns
a {:cont | :halt, {req, resp}} tuple.
resp =
Req.get!(
url: "https://github.com/elixir-lang/elixir/releases/download/v1.15.4/elixir-otp-26.zip",
range: 0..20_000,
into: fn {:data, data}, {req, resp} ->
IO.inspect(byte_size(data), label: :chunk)
{:cont, {req, resp}}
end
)
# output: 17:07:38.131 [debug] redirecting to https://objects.githubusercontent.com/github-production-release-asset-2e6(...)
# output: chunk: 16384
# output: chunk: 3617
resp.status #=> 206
resp.headers["content-range"] #=> ["bytes 0-20000/6801977"]
resp.body #=> ""
Notice we only stream the response body; Req automatically handles the HTTP response status and
headers. Once the stream is done, Req passes the response through response steps, which allows
following redirects, retrying on errors, etc. The response body
is set to the empty string "", which is then ignored by decompress_body,
decode_body, and similar steps. If you need
to decompress or decode incoming chunks, do that in your custom into: fun
function.
As the name :into implies, we can also stream the response body into any Collectable.
Here's a snippet similar to the one above, where we stream to a file:
resp =
Req.get!(
url: "https://github.com/elixir-lang/elixir/releases/download/v1.15.4/elixir-otp-26.zip",
range: 0..20_000,
into: File.stream!("elixir-otp-26.zip.1")
)
# output: 17:07:38.131 [debug] redirecting to (...)
resp.status #=> 206
resp.headers["content-range"] #=> ["bytes 0-20000/6801977"]
resp.body #=> %File.Stream{}
Full CHANGELOG
- Change request.headers and response.headers to be maps.
- Ensure request.headers and response.headers are downcased.
  Per RFC 9110 (HTTP Semantics), HTTP headers should be case-insensitive. However, per RFC 9113 (HTTP/2), headers must be sent downcased.
  Req headers are now stored internally downcased and all accessor functions like Req.Response.get_header/2 downcase the given header name.
- Add trailers field to the Req.Response struct. The trailers field is only filled in on Finch 0.17+.
- Make request.registered_options internal representation private.
- Make request.options internal representation private.
  Currently the request.options field is a map but it may change in the future. One possible future change is using keyword lists internally, which would allow, for example, Req.new(params: [a: 1]) |> Req.merge(params: [b: 2]) to keep duplicate :params in request.options, which would then allow deciding the duplicate-key semantics on a per-step basis. And so, for example, put_params would merge params but most steps would simply use the first value.
  To have some room for manoeuvre in the future we should stop pattern matching on request.options. Calling request.options[key], put_in(request.options[key], value), and update_in(request.options[key], fun) is allowed.
- Fix typespecs for some functions
- Deprecate the output step in favour of into: File.stream!(path).
- Rename the follow_redirects step to redirect
- redirect: Rename :follow_redirects option to :redirect.
- redirect: Rename :location_trusted option to :redirect_trusted.
- redirect: Change HTTP request method to GET only on POST requests that result in 301..303. Previously we were changing the method to GET for all 3xx except 307 and 308.
- decompress_body: Remove support for deflate compression (which was broken)
- decompress_body: Don't crash on unknown codec
- decompress_body: Fix handling HEAD requests
- decompress_body: Re-calculate content-length header after decompression
- decompress_body: Remove content-encoding header after decompression
- decode_body: Do not decode responses with a content-encoding header
- run_finch: Add :inet6 option
- retry: Support retry: :safe_transient, which retries on HTTP 408/429/500/502/503/504 or exceptions with the reason field set to :timeout/:econnrefused. :safe_transient is the new default retry mode. (Previously we retried on 408/429/5xx and any exception.)
- retry: Support retry: :transient, which is the same as :safe_transient except it retries on all HTTP methods
- retry: Use the retry-after header value on HTTP 503 Service Unavailable. Previously only HTTP 429 Too Many Requests was using this header value.
- retry: Support retry: &fun/2. The function receives request, response_or_exception and returns either true (retry with the default delay), {:delay, milliseconds} (retry with the given delay), or false/nil (don't retry). See the sketch after this list.
- retry: Deprecate retry: :safe in favour of retry: :safe_transient
- retry: Deprecate retry: :never in favour of retry: false
- Req.request/2: Improve error message on invalid arguments
- Req.merge/2: Do not duplicate headers
- Req.merge/2: Merge :params
- Req.Request: Fix displaying redacted basic authentication
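A minimal sketch of the custom retry function described above (the status code and delay are just an illustration):
retry_fun = fn _request, response_or_exception ->
  case response_or_exception do
    # retry HTTP 500 responses after a fixed one-second delay
    %Req.Response{status: 500} -> {:delay, 1000}
    # retry any exception (e.g. a transport error) with the default delay
    %{__exception__: true} -> true
    # anything else: don't retry
    _ -> false
  end
end

Req.get!("https://httpbin.org/status/200", retry: retry_fun)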
v0.3.12 (2023-08-05)
- compressed: Check for optional dependencies brotli and ezstd only at compile-time.
- decode_body: Check for optional dependency nimble_csv at compile-time.
v0.3.11 (2023-07-24)
- Support Req.get(options), Req.post(options), etc
- Add Req.Request.new/1
- retry: Fix returning correct private.req_retry_count
v0.3.10 (2023-06-20)
- decompress_body: No-op on non-binary response body
- decompress_body: Support multiple content-encoding headers
- decode_body: Remove :extract option
- Remove deprecated Req.post!(url, body) and similar functions
v0.3.9 (2023-06-08)
- put_path_params: URI-encode path params
v0.3.8 (2023-05-22)
- Add :redact_auth option to redact auth credentials, defaults to true.
- Soft-deprecate Req.Request.run and Req.Request.run! in favour of Req.Request.run_request/1.
v0.3.7 (2023-05-18)
- Deprecate setting headers to %NaiveDateTime{}, always use %DateTime{}.
- decode_body: Add :decode_json option
- follow_redirects: Add :redirect_log_level
- follow_redirects: Preserve HTTP method on 307/308 redirects
- run_finch: Allow :finch_request to perform the underlying request. This deprecates passing a 1-arity function f(finch_request) in favour of a 4-arity f(request, finch_request, finch_name, finch_options).
v0.3.6 (2023-03-06)
- run_finch: Fix setting :hostname option
- decode_body: Add :extract option to automatically extract archives (zip, tar, etc)
v0.3.5 (2023-02-01)
- New step: put_path_params
- auth: Accept string
v0.3.4 (2023-01-03)
- retry: Add :retry_log_level option
v0.3.3 (2022-12-08)
- follow_redirects: Inherit scheme from previous location
- run_finch: Fix setting connect timeout
- run_finch: Add :finch_request option
v0.3.2 (2022-11-14)
- decode_body: Decode JSON when the response has a JSON-API MIME type
- put_params: Fix bug where params were duplicated when retrying a request
- retry: Remove retry: :always option
- retry: Soft-deprecate retry: :never in favour of retry: false
- run_finch: Add :transport_opts, :proxy_headers, :proxy, and :client_settings options
- Req.Response.json/2: Do not override content-type
v0.3.1 (2022-09-09)
- encode_body: Set Accept header in JSON requests
- put_base_url: Fix merging with leading and/or trailing slashes
- Fix merging :adapter option
- Add get/2, post/2, put/2, patch/2, delete/2 and head/2
v0.3.0 (2022-06-21)
Req v0.3.0 brings redesigned API, new steps, and improvements to existing steps.
New API
The new API allows building a request struct with all the built-in steps. It can then be piped
to functions like Req.get!/2:
iex> req = Req.new(base_url: "https://api.github.com")
iex> req |> Req.get!(url: "/repos/sneako/finch") |> then(& &1.body["description"])
"Elixir HTTP client, focused on performance"
iex> req |> Req.get(url: "/repos/elixir-mint/mint") |> then(& &1.body["description"])
"Functional HTTP client for Elixir with support for HTTP/1 and HTTP/2."
Setting the body and encoding it to form/JSON is now done through the :body/:form/:json options:
iex> Req.post!("https://httpbin.org/anything", body: "hello!").body["data"]
"hello!"
iex> req = Req.new(url: "https://httpbin.org/anything")
iex> Req.post!(req, form: [x: 1]).body["form"]
%{"x" => "1"}
iex> Req.post!(req, json: %{x: 2}).body["form"]
%{"x" => 2}
Improved Error Handling
Req now validates option names, ensuring users didn't accidentally mistype them. If they did, it will try to give a helpful error message. Here are some examples:
Req.request!(urll: "https://httpbin.org")
** (ArgumentError) unknown option :urll. Did you mean :url?
Req.new(bas_url: "https://httpbin.org")
** (ArgumentError) unknown option :bas_url. Did you mean :base_url?
Req also has a new option to handle HTTP errors (4xx/5xx). By default it will continue to return the error responses:
Req.get!("https://httpbin.org/status/404")
#=> %Req.Response{status: 404, ...}
but users can now pass http_errors: :raise
to raise an exception instead:
Req.get!("https://httpbin.org/status/404", http_errors: :raise)
** (RuntimeError) The requested URL returned error: 404
Response body: ""
This is especially useful in one-off scripts where we only really care about the "happy path", but would still like a good error message when something unexpected happens.
Plugins
From the very beginning, Req could be extended with custom steps. To make using such custom steps by others even easier, they can be packaged up into plugins.
Here are some examples: req_easyhtml, req_s3, req_hex, and req_github_oauth.
And here's how they can be used:
Mix.install([
{:req, "~> 0.3.0"},
{:req_easyhtml, github: "wojtekmach/req_easyhtml"},
{:req_s3, github: "wojtekmach/req_s3"},
{:req_hex, github: "wojtekmach/req_hex"},
{:req_github_oauth, github: "wojtekmach/req_github_oauth"}
])
req =
(Req.new(http_errors: :raise)
|> ReqEasyHTML.attach()
|> ReqS3.attach()
|> ReqHex.attach()
|> ReqGitHubOAuth.attach())
Req.get!(req, url: "https://elixir-lang.org").body[".entry-summary h5"]
#=>
# #EasyHTML[<h5>
# Elixir is a dynamic, functional language for building scalable and maintainable applications.
# </h5>]
Req.get!(req, url: "s3://ossci-datasets").body
#=>
# [
# "mnist/",
# "mnist/t10k-images-idx3-ubyte.gz",
# "mnist/t10k-labels-idx1-ubyte.gz",
# "mnist/train-images-idx3-ubyte.gz",
# "mnist/train-labels-idx1-ubyte.gz"
# ]
Req.get!(req, url: "https://repo.hex.pm/tarballs/req-0.1.0.tar").body["metadata.config"]["links"]
#=> %{"GitHub" => "https://github.com/wojtekmach/req"}
Req.get!(req, url: "https://api.github.com/user").body["login"]
# Outputs:
# paste this user code:
#
# 6C44-30A8
#
# at:
#
# https://github.com/login/device
#
# open browser window? [Yn]
# 15:22:28.350 [info] response: authorization_pending
# 15:22:33.519 [info] response: authorization_pending
# 15:22:38.678 [info] response: authorization_pending
#=> "wojtekmach"
Req.get!(req, url: "https://api.github.com/user").body["login"]
#=> "wojtekmach"
Notice all plugins can be attached to the same request struct which makes it really easy to explore different endpoints.
See "Writing Plugins" section in Req.Request
module documentation
for more information.
Plug Integration
Req can now be used to easily test plugs using the :plug option:
defmodule Echo do
def call(conn, _) do
"/" <> path = conn.request_path
Plug.Conn.send_resp(conn, 200, path)
end
end
test "echo" do
assert Req.get!("http:///hello", plug: Echo).body == "hello"
end
You can define plugs as functions too:
test "echo" do
echo = fn conn ->
"/" <> path = conn.request_path
Plug.Conn.send_resp(conn, 200, path)
end
assert Req.get!("http:///hello", plug: echo).body == "hello"
end
which is particularly useful to create HTTP service mocks with tools like Bypass.
Request Adapters
While Req has always used Finch as the underlying HTTP client, it was designed from day one to
easily swap it out. This is now even easier with the :adapter option.
Here is a mock adapter that always returns a successful response:
adapter = fn request ->
response = %Req.Response{status: 200, body: "it works!"}
{request, response}
end
Req.request!(url: "http://example", adapter: adapter).body
#=> "it works!"
Here is another one that uses the Req.Response.json/2 function to conveniently
return a JSON response:
adapter = fn request ->
response = Req.Response.json(%{hello: 42})
{request, response}
end
resp = Req.request!(url: "http://example", adapter: adapter)
resp.headers
#=> [{"content-type", "application/json"}]
resp.body
#=> %{"hello" => 42}
And here is a naive Hackney-based adapter and how we can use it:
hackney = fn request ->
case :hackney.request(
request.method,
URI.to_string(request.url),
request.headers,
request.body,
[:with_body]
) do
{:ok, status, headers, body} ->
headers = for {name, value} <- headers, do: {String.downcase(name), value}
response = %Req.Response{status: status, headers: headers, body: body}
{request, response}
{:error, reason} ->
{request, RuntimeError.exception(inspect(reason))}
end
end
Req.get!("https://api.github.com/repos/elixir-lang/elixir", adapter: hackney).body["description"]
#=> "Elixir is a dynamic, functional language designed for building scalable and maintainable applications"
See "Adapter" section in Req.Request
module documentation for more information.
Major changes
- Add high-level functional API: Req.new(...) |> Req.request(...), Req.new(...) |> Req.get!(...), etc.
- Add Req.Request.options field that steps can read from. Also, make all steps be arity 1.
- When using the "high-level" API, we now run all steps by default. (The steps, by looking at request.options, can decide to be no-ops.)
- Move low-level API to Req.Request
- Move built-in steps to Req.Steps
- Add step names
- Add Req.head!/2
- Add Req.patch!/2
- Add Req.Request.adapter field
- Rename put_if_modified_since step to cache
- Rename decompress step to decompress_body
- Remove put_default_steps step
- Remove run_steps step
- Remove put_default_headers step
- Remove encode_headers step. The headers are now encoded in Req.new/1 and Req.request/2
- Remove Req.Request.unix_socket field. Add an option on the run_finch step with the same name instead.
- Require Elixir 1.12
Step changes
- New step: put_plug
- New step: put_user_agent (replaces part of removed put_default_headers)
- New step: compressed (replaces part of removed put_default_headers)
- New step: compress_body
- New step: output
- New step: handle_http_errors
- put_base_url: Ignore base URL if the given URL contains a scheme
- run_finch: Add :connect_options which dynamically starts (or re-uses an already started) Finch pool with the given connection options.
- run_finch: Replace :finch_options with :receive_timeout and :pool_timeout options
- encode_body: Add :form and :json options (previously used as {:form, data} and {:json, data})
- cache: Include request method in cache key
- decompress_body, compressed: Support Brotli
- decompress_body, compressed: Support Zstandard
- decode_body: Support decode_body: false option to disable automatic body decoding
- follow_redirects: Change method to GET on 301..303 redirects
- follow_redirects: Don't send auth headers on redirect to a different scheme/host/port unless location_trusted: true is set
- retry: The Retry-After response header on HTTP 429 responses is now respected
- retry: The :retry option can now be set to :safe (default) to only retry GET/HEAD requests on HTTP 408/429/5xx responses or exceptions, :always to always retry, :never to never retry, or fun - a 1-arity function that accepts either a Req.Response or an exception struct and returns a boolean whether to retry
- retry: The :retry_delay option now accepts a function that takes the retry count (starting at 0) and returns the delay. Defaults to a simple exponential backoff: 1s, 2s, 4s, 8s, ...
Deprecations
- Deprecate calling Req.post!(url, body) in favour of Req.post!(url, body: body). Also, deprecate Req.post!(url, {:form, data}) in favour of Req.post!(url, form: data) and Req.post!(url, {:json, data}) in favour of Req.post!(url, json: data). Same for Req.put!/2.
- Deprecate setting retry: [delay: delay, max_retries: max_retries] in favour of retry_delay: delay, max_retries: max_retries.
- Deprecate setting cache: [dir: dir] in favour of cache_dir: dir.
- Deprecate Req.build/3 in favour of manually building the struct.
v0.2.2 (2022-04-04)
- Relax Finch version requirement
v0.2.1 (2021-11-24)
- Add :private field to Response
- Update Finch to 0.9.1
v0.2.0 (2021-11-08)
- Rename normalize_headers to encode_headers
- Rename prepend_default_steps to put_default_steps
- Rename encode and decode to encode_body and decode_body
- Rename netrc to load_netrc
- Rename finch step to run_finch
- Rename if_modified_since to put_if_modified_since
- Rename range to put_range
- Rename params to put_params
- Rename request.uri to request.url
- Change response/error step contract from f(req, resp_err) to f({req, resp_err})
- Support mime 2.x
- Add Req.Response struct
- Add put!/3 and delete!/2
- Add run_steps/2
- Initial support for UNIX domain sockets
- Accept {module, args} and module as steps
- Ensure get_private and put_private have atom keys
- put_default_steps: Use MFArgs instead of captures for the default steps
- put_if_modified_since: Fix generating internet time
- encode_headers: Encode header values
- retry: Rename :max_attempts to :max_retries
v0.1.1 (2021-07-16)
- Fix append_request_steps/2 and prepend_request_steps/2 (they did the opposite)
- Add finch/1
v0.1.0 (2021-07-15)
- Initial release