ExAws.S3 (ExAws.S3 v2.5.5)

Service module for ExAws (https://github.com/ex-aws/ex_aws).

Installation

The package can be installed by adding :ex_aws_s3 to your list of dependencies in mix.exs along with :ex_aws, your preferred JSON codec / HTTP client, and optionally :sweet_xml to support operations like list_objects that require XML parsing.

def deps do
  [
    {:ex_aws, "~> 2.0"},
    {:ex_aws_s3, "~> 2.0"},
    {:poison, "~> 3.0"},
    {:hackney, "~> 1.9"},
    {:sweet_xml, "~> 0.6.6"} # optional dependency
  ]
end

Operations on AWS S3

Basic Operations

The vast majority of operations here represent a single operation on S3.

Examples

S3.list_objects("my-bucket") |> ExAws.request! #=> %{body: [list, of, objects]}
S3.list_objects("my-bucket") |> ExAws.stream! |> Enum.to_list #=> [list, of, objects]

S3.put_object("my-bucket", "path/to/bucket", contents) |> ExAws.request!

Higher Level Operations

There are also higher-level operations that make it easier to upload and download very large files.

Multipart uploads

"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request #=> {:ok, :done}

See ExAws.S3.upload/4 for options.

Download large file to disk

S3.download_file("my-bucket", "path/on/s3", "path/to/dest/file")
|> ExAws.request #=> {:ok, :done}

More high level functionality

Task.async_stream makes some high-level flows simple enough that they don't need explicit ExAws support.

For example, here is how to concurrently upload many files.

upload_file = fn {src_path, dest_path} ->
  S3.put_object("my_bucket", dest_path, File.read!(src_path))
  |> ExAws.request!
end

paths = %{"path/to/src0" => "path/to/dest0", "path/to/src1" => "path/to/dest1"}

paths
|> Task.async_stream(upload_file, max_concurrency: 10)
|> Stream.run

Bucket as host functionality

Examples

opts = [virtual_host: true, bucket_as_host: true]

ExAws.Config.new(:s3)
|> S3.presigned_url(:get, "bucket.custom-domain.com", "foo.txt", opts)

{:ok, "https://bucket.custom-domain.com/foo.txt"}

Configuration

The scheme, host, and port can be configured to hit alternate endpoints.

For example, this is how to use a local MinIO instance:

# config.exs
config :ex_aws, :s3,
  scheme: "http://",
  host: "localhost",
  port: 9000

An alternate content_hash_algorithm can be specified as well. The default is :md5. It may be necessary to change this when operating in a FIPS-compliant environment where MD5 isn't available, for instance. At this time, only :sha256, :sha, and :md5 are supported by both Erlang and S3.

# config.exs
config :ex_aws_s3, :content_hash_algorithm, :sha256

Summary

Functions

Delete a bucket

Delete a bucket cors

Delete a bucket lifecycle

Delete a bucket policy

Delete a bucket replication

Delete a bucket tagging

Delete a bucket website

Delete multiple objects within a bucket

Delete an object within a bucket

Remove the entire tag set from the specified object

Download an S3 object to a file.

Get bucket acl

Get bucket cors

Get bucket lifecycle

Get bucket location

Get bucket logging

Get bucket notification

Get bucket policy

Get bucket replication

Get bucket request payment configuration

Get bucket tagging

Get bucket versioning

Get bucket website

Get an object from a bucket

Get an object's access control policy

Get a torrent for a bucket

Determine if a bucket exists

Determine if an object exists

List multipart uploads for a bucket

List objects in bucket

List objects in bucket

List the parts of a multipart upload

Restore an object to a particular version

Generate a pre-signed post for an object.

Generate a pre-signed URL for an object. This is a local operation and does not check whether the bucket or object exists.

Creates a bucket in the specified region

Update or create a bucket access control policy

Update or create a bucket CORS policy

Update or create a bucket lifecycle configuration

Update or create a bucket logging configuration

Update or create a bucket notification configuration

Update or create a bucket policy configuration

Update or create a bucket replication configuration

Update or create a bucket requestPayment configuration

Update or create a bucket tagging configuration

Update or create a bucket versioning configuration

Update or create a bucket website configuration

Create an object within a bucket

Create or update an object's access control policy

Add a set of tags to an existing object

Types

@type acl_opt() :: {:acl, canned_acl()} | grant()
@type acl_opts() :: [acl_opt()]
@type amz_meta_opts() :: [{atom(), binary()} | {binary(), binary()}, ...]
@type canned_acl() ::
  :private
  | :public_read
  | :public_read_write
  | :authenticated_read
  | :bucket_owner_read
  | :bucket_owner_full_control
@type customer_encryption_opts() :: [
  customer_algorithm: binary(),
  customer_key: binary(),
  customer_key_md5: binary()
]
@type delete_object_opt() ::
  {:x_amz_mfa, binary()}
  | {:x_amz_request_payer, binary()}
  | {:x_amz_bypass_governance_retention, binary()}
  | {:x_amz_expected_bucket_owner, binary()}
  | {:version_id, binary()}
@type delete_object_opts() :: [delete_object_opt()]
@type download_file_opts() :: [
  max_concurrency: pos_integer(),
  chunk_size: pos_integer(),
  timeout: pos_integer()
]
@type encryption_opts() ::
  binary() | [{:aws_kms_key_id, binary()}] | customer_encryption_opts()
@type expires_in_seconds() :: non_neg_integer()
@type get_object_opts() :: [
  {:response, get_object_response_opts()}
  | {:version_id, binary()}
  | head_object_opt()
]
@type get_object_response_opts() :: [
  content_language: binary(),
  expires: binary(),
  cache_control: binary(),
  content_disposition: binary(),
  content_encoding: binary()
]
@type grant() ::
  {:grant_read, grantee()}
  | {:grant_read_acp, grantee()}
  | {:grant_write_acp, grantee()}
  | {:grant_full_control, grantee()}
@type grantee() :: [email: binary(), id: binary(), uri: binary()]
@type hash_algorithm() :: :sha | :sha256 | :md5

The hashing algorithms that both S3 and Erlang support.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
https://www.erlang.org/doc/man/crypto.html#type-hash_algorithm

@type head_object_opt() ::
  {:encryption, customer_encryption_opts()}
  | {:range, binary()}
  | {:version_id, binary()}
  | {:if_modified_since, binary()}
  | {:if_unmodified_since, binary()}
  | {:if_match, binary()}
  | {:if_none_match, binary()}
@type head_object_opts() :: [head_object_opt()]
@type initiate_multipart_upload_opt() ::
  {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_type, binary()}
  | {:expires, binary()}
  | {:website_redirect_location, binary()}
  | {:encryption, encryption_opts()}
  | {:meta, amz_meta_opts()}
  | acl_opt()
  | storage_class_opt()
@type initiate_multipart_upload_opts() :: [initiate_multipart_upload_opt()]
@type list_objects_opts() :: [
  delimiter: binary(),
  marker: binary(),
  prefix: binary(),
  encoding_type: binary(),
  max_keys: 0..1000,
  stream_prefixes: boolean()
]
@type list_objects_v2_opts() :: [
  delimiter: binary(),
  prefix: binary(),
  encoding_type: binary(),
  max_keys: 0..1000,
  stream_prefixes: boolean(),
  continuation_token: binary(),
  fetch_owner: boolean(),
  start_after: binary()
]
@type presigned_post_opts() :: [
  expires_in: expires_in_seconds(),
  acl: binary() | {:starts_with, binary()},
  content_length_range: [integer()],
  key: binary() | {:starts_with, binary()},
  custom_conditions: [any()],
  virtual_host: boolean(),
  s3_accelerate: boolean(),
  bucket_as_host: boolean()
]
@type presigned_post_result() :: %{
  url: binary(),
  fields: %{required(binary()) => binary()}
}
@type presigned_url_opts() :: [
  expires_in: expires_in_seconds(),
  virtual_host: boolean(),
  s3_accelerate: boolean(),
  query_params: [{binary(), binary()}],
  headers: [{binary(), binary()}],
  bucket_as_host: boolean(),
  start_datetime: Calendar.naive_datetime() | :calendar.datetime()
]
@type put_object_copy_opts() :: [
  {:metadata_directive, :COPY | :REPLACE}
  | {:copy_source_if_modified_since, binary()}
  | {:copy_source_if_unmodified_since, binary()}
  | {:copy_source_if_match, binary()}
  | {:copy_source_if_none_match, binary()}
  | {:website_redirect_location, binary()}
  | {:destination_encryption, encryption_opts()}
  | {:source_encryption, customer_encryption_opts()}
  | {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_length, binary()}
  | {:content_type, binary()}
  | {:expect, binary()}
  | {:expires, binary()}
  | {:website_redirect_location, binary()}
  | {:meta, amz_meta_opts()}
  | acl_opt()
  | storage_class_opt()
]
@type put_object_opts() :: [
  {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_length, binary()}
  | {:content_type, binary()}
  | {:expect, binary()}
  | {:expires, binary()}
  | {:website_redirect_location, binary()}
  | {:encryption, encryption_opts()}
  | {:meta, amz_meta_opts()}
  | acl_opt()
  | storage_class_opt()
]
@type storage_class() ::
  :standard
  | :reduced_redundancy
  | :standard_ia
  | :onezone_ia
  | :intelligent_tiering
  | :glacier
  | :deep_archive
  | :outposts
  | :glacier_ir
  | :snow
@type storage_class_opt() :: {:storage_class, storage_class()}
@type upload_opt() ::
  {:max_concurrency, pos_integer()}
  | {:timeout, pos_integer()}
  | {:refetch_auth_on_request, boolean()}
  | initiate_multipart_upload_opt()
@type upload_opts() :: [upload_opt()]
@type upload_part_copy_opts() :: [
  copy_source_if_modified_since: binary(),
  copy_source_if_unmodified_since: binary(),
  copy_source_if_match: binary(),
  copy_source_if_none_match: binary(),
  destination_encryption: encryption_opts(),
  source_encryption: customer_encryption_opts()
]

Functions

abort_multipart_upload(bucket, object, upload_id)
@spec abort_multipart_upload(
  bucket :: binary(),
  object :: binary(),
  upload_id :: binary()
) ::
  ExAws.Operation.S3.t()

Abort a multipart upload

calculate_content_header(content)
@spec calculate_content_header(iodata()) :: map()

complete_multipart_upload(bucket, object, upload_id, parts)
@spec complete_multipart_upload(
  bucket :: binary(),
  object :: binary(),
  upload_id :: binary(),
  parts :: [{binary() | pos_integer(), binary()}, ...]
) :: ExAws.Operation.S3.t()

Complete a multipart upload

delete_all_objects(bucket, objects, opts \\ [])
@spec delete_all_objects(
  bucket :: binary(),
  objects :: [binary() | {binary(), binary()}, ...] | Enumerable.t(),
  opts :: [{:quiet, true}]
) :: ExAws.Operation.S3DeleteAllObjects.t()

Delete all listed objects.

When performed, this function continues making delete_multiple_objects requests, deleting 1000 objects at a time, until all of the given objects are deleted.

Can be streamed.

Example

stream =
  ExAws.S3.list_objects(bucket(), prefix: "some/prefix")
  |> ExAws.stream!()
  |> Stream.map(& &1.key)

ExAws.S3.delete_all_objects(bucket(), stream) |> ExAws.request()
@spec delete_bucket(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket

delete_bucket_cors(bucket)
@spec delete_bucket_cors(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket cors

delete_bucket_lifecycle(bucket)
@spec delete_bucket_lifecycle(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket lifecycle

delete_bucket_policy(bucket)
@spec delete_bucket_policy(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket policy

delete_bucket_replication(bucket)
@spec delete_bucket_replication(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket replication

delete_bucket_tagging(bucket)
@spec delete_bucket_tagging(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket tagging

delete_bucket_website(bucket)
@spec delete_bucket_website(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket website

delete_multiple_objects(bucket, objects, opts \\ [])
@spec delete_multiple_objects(
  bucket :: binary(),
  objects :: [binary() | {binary(), binary()}, ...],
  opts :: [{:quiet, true}]
) :: ExAws.Operation.S3.t()

Delete multiple objects within a bucket

Limited to 1000 objects.
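
For example (bucket and keys are illustrative), objects may be plain keys or, per the spec above, {key, version_id} tuples, and quiet: true suppresses per-object results in the response:

keys = ["images/a.png", "images/b.png"]

ExAws.S3.delete_multiple_objects("my-bucket", keys) |> ExAws.request!()

# Delete a specific version of an object ("example-version-id" is a placeholder).
ExAws.S3.delete_multiple_objects("my-bucket", [{"images/a.png", "example-version-id"}], quiet: true)
|> ExAws.request!()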

delete_object(bucket, object, opts \\ [])
@spec delete_object(
  bucket :: binary(),
  object :: binary(),
  opts :: delete_object_opts()
) ::
  ExAws.Operation.S3.t()

Delete an object within a bucket

delete_object_tagging(bucket, object, opts \\ [])
@spec delete_object_tagging(
  bucket :: binary(),
  object :: binary(),
  opts :: Keyword.t()
) ::
  ExAws.Operation.S3.t()

Remove the entire tag set from the specified object

download_file(bucket, path, dest, opts \\ [])
@spec download_file(
  bucket :: binary(),
  path :: binary(),
  dest :: :memory | binary(),
  opts :: download_file_opts()
) :: ExAws.S3.Download.t()

Download an S3 object to a file.

This operation downloads multiple parts of an S3 object concurrently, allowing you to maximize throughput.

Defaults to a concurrency of 8, chunk size of 1MB, and a timeout of 1 minute.
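
For example, the defaults can be overridden per call (the values below are illustrative):

S3.download_file("my-bucket", "path/on/s3/large.bin", "/tmp/large.bin",
  max_concurrency: 16,
  chunk_size: 4 * 1024 * 1024,
  timeout: 120_000
)
|> ExAws.request!()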

Streaming to memory

To use ExAws.stream!/2, the third parameter (dest) must be set to :memory. For example:

ExAws.S3.download_file("example-bucket", "path/to/file.txt", :memory)
|> ExAws.stream!()

Note that this won't start fetching anything immediately since it returns an Elixir Stream.

Streaming by line

Streaming by line can be done with Stream.chunk_while/4. Here is an example:

# Returns a stream which grabs chunks of data from S3 as specified in `opts`
# but processes the stream line by line. For example, with the default chunk
# size of 1MB, each request to S3 downloads 1MB at a time, yet each element
# of the resulting stream is a single line.
def generate_stream(bucket, file, opts \\ []) do
  bucket
  |> ExAws.S3.download_file(file, :memory, opts)
  |> ExAws.stream!()
  # Uncomment if you need to gunzip (and add dependency :stream_gzip)
  # |> StreamGzip.gunzip()
  |> Stream.chunk_while("", &chunk_fun/2, &to_line_stream_after_fun/1)
  |> Stream.concat()
end

def chunk_fun(chunk, acc) do
  to_try = acc <> chunk
  {elements, acc} = chunk_by_newline(to_try, "\n", [], {0, byte_size(to_try)})
  {:cont, elements, acc}
end

defp chunk_by_newline(_string, _newline, elements, {_offset, 0}) do
  {Enum.reverse(elements), ""}
end

defp chunk_by_newline(string, newline, elements, {offset, length}) do
  case :binary.match(string, newline, scope: {offset, length}) do
    {newline_offset, newline_length} ->
      difference = newline_length + newline_offset - offset
      element = binary_part(string, offset, difference)

      chunk_by_newline(
        string,
        newline,
        [element | elements],
        {newline_offset + newline_length, length - difference}
      )
    :nomatch ->
      {Enum.reverse(elements), binary_part(string, offset, length)}
  end
end

defp to_line_stream_after_fun(""), do: {:cont, []}
defp to_line_stream_after_fun(acc), do: {:cont, [acc], []}
@spec get_bucket_acl(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket acl

@spec get_bucket_cors(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket cors

get_bucket_lifecycle(bucket)
@spec get_bucket_lifecycle(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket lifecycle

get_bucket_location(bucket)
@spec get_bucket_location(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket location

get_bucket_logging(bucket)
@spec get_bucket_logging(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket logging

get_bucket_notification(bucket)
@spec get_bucket_notification(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket notification

get_bucket_object_versions(bucket, opts \\ [])
@spec get_bucket_object_versions(bucket :: binary(), opts :: Keyword.t()) ::
  ExAws.Operation.S3.t()

Get bucket object versions

get_bucket_policy(bucket)
@spec get_bucket_policy(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket policy

get_bucket_replication(bucket)
@spec get_bucket_replication(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket replication

get_bucket_request_payment(bucket)
@spec get_bucket_request_payment(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket request payment configuration

get_bucket_tagging(bucket)
@spec get_bucket_tagging(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket tagging

get_bucket_versioning(bucket)
@spec get_bucket_versioning(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket versioning

get_bucket_website(bucket)
@spec get_bucket_website(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket website

get_object(bucket, object, opts \\ [])
@spec get_object(bucket :: binary(), object :: binary(), opts :: get_object_opts()) ::
  ExAws.Operation.S3.t()

Get an object from a bucket

Examples

S3.get_object("my-bucket", "image.png")
S3.get_object("my-bucket", "image.png", version_id: "ae57ekgXPpdiVZLkYVWoTAGRhGJ5swt9")

get_object_acl(bucket, object, opts \\ [])
@spec get_object_acl(bucket :: binary(), object :: binary(), opts :: Keyword.t()) ::
  ExAws.Operation.S3.t()

Get an object's access control policy

get_object_tagging(bucket, object, opts \\ [])
@spec get_object_tagging(bucket :: binary(), object :: binary(), opts :: Keyword.t()) ::
  ExAws.Operation.S3.t()

Get object tagging

get_object_torrent(bucket, object)
@spec get_object_torrent(bucket :: binary(), object :: binary()) ::
  ExAws.Operation.S3.t()

Get a torrent for a bucket

@spec head_bucket(bucket :: binary()) :: ExAws.Operation.S3.t()

Determine if a bucket exists

head_object(bucket, object, opts \\ [])
@spec head_object(bucket :: binary(), object :: binary(), opts :: head_object_opts()) ::
  ExAws.Operation.S3.t()

Determine if an object exists
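
A sketch of an existence check built on head_object; the exact success and error shapes depend on your ExAws version and HTTP client, so treat the matches below as an assumption:

def object_exists?(bucket, key) do
  case ExAws.S3.head_object(bucket, key) |> ExAws.request() do
    # Any 2xx response means the object exists; its metadata is in the response headers.
    {:ok, _response} -> true
    # ExAws typically surfaces a missing object as an :http_error tuple with status 404.
    {:error, {:http_error, 404, _response}} -> false
    {:error, reason} -> raise "head_object failed: #{inspect(reason)}"
  end
end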

initiate_multipart_upload(bucket, object, opts \\ [])
@spec initiate_multipart_upload(
  bucket :: binary(),
  object :: binary(),
  opts :: initiate_multipart_upload_opts()
) :: ExAws.Operation.S3.t()

Initiate a multipart upload

list_buckets(opts \\ [])
@spec list_buckets(opts :: Keyword.t()) :: ExAws.Operation.S3.t()

List buckets

list_multipart_uploads(bucket, opts \\ [])
@spec list_multipart_uploads(bucket :: binary(), opts :: Keyword.t()) ::
  ExAws.Operation.S3.t()

List multipart uploads for a bucket

list_objects(bucket, opts \\ [])
@spec list_objects(bucket :: binary(), opts :: list_objects_opts()) ::
  ExAws.Operation.S3.t()

List objects in bucket

Can be streamed.

Examples

S3.list_objects("my-bucket") |> ExAws.request

S3.list_objects("my-bucket") |> ExAws.stream!
S3.list_objects("my-bucket", delimiter: "/", prefix: "backup") |> ExAws.stream!
S3.list_objects("my-bucket", prefix: "some/inner/location/path") |> ExAws.stream!
S3.list_objects("my-bucket", max_keys: 5, encoding_type: "url") |> ExAws.stream!

list_objects_v2(bucket, opts \\ [])
@spec list_objects_v2(bucket :: binary(), opts :: list_objects_v2_opts()) ::
  ExAws.Operation.S3.t()

List objects in bucket

Can be streamed.

Examples

S3.list_objects_v2("my-bucket") |> ExAws.request

S3.list_objects_v2("my-bucket") |> ExAws.stream!
S3.list_objects_v2("my-bucket", delimiter: "/", prefix: "backup") |> ExAws.stream!
S3.list_objects_v2("my-bucket", prefix: "some/inner/location/path") |> ExAws.stream!
S3.list_objects_v2("my-bucket", max_keys: 5, encoding_type: "url") |> ExAws.stream!

list_parts(bucket, object, upload_id, opts \\ [])
@spec list_parts(
  bucket :: binary(),
  object :: binary(),
  upload_id :: binary(),
  opts :: Keyword.t()
) ::
  ExAws.Operation.S3.t()

List the parts of a multipart upload

options_object(bucket, object, origin, request_method, request_headers \\ [])
@spec options_object(
  bucket :: binary(),
  object :: binary(),
  origin :: binary(),
  request_method :: atom(),
  request_headers :: [binary()]
) :: ExAws.Operation.S3.t()

Determine the CORS configuration for an object

post_object_restore(bucket, object, number_of_days, opts \\ [])
@spec post_object_restore(
  bucket :: binary(),
  object :: binary(),
  number_of_days :: pos_integer(),
  opts :: [{:version_id, binary()}]
) :: ExAws.Operation.S3.t()

Restore an object to a particular version
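
For example (bucket, key, and version id are placeholders):

# Ask S3 to restore an archived object and keep the copy available for 5 days.
ExAws.S3.post_object_restore("my-bucket", "archive/report.csv", 5)
|> ExAws.request!()

# Restore a specific version via the :version_id option.
ExAws.S3.post_object_restore("my-bucket", "archive/report.csv", 5, version_id: "example-version-id")
|> ExAws.request!()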

presigned_post(config, bucket, key, opts \\ [])
@spec presigned_post(
  config :: map(),
  bucket :: binary(),
  key :: binary() | nil,
  opts :: presigned_post_opts()
) :: presigned_post_result()

Generate a pre-signed post for an object.

When the :virtual_host option is true, the bucket name is used as a subdomain of the default S3 host, producing a hostname like <bucket>.s3.<region>.amazonaws.com.

When the :s3_accelerate option is true, the bucket name is used as a subdomain of the s3-accelerate.amazonaws.com host.

When the :bucket_as_host option is true, the bucket name is used as the full hostname. In this case, bucket must be set to a full hostname, for example mybucket.example.com. :bucket_as_host must be passed together with virtual_host: true.
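
A minimal sketch of generating a presigned POST for a browser upload (the bucket, key, and expiry are illustrative):

config = ExAws.Config.new(:s3)

# Returns %{url: form_action, fields: hidden_form_fields}; a client includes the
# fields in a multipart form POST to `url` along with the file. :expires_in is in seconds.
%{url: url, fields: fields} =
  ExAws.S3.presigned_post(config, "my-bucket", "uploads/photo.png", expires_in: 600)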

presigned_url(config, http_method, bucket, object, opts \\ [])
@spec presigned_url(
  config :: map(),
  http_method :: atom(),
  bucket :: binary(),
  object :: binary(),
  opts :: presigned_url_opts()
) :: {:ok, binary()} | {:error, binary()}

Generate a pre-signed URL for an object. This is a local operation and does not check whether the bucket or object exists.

When the :virtual_host option is true, the bucket name is used as a subdomain of the default S3 host, producing a hostname like <bucket>.s3.<region>.amazonaws.com.

When the :s3_accelerate option is true, the bucket name is used as a subdomain of the s3-accelerate.amazonaws.com host.

When the :bucket_as_host option is true, the bucket name is used as the full hostname. In this case, bucket must be set to a full hostname, for example mybucket.example.com. :bucket_as_host must be passed together with virtual_host: true.

The :start_datetime option can be used to modify the start date of the presigned URL, which allows for cache-friendly URLs.

Additional (signed) query parameters can be added to the URL by setting the :query_params option to a list of {"key", "value"} pairs. This is useful if you are uploading parts of a multipart upload directly from the browser.

Signed headers can be added to the URL by setting the :headers option to a list of {"key", "value"} pairs.

Example

:s3
|> ExAws.Config.new([])
|> ExAws.S3.presigned_url(:get, "my-bucket", "my-object", [])
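
A sketch with common options; the key, expiry, and query parameters are illustrative (the multipart query parameters assume an upload id obtained from initiate_multipart_upload/3):

:s3
|> ExAws.Config.new([])
|> ExAws.S3.presigned_url(:put, "my-bucket", "path/on/s3",
  expires_in: 300,
  query_params: [{"partNumber", "1"}, {"uploadId", "example-upload-id"}]
)
#=> {:ok, "https://my-bucket.s3.amazonaws.com/path/on/s3?..."}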

put_bucket(bucket, region, opts \\ [])

Creates a bucket in the specified region
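
For example (bucket name and region are illustrative):

ExAws.S3.put_bucket("my-new-bucket", "eu-west-1") |> ExAws.request!()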

put_bucket_acl(bucket, grants)
@spec put_bucket_acl(bucket :: binary(), opts :: acl_opts()) :: ExAws.Operation.S3.t()

Update or create a bucket access control policy
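
A sketch using the acl_opts() type above; both a canned ACL and an explicit grant are shown (the email is illustrative):

# Canned ACL
ExAws.S3.put_bucket_acl("my-bucket", acl: :public_read) |> ExAws.request!()

# Explicit grant to a grantee identified by email
ExAws.S3.put_bucket_acl("my-bucket", grant_read: [email: "user@example.com"]) |> ExAws.request!()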

put_bucket_cors(bucket, cors_rules)
@spec put_bucket_cors(bucket :: binary(), cors_config :: [map()]) ::
  ExAws.Operation.S3.t()

Update or create a bucket CORS policy
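
A hedged sketch, assuming the rule maps use snake_case keys mirroring the S3 CORS rule fields (AllowedOrigins, AllowedMethods, AllowedHeaders, MaxAgeSeconds):

cors_rules = [
  %{
    allowed_origins: ["https://www.example.com"],
    allowed_methods: ["GET", "PUT"],
    allowed_headers: ["*"],
    max_age_seconds: 3600
  }
]

ExAws.S3.put_bucket_cors("my-bucket", cors_rules) |> ExAws.request!()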

put_bucket_lifecycle(bucket, lifecycle_rules)
@spec put_bucket_lifecycle(bucket :: binary(), lifecycle_rules :: [map()]) ::
  ExAws.Operation.S3.t()

Update or create a bucket lifecycle configuration

Lifecycle Rule Format

%{
  # Unique id for the rule (max. 255 chars, max. 1000 rules allowed)
  id: "123",

  # Disabled rules are not executed
  enabled: true,

  # Filters
  # Can be based on prefix, object tag(s), both or none
  filter: %{
    prefix: "prefix/",
    tags: %{
      "key" => "value"
    }
  },

  # Actions
  # https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#intro-lifecycle-rules-actions
  actions: %{
    transition: %{
      trigger: {:date, ~D[2020-03-26]}, # Date or days based
      storage: ""
    },
    expiration: %{
      trigger: {:days, 2}, # Date or days based
      expired_object_delete_marker: true
    },
    noncurrent_version_transition: %{
      trigger: {:days, 2}, # Only days based
      storage: ""
    },
    noncurrent_version_expiration: %{
      trigger: {:days, 2}, # Only days based
      newer_noncurrent_versions: 10
    },
    abort_incomplete_multipart_upload: %{
      trigger: {:days, 2} # Only days based
    }
  }
}

put_bucket_logging(bucket, logging_config)
@spec put_bucket_logging(bucket :: binary(), logging_config :: map()) :: no_return()

Update or create a bucket logging configuration

put_bucket_notification(bucket, notification_config)
@spec put_bucket_notification(bucket :: binary(), notification_config :: map()) ::
  no_return()

Update or create a bucket notification configuration

put_bucket_policy(bucket, policy)
@spec put_bucket_policy(bucket :: binary(), policy :: String.t()) ::
  ExAws.Operation.S3.t()

Update or create a bucket policy configuration
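
The policy is passed as an already-encoded JSON string; a sketch assuming the Jason codec (any JSON codec works):

policy =
  Jason.encode!(%{
    "Version" => "2012-10-17",
    "Statement" => [
      %{
        "Effect" => "Allow",
        "Principal" => "*",
        "Action" => ["s3:GetObject"],
        "Resource" => ["arn:aws:s3:::my-bucket/*"]
      }
    ]
  })

ExAws.S3.put_bucket_policy("my-bucket", policy) |> ExAws.request!()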

put_bucket_replication(bucket, replication_config)
@spec put_bucket_replication(bucket :: binary(), replication_config :: map()) ::
  no_return()

Update or create a bucket replication configuration

put_bucket_request_payment(bucket, payer)
@spec put_bucket_request_payment(
  bucket :: binary(),
  payer :: :requester | :bucket_owner
) :: no_return()

Update or create a bucket requestPayment configuration

put_bucket_tagging(bucket, tags)
@spec put_bucket_tagging(bucket :: binary(), tags :: map()) :: no_return()

Update or create a bucket tagging configuration
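
For example (tag keys and values are illustrative):

ExAws.S3.put_bucket_tagging("my-bucket", %{"environment" => "production", "team" => "data"})
|> ExAws.request!()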

put_bucket_versioning(bucket, version_config)
@spec put_bucket_versioning(bucket :: binary(), version_config :: binary()) ::
  ExAws.Operation.S3.t()

Update or create a bucket versioning configuration

Example

ExAws.S3.put_bucket_versioning(
 "my-bucket",
 "<VersioningConfiguration><Status>Enabled</Status></VersioningConfiguration>"
)
|> ExAws.request()

put_bucket_website(bucket, website_config)
@spec put_bucket_website(bucket :: binary(), website_config :: binary()) ::
  no_return()

Update or create a bucket website configuration
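
The website configuration is passed as raw XML, in the same style as the versioning example above (the document names are illustrative):

website_config = """
<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <IndexDocument><Suffix>index.html</Suffix></IndexDocument>
  <ErrorDocument><Key>error.html</Key></ErrorDocument>
</WebsiteConfiguration>
"""

ExAws.S3.put_bucket_website("my-bucket", website_config) |> ExAws.request!()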

put_object(bucket, object, body, opts \\ [])
@spec put_object(
  bucket :: binary(),
  object :: binary(),
  body :: binary(),
  opts :: put_object_opts()
) ::
  ExAws.Operation.S3.t()

Create an object within a bucket
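
For example, with a few common options from put_object_opts() (values are illustrative):

csv_contents = "id,name\n1,Ada\n"

ExAws.S3.put_object("my-bucket", "reports/users.csv", csv_contents,
  content_type: "text/csv",
  cache_control: "max-age=3600",
  meta: [source: "nightly-export"],
  storage_class: :standard_ia
)
|> ExAws.request!()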

put_object_acl(bucket, object, acl)
@spec put_object_acl(bucket :: binary(), object :: binary(), acl :: acl_opts()) ::
  ExAws.Operation.S3.t()

Create or update an object's access control policy

put_object_copy(dest_bucket, dest_object, src_bucket, src_object, opts \\ [])
@spec put_object_copy(
  dest_bucket :: binary(),
  dest_object :: binary(),
  src_bucket :: binary(),
  src_object :: binary(),
  opts :: put_object_copy_opts()
) :: ExAws.Operation.S3.t()

Copy an object
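
For example (buckets and keys are illustrative):

ExAws.S3.put_object_copy("backup-bucket", "2024/report.csv", "my-bucket", "reports/report.csv")
|> ExAws.request!()

# Replace metadata during the copy via put_object_copy_opts().
ExAws.S3.put_object_copy("backup-bucket", "2024/report.csv", "my-bucket", "reports/report.csv",
  metadata_directive: :REPLACE,
  content_type: "text/csv",
  meta: [archived: "true"]
)
|> ExAws.request!()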

put_object_tagging(bucket, object, tags, opts \\ [])
@spec put_object_tagging(
  bucket :: binary(),
  object :: binary(),
  tags :: Access.t(),
  opts :: Keyword.t()
) :: ExAws.Operation.S3.t()

Add a set of tags to an existing object

Options

  • :version_id - The versionId of the object that the tag-set will be added to.
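
For example (the version id is a placeholder):

ExAws.S3.put_object_tagging("my-bucket", "path/to/object", %{"environment" => "production"})
|> ExAws.request!()

# Tag a specific version of the object.
ExAws.S3.put_object_tagging("my-bucket", "path/to/object", [environment: "production"],
  version_id: "example-version-id"
)
|> ExAws.request!()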

upload(source, bucket, path, opts \\ [])
@spec upload(
  source :: Enumerable.t(),
  bucket :: String.t(),
  path :: String.t(),
  opts :: upload_opts()
) :: ExAws.S3.Upload.t()

Multipart upload to S3.

Handles initialization, uploading parts concurrently, and multipart upload completion.

Uploading a stream

Streams that emit binaries may be uploaded directly to S3. Each binary is uploaded as a single part, so every part except the last must be at least 5 megabytes in size. The S3.Upload.stream_file helper takes care of reading the file in 5 megabyte chunks.

"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request! #=> :done

Options

These options are specific to this function:

  • :max_concurrency - only applies when uploading a stream. Sets the maximum number of tasks to run at the same time (see Task.async_stream/5). Defaults to 4.
  • :timeout - the maximum amount of time (in milliseconds) each task is allowed to execute for (see Task.async_stream/5). Defaults to 30_000.
  • :refetch_auth_on_request - re-fetch the auth from the library config on each request in the upload process instead of using the initial auth. This fixes an edge case when uploading large files using a strategy from ex_aws_sts that provides short-lived tokens, where an upload could fail if the token expires before the upload completes. Defaults to false.

All other options (e.g. :content_type) are passed through to ExAws.S3.initiate_multipart_upload/3.

upload_part(bucket, object, upload_id, part_number, body, opts \\ [])
@spec upload_part(
  bucket :: binary(),
  object :: binary(),
  upload_id :: binary(),
  part_number :: pos_integer(),
  body :: binary(),
  opts :: [encryption_opts() | {:expect, binary()}]
) :: ExAws.Operation.S3.t()

Upload a part for a multipart upload
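
For reference, a hedged end-to-end sketch of a manual multipart upload using initiate_multipart_upload/3, upload_part/6, and complete_multipart_upload/4. It assumes :sweet_xml is installed so the initiate response body is parsed into a map with :upload_id, and that each part's ETag can be read from the "ETag" response header; for most cases ExAws.S3.upload/4 above is the simpler path.

bucket = "my-bucket"
key = "path/on/s3/big-file.bin"

# 1. Start the multipart upload and grab the upload id from the parsed body.
%{body: %{upload_id: upload_id}} =
  ExAws.S3.initiate_multipart_upload(bucket, key) |> ExAws.request!()

# 2. Upload each part (at least 5 MB for every part except the last),
#    collecting {part_number, etag} pairs for the completion call.
parts =
  "path/to/big-file.bin"
  |> File.stream!([], 5 * 1024 * 1024)
  |> Stream.with_index(1)
  |> Enum.map(fn {chunk, part_number} ->
    %{headers: headers} =
      ExAws.S3.upload_part(bucket, key, upload_id, part_number, chunk) |> ExAws.request!()

    {_, etag} = List.keyfind(headers, "ETag", 0)
    {part_number, etag}
  end)

# 3. Finish the upload by sending the ordered list of parts.
ExAws.S3.complete_multipart_upload(bucket, key, upload_id, parts) |> ExAws.request!()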

upload_part_copy(dest_bucket, dest_object, src_bucket, src_object, upload_id, part_number, source_range, opts \\ [])
@spec upload_part_copy(
  dest_bucket :: binary(),
  dest_object :: binary(),
  src_bucket :: binary(),
  src_object :: binary(),
  upload_id :: binary(),
  part_number :: pos_integer(),
  source_range :: Range.t(),
  opts :: upload_part_copy_opts()
) :: ExAws.Operation.S3.t()

Upload a part for a multipart copy