ExAws.S3 (ExAws.S3 v2.5.6)


Service module for https://github.com/ex-aws/ex_aws

Installation

The package can be installed by adding :ex_aws_s3 to your list of dependencies in mix.exs along with :ex_aws, your preferred JSON codec / HTTP client, and optionally :sweet_xml to support operations like list_objects that require XML parsing.

def deps do
  [
    {:ex_aws, "~> 2.0"},
    {:ex_aws_s3, "~> 2.0"},
    {:poison, "~> 3.0"},
    {:hackney, "~> 1.9"},
    {:sweet_xml, "~> 0.6.6"} # optional dependency
  ]
end

Operations on AWS S3

Basic Operations

The vast majority of operations here represent a single operation on S3.

Examples

S3.list_objects("my-bucket") |> ExAws.request! #=> %{body: [list, of, objects]}
S3.list_objects("my-bucket") |> ExAws.stream! |> Enum.to_list #=> [list, of, objects]

S3.put_object("my-bucket", "path/to/object", contents) |> ExAws.request!

Higher Level Operations

There are also some operations which operate at a higher level to make it easier to download and upload very large files.

Multipart uploads

"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request #=> {:ok, :done}

See ExAws.S3.upload/4 for options.

Download large file to disk

S3.download_file("my-bucket", "path/on/s3", "path/to/dest/file")
|> ExAws.request #=> {:ok, :done}

More high level functionality

Task.async_stream makes some high-level flows simple enough that they don't need explicit ExAws support.

For example, here is how to concurrently upload many files.

upload_file = fn {src_path, dest_path} ->
  S3.put_object("my-bucket", dest_path, File.read!(src_path))
  |> ExAws.request!
end

paths = %{"path/to/src0" => "path/to/dest0", "path/to/src1" => "path/to/dest1"}

paths
|> Task.async_stream(upload_file, max_concurrency: 10)
|> Stream.run

Bucket as host functionality

Examples

opts = [virtual_host: true, bucket_as_host: true]

ExAws.Config.new(:s3)
|> S3.presigned_url(:get, "bucket.custom-domain.com", "foo.txt", opts)

{:ok, "https://bucket.custom-domain.com/foo.txt"}

Configuration

The scheme, host, and port can be configured to hit alternate endpoints.

For example, this is how to use a local minio instance:

# config.exs
config :ex_aws, :s3,
  scheme: "http://",
  host: "localhost",
  port: 9000

An alternate content_hash_algorithm can be specified as well. The default is :md5. It may be necessary to change this when operating in a FIPS-compliant environment where MD5 isn't available, for instance. At this time, only :sha256, :sha, and :md5 are supported by both Erlang and S3.

# config.exs
config :ex_aws_s3, :content_hash_algorithm, :sha256

Summary

Functions

Delete a bucket

Delete a bucket cors

Delete a bucket lifecycle

Delete a bucket policy

Delete a bucket replication

Delete a bucket tagging

Delete a bucket website

Delete multiple objects within a bucket

Delete an object within a bucket

Remove the entire tag set from the specified object

Download an S3 object to a file.

Get bucket acl

Get bucket cors

Get bucket lifecycle

Get bucket location

Get bucket logging

Get bucket notification

Get bucket policy

Get bucket replication

Get bucket request payment configuration

Get bucket tagging

Get bucket versioning

Get bucket website

Get an object from a bucket

Get an object's access control policy

Get a torrent for a bucket

Determine if a bucket exists

Determine if an object exists

List multipart uploads for a bucket

List objects in bucket

List objects in bucket

List the parts of a multipart upload

Restore an object to a particular version

Generate a pre-signed post for an object.

Generate a pre-signed URL for an object. This is a local operation and does not check whether the bucket or object exists.

Creates a bucket in the specified region

Update or create a bucket access control policy

Update or create a bucket CORS policy

Update or create a bucket lifecycle configuration

Update or create a bucket logging configuration

Update or create a bucket notification configuration

Update or create a bucket policy configuration

Update or create a bucket replication configuration

Update or create a bucket requestPayment configuration

Update or create a bucket tagging configuration

Update or create a bucket versioning configuration

Update or create a bucket website configuration

Create an object within a bucket

Create or update an object's access control policy

Add a set of tags to an existing object

Types

acl_opt()

@type acl_opt() :: {:acl, canned_acl()} | grant()

acl_opts()

@type acl_opts() :: [acl_opt()]

amz_meta_opts()

@type amz_meta_opts() :: [{atom(), binary()} | {binary(), binary()}, ...]

canned_acl()

@type canned_acl() ::
  :private
  | :public_read
  | :public_read_write
  | :authenticated_read
  | :bucket_owner_read
  | :bucket_owner_full_control

customer_encryption_opts()

@type customer_encryption_opts() :: [
  customer_algorithm: binary(),
  customer_key: binary(),
  customer_key_md5: binary()
]

delete_object_opt()

@type delete_object_opt() ::
  {:x_amz_mfa, binary()}
  | {:x_amz_request_payer, binary()}
  | {:x_amz_bypass_governance_retention, binary()}
  | {:x_amz_expected_bucket_owner, binary()}
  | {:version_id, binary()}

delete_object_opts()

@type delete_object_opts() :: [delete_object_opt()]

download_file_opts()

@type download_file_opts() :: [
  max_concurrency: pos_integer(),
  chunk_size: pos_integer(),
  timeout: pos_integer()
]

encryption_opts()

@type encryption_opts() ::
  binary() | [{:aws_kms_key_id, binary()}] | customer_encryption_opts()

expires_in_seconds()

@type expires_in_seconds() :: non_neg_integer()

get_object_opts()

@type get_object_opts() :: [
  {:response, get_object_response_opts()}
  | {:version_id, binary()}
  | head_object_opt()
]

get_object_response_opts()

@type get_object_response_opts() :: [
  content_language: binary(),
  expires: binary(),
  cache_control: binary(),
  content_disposition: binary(),
  content_encoding: binary()
]

grant()

@type grant() ::
  {:grant_read, grantee()}
  | {:grant_read_acp, grantee()}
  | {:grant_write_acp, grantee()}
  | {:grant_full_control, grantee()}

grantee()

@type grantee() :: [email: binary(), id: binary(), uri: binary()]

hash_algorithm()

@type hash_algorithm() :: :sha | :sha256 | :md5

The hashing algorithms that both S3 and Erlang support.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html https://www.erlang.org/doc/man/crypto.html#type-hash_algorithm

head_object_opt()

@type head_object_opt() ::
  {:encryption, customer_encryption_opts()}
  | {:range, binary()}
  | {:version_id, binary()}
  | {:if_modified_since, binary()}
  | {:if_unmodified_since, binary()}
  | {:if_match, binary()}
  | {:if_none_match, binary()}

head_object_opts()

@type head_object_opts() :: [head_object_opt()]

initiate_multipart_upload_opt()

@type initiate_multipart_upload_opt() ::
  {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_type, binary()}
  | {:expires, binary()}
  | {:website_redirect_location, binary()}
  | {:encryption, encryption_opts()}
  | {:meta, amz_meta_opts()}
  | acl_opt()
  | storage_class_opt()

initiate_multipart_upload_opts()

@type initiate_multipart_upload_opts() :: [initiate_multipart_upload_opt()]

list_objects_opts()

@type list_objects_opts() :: [
  delimiter: binary(),
  marker: binary(),
  prefix: binary(),
  encoding_type: binary(),
  max_keys: 0..1000,
  stream_prefixes: boolean()
]

list_objects_v2_opts()

@type list_objects_v2_opts() :: [
  delimiter: binary(),
  prefix: binary(),
  encoding_type: binary(),
  max_keys: 0..1000,
  stream_prefixes: boolean(),
  continuation_token: binary(),
  fetch_owner: boolean(),
  start_after: binary()
]

presigned_post_opts()

@type presigned_post_opts() :: [
  expires_in: expires_in_seconds(),
  acl: binary() | {:starts_with, binary()},
  content_length_range: [integer()],
  key: binary() | {:starts_with, binary()},
  custom_conditions: [any()],
  virtual_host: boolean(),
  s3_accelerate: boolean(),
  bucket_as_host: boolean()
]

presigned_post_result()

@type presigned_post_result() :: %{
  url: binary(),
  fields: %{required(binary()) => binary()}
}

presigned_url_opts()

@type presigned_url_opts() :: [
  expires_in: expires_in_seconds(),
  virtual_host: boolean(),
  s3_accelerate: boolean(),
  query_params: [{binary(), binary()}],
  headers: [{binary(), binary()}],
  bucket_as_host: boolean(),
  start_datetime: Calendar.naive_datetime() | :calendar.datetime()
]

put_object_copy_opts()

@type put_object_copy_opts() :: [
  {:metadata_directive, :COPY | :REPLACE}
  | {:copy_source_if_modified_since, binary()}
  | {:copy_source_if_unmodified_since, binary()}
  | {:copy_source_if_match, binary()}
  | {:copy_source_if_none_match, binary()}
  | {:website_redirect_location, binary()}
  | {:destination_encryption, encryption_opts()}
  | {:source_encryption, customer_encryption_opts()}
  | {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_length, binary()}
  | {:content_type, binary()}
  | {:expect, binary()}
  | {:expires, binary()}
  | {:website_redirect_location, binary()}
  | {:meta, amz_meta_opts()}
  | acl_opt()
  | storage_class_opt()
]

put_object_opts()

@type put_object_opts() :: [
  {:cache_control, binary()}
  | {:content_disposition, binary()}
  | {:content_encoding, binary()}
  | {:content_length, binary()}
  | {:content_type, binary()}
  | {:expect, binary()}
  | {:expires, binary()}
  | {:website_redirect_location, binary()}
  | {:encryption, encryption_opts()}
  | {:meta, amz_meta_opts()}
  | acl_opt()
  | storage_class_opt()
]

storage_class()

@type storage_class() ::
  :standard
  | :reduced_redundancy
  | :standard_ia
  | :onezone_ia
  | :intelligent_tiering
  | :glacier
  | :deep_archive
  | :outposts
  | :glacier_ir
  | :snow

storage_class_opt()

@type storage_class_opt() :: {:storage_class, storage_class()}

upload_opt()

@type upload_opt() ::
  {:max_concurrency, pos_integer()}
  | {:timeout, pos_integer()}
  | {:refetch_auth_on_request, boolean()}
  | initiate_multipart_upload_opt()

upload_opts()

@type upload_opts() :: [upload_opt()]

upload_part_copy_opts()

@type upload_part_copy_opts() :: [
  copy_source_if_modified_since: binary(),
  copy_source_if_unmodified_since: binary(),
  copy_source_if_match: binary(),
  copy_source_if_none_match: binary(),
  destination_encryption: encryption_opts(),
  source_encryption: customer_encryption_opts()
]

Functions

abort_multipart_upload(bucket, object, upload_id)

@spec abort_multipart_upload(
  bucket :: binary(),
  object :: binary(),
  upload_id :: binary()
) ::
  ExAws.Operation.S3.t()

Abort a multipart upload

calculate_content_header(content)

@spec calculate_content_header(iodata()) :: map()

complete_multipart_upload(bucket, object, upload_id, parts)

@spec complete_multipart_upload(
  bucket :: binary(),
  object :: binary(),
  upload_id :: binary(),
  parts :: [{binary() | pos_integer(), binary()}, ...]
) :: ExAws.Operation.S3.t()

Complete a multipart upload

delete_all_objects(bucket, objects, opts \\ [])

@spec delete_all_objects(
  bucket :: binary(),
  objects :: [binary() | {binary(), binary()}, ...] | Enumerable.t(),
  opts :: [{:quiet, true}]
) :: ExAws.Operation.S3DeleteAllObjects.t()

Delete all listed objects.

When executed, this operation makes repeated delete_multiple_objects requests, deleting 1000 objects at a time, until all listed objects are deleted.

Can be streamed.

Example

stream = ExAws.S3.list_objects(bucket(), prefix: "some/prefix") |> ExAws.stream!() |> Stream.map(& &1.key)
ExAws.S3.delete_all_objects(bucket(), stream) |> ExAws.request()

delete_bucket(bucket)

@spec delete_bucket(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket

delete_bucket_cors(bucket)

@spec delete_bucket_cors(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket cors

delete_bucket_lifecycle(bucket)

@spec delete_bucket_lifecycle(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket lifecycle

delete_bucket_policy(bucket)

@spec delete_bucket_policy(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket policy

delete_bucket_replication(bucket)

@spec delete_bucket_replication(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket replication

delete_bucket_tagging(bucket)

@spec delete_bucket_tagging(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket tagging

delete_bucket_website(bucket)

@spec delete_bucket_website(bucket :: binary()) :: ExAws.Operation.S3.t()

Delete a bucket website

delete_multiple_objects(bucket, objects, opts \\ [])

@spec delete_multiple_objects(
  bucket :: binary(),
  objects :: [binary() | {binary(), binary()}, ...],
  opts :: [{:quiet, true}]
) :: ExAws.Operation.S3.t()

Delete multiple objects within a bucket

Limited to 1000 objects.
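A minimal sketch of deleting a handful of keys in one request; the bucket and key names are hypothetical. Objects may be given as plain keys or, for versioned buckets, as {key, version_id} tuples.

```elixir
# Hypothetical keys; pass quiet: true to suppress per-key results in the response.
keys = ["logs/2023-01-01.txt", "logs/2023-01-02.txt"]

"my-bucket"
|> ExAws.S3.delete_multiple_objects(keys, quiet: true)
|> ExAws.request!()
```

For more than 1000 objects, use delete_all_objects/3, which batches the requests for you.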

delete_object(bucket, object, opts \\ [])

@spec delete_object(
  bucket :: binary(),
  object :: binary(),
  opts :: delete_object_opts()
) ::
  ExAws.Operation.S3.t()

Delete an object within a bucket

delete_object_tagging(bucket, object, opts \\ [])

@spec delete_object_tagging(
  bucket :: binary(),
  object :: binary(),
  opts :: Keyword.t()
) ::
  ExAws.Operation.S3.t()

Remove the entire tag set from the specified object

download_file(bucket, path, dest, opts \\ [])

@spec download_file(
  bucket :: binary(),
  path :: binary(),
  dest :: :memory | binary(),
  opts :: download_file_opts()
) :: ExAws.S3.Download.t()

Download an S3 object to a file.

This operation downloads multiple parts of an S3 object concurrently, allowing you to maximize throughput.

Defaults to a concurrency of 8, chunk size of 1MB, and a timeout of 1 minute.
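The defaults above can be overridden through download_file_opts. A sketch, with hypothetical bucket and paths:

```elixir
# Download with 4 concurrent requests, 4 MB chunks, and a 2-minute per-chunk timeout.
ExAws.S3.download_file("my-bucket", "path/on/s3", "/tmp/local-file",
  max_concurrency: 4,
  chunk_size: 4 * 1024 * 1024,
  timeout: 120_000
)
|> ExAws.request!()
```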

Streaming to memory

To use ExAws.stream!/2, the third parameter (dest) must be set to :memory. For example:

ExAws.S3.download_file("example-bucket", "path/to/file.txt", :memory)
|> ExAws.stream!()

Note that this won't start fetching anything immediately since it returns an Elixir Stream.

Streaming by line

Streaming by line can be done with Stream.chunk_while/4. Here is an example:

# Returns a stream which grabs chunks of data from S3 as specified in `opts`
# but processes the stream line by line. For example, the default chunk
# size of 1MB means requests for bytes from S3 will ask for 1MB sizes (to be downloaded)
# however each element of the stream will be a single line.
def generate_stream(bucket, file, opts \\ []) do
  bucket
  |> ExAws.S3.download_file(file, :memory, opts)
  |> ExAws.stream!()
  # Uncomment if you need to gunzip (and add dependency :stream_gzip)
  # |> StreamGzip.gunzip()
  |> Stream.chunk_while("", &chunk_fun/2, &to_line_stream_after_fun/1)
  |> Stream.concat()
end

def chunk_fun(chunk, acc) do
  to_try = acc <> chunk
  {elements, acc} = chunk_by_newline(to_try, "\n", [], {0, byte_size(to_try)})
  {:cont, elements, acc}
end

defp chunk_by_newline(_string, _newline, elements, {_offset, 0}) do
  {Enum.reverse(elements), ""}
end

defp chunk_by_newline(string, newline, elements, {offset, length}) do
  case :binary.match(string, newline, scope: {offset, length}) do
    {newline_offset, newline_length} ->
      difference = newline_length + newline_offset - offset
      element = binary_part(string, offset, difference)

      chunk_by_newline(
        string,
        newline,
        [element | elements],
        {newline_offset + newline_length, length - difference}
      )
    :nomatch ->
      {Enum.reverse(elements), binary_part(string, offset, length)}
  end
end

defp to_line_stream_after_fun(""), do: {:cont, []}
defp to_line_stream_after_fun(acc), do: {:cont, [acc], []}

get_bucket_acl(bucket)

@spec get_bucket_acl(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket acl

get_bucket_cors(bucket)

@spec get_bucket_cors(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket cors

get_bucket_lifecycle(bucket)

@spec get_bucket_lifecycle(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket lifecycle

get_bucket_location(bucket)

@spec get_bucket_location(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket location

get_bucket_logging(bucket)

@spec get_bucket_logging(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket logging

get_bucket_notification(bucket)

@spec get_bucket_notification(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket notification

get_bucket_object_versions(bucket, opts \\ [])

@spec get_bucket_object_versions(bucket :: binary(), opts :: Keyword.t()) ::
  ExAws.Operation.S3.t()

Get bucket object versions

get_bucket_policy(bucket)

@spec get_bucket_policy(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket policy

get_bucket_replication(bucket)

@spec get_bucket_replication(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket replication

get_bucket_request_payment(bucket)

@spec get_bucket_request_payment(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket request payment configuration

get_bucket_tagging(bucket)

@spec get_bucket_tagging(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket tagging

get_bucket_versioning(bucket)

@spec get_bucket_versioning(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket versioning

get_bucket_website(bucket)

@spec get_bucket_website(bucket :: binary()) :: ExAws.Operation.S3.t()

Get bucket website

get_object(bucket, object, opts \\ [])

@spec get_object(bucket :: binary(), object :: binary(), opts :: get_object_opts()) ::
  ExAws.Operation.S3.t()

Get an object from a bucket

Examples

S3.get_object("my-bucket", "image.png")
S3.get_object("my-bucket", "image.png", version_id: "ae57ekgXPpdiVZLkYVWoTAGRhGJ5swt9")

get_object_acl(bucket, object, opts \\ [])

@spec get_object_acl(bucket :: binary(), object :: binary(), opts :: Keyword.t()) ::
  ExAws.Operation.S3.t()

Get an object's access control policy

get_object_tagging(bucket, object, opts \\ [])

@spec get_object_tagging(bucket :: binary(), object :: binary(), opts :: Keyword.t()) ::
  ExAws.Operation.S3.t()

Get object tagging

get_object_torrent(bucket, object)

@spec get_object_torrent(bucket :: binary(), object :: binary()) ::
  ExAws.Operation.S3.t()

Get a torrent for a bucket

head_bucket(bucket)

@spec head_bucket(bucket :: binary()) :: ExAws.Operation.S3.t()

Determine if a bucket exists

head_object(bucket, object, opts \\ [])

@spec head_object(bucket :: binary(), object :: binary(), opts :: head_object_opts()) ::
  ExAws.Operation.S3.t()

Determine if an object exists
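A common pattern is to turn the HEAD request into a boolean-style existence check. A sketch, assuming a hypothetical bucket and key; the error shape follows ex_aws's {:http_error, status, response} convention:

```elixir
# HEAD the object: 200 means it exists, 404 means it doesn't.
case ExAws.S3.head_object("my-bucket", "maybe-missing.txt") |> ExAws.request() do
  {:ok, _response} -> :exists
  {:error, {:http_error, 404, _response}} -> :missing
  {:error, reason} -> {:error, reason}
end
```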

initiate_multipart_upload(bucket, object, opts \\ [])

@spec initiate_multipart_upload(
  bucket :: binary(),
  object :: binary(),
  opts :: initiate_multipart_upload_opts()
) :: ExAws.Operation.S3.t()

Initiate a multipart upload
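For most use cases, ExAws.S3.upload/4 handles the whole multipart flow. The manual flow below is a minimal sketch of how the pieces fit together; the bucket, object name, and ETag-header extraction are illustrative assumptions, and every part except the last must be at least 5 MB:

```elixir
# Illustrative 5 MB part body.
part_body = :crypto.strong_rand_bytes(5 * 1024 * 1024)

# 1. Initiate the upload and capture the upload id.
{:ok, %{body: %{upload_id: upload_id}}} =
  ExAws.S3.initiate_multipart_upload("my-bucket", "big-file.bin")
  |> ExAws.request()

# 2. Upload part number 1 and read its ETag from the response headers.
{:ok, %{headers: headers}} =
  ExAws.S3.upload_part("my-bucket", "big-file.bin", upload_id, 1, part_body)
  |> ExAws.request()

etag = Enum.find_value(headers, fn {k, v} -> if String.downcase(k) == "etag", do: v end)

# 3. Complete the upload with the list of {part_number, etag} pairs.
ExAws.S3.complete_multipart_upload("my-bucket", "big-file.bin", upload_id, [{1, etag}])
|> ExAws.request!()
```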

list_buckets(opts \\ [])

@spec list_buckets(opts :: Keyword.t()) :: ExAws.Operation.S3.t()

List buckets

list_multipart_uploads(bucket, opts \\ [])

@spec list_multipart_uploads(bucket :: binary(), opts :: Keyword.t()) ::
  ExAws.Operation.S3.t()

List multipart uploads for a bucket

list_objects(bucket, opts \\ [])

@spec list_objects(bucket :: binary(), opts :: list_objects_opts()) ::
  ExAws.Operation.S3.t()

List objects in bucket

Can be streamed.

Examples

S3.list_objects("my-bucket") |> ExAws.request

S3.list_objects("my-bucket") |> ExAws.stream!
S3.list_objects("my-bucket", delimiter: "/", prefix: "backup") |> ExAws.stream!
S3.list_objects("my-bucket", prefix: "some/inner/location/path") |> ExAws.stream!
S3.list_objects("my-bucket", max_keys: 5, encoding_type: "url") |> ExAws.stream!

list_objects_v2(bucket, opts \\ [])

@spec list_objects_v2(bucket :: binary(), opts :: list_objects_v2_opts()) ::
  ExAws.Operation.S3.t()

List objects in bucket

Can be streamed.

Examples

S3.list_objects_v2("my-bucket") |> ExAws.request

S3.list_objects_v2("my-bucket") |> ExAws.stream!
S3.list_objects_v2("my-bucket", delimiter: "/", prefix: "backup") |> ExAws.stream!
S3.list_objects_v2("my-bucket", prefix: "some/inner/location/path") |> ExAws.stream!
S3.list_objects_v2("my-bucket", max_keys: 5, encoding_type: "url") |> ExAws.stream!

list_parts(bucket, object, upload_id, opts \\ [])

@spec list_parts(
  bucket :: binary(),
  object :: binary(),
  upload_id :: binary(),
  opts :: Keyword.t()
) ::
  ExAws.Operation.S3.t()

List the parts of a multipart upload

options_object(bucket, object, origin, request_method, request_headers \\ [])

@spec options_object(
  bucket :: binary(),
  object :: binary(),
  origin :: binary(),
  request_method :: atom(),
  request_headers :: [binary()]
) :: ExAws.Operation.S3.t()

Determine the CORS configuration for an object

post_object_restore(bucket, object, number_of_days, opts \\ [])

@spec post_object_restore(
  bucket :: binary(),
  object :: binary(),
  number_of_days :: pos_integer(),
  opts :: [{:version_id, binary()}]
) :: ExAws.Operation.S3.t()

Restore an object to a particular version

presigned_post(config, bucket, key, opts \\ [])

@spec presigned_post(
  config :: map(),
  bucket :: binary(),
  key :: binary() | nil,
  opts :: presigned_post_opts()
) :: presigned_post_result()

Generate a pre-signed post for an object.

When the :virtual_host option is true, the bucket name is used as part of the hostname, combined with the default S3 host, producing a URL like <bucket>.s3.<region>.amazonaws.com.

When the :s3_accelerate option is true, the bucket name is combined with the s3-accelerate.amazonaws.com host.

When the :bucket_as_host option is true, the bucket name is used as the full hostname. In this case, bucket must be set to a full hostname, for example mybucket.example.com. :bucket_as_host must be passed together with virtual_host: true.

presigned_url(config, http_method, bucket, object, opts \\ [])

@spec presigned_url(
  config :: map(),
  http_method :: atom(),
  bucket :: binary(),
  object :: binary(),
  opts :: presigned_url_opts()
) :: {:ok, binary()} | {:error, binary()}

Generate a pre-signed URL for an object. This is a local operation and does not check whether the bucket or object exists.

When the :virtual_host option is true, the bucket name is used as part of the hostname, combined with the default S3 host, producing a URL like <bucket>.s3.<region>.amazonaws.com.

When the :s3_accelerate option is true, the bucket name is combined with the s3-accelerate.amazonaws.com host.

When the :bucket_as_host option is true, the bucket name is used as the full hostname. In this case, bucket must be set to a full hostname, for example mybucket.example.com. :bucket_as_host must be passed together with virtual_host: true.

The :start_datetime option can be used to modify the start date of the presigned URL, which allows for cache-friendly URLs.

Additional (signed) query parameters can be added to the URL by setting the :query_params option to a list of {"key", "value"} pairs. This is useful if you are uploading parts of a multipart upload directly from the browser.

Signed headers can be added to the URL by setting the :headers option to a list of {"key", "value"} pairs.

Example

:s3
|> ExAws.Config.new([])
|> ExAws.S3.presigned_url(:get, "my-bucket", "my-object", [])
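A sketch of a presigned PUT URL with signed query parameters, as used when uploading a multipart part from the browser; the bucket, object, and upload id are hypothetical:

```elixir
# Presigned upload URL for part 1 of a multipart upload, valid for 5 minutes.
{:ok, url} =
  :s3
  |> ExAws.Config.new([])
  |> ExAws.S3.presigned_url(:put, "my-bucket", "my-object",
    expires_in: 300,
    query_params: [{"partNumber", "1"}, {"uploadId", "example-upload-id"}]
  )
```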

put_bucket(bucket, region, opts \\ [])

Creates a bucket in the specified region

put_bucket_acl(bucket, grants)

@spec put_bucket_acl(bucket :: binary(), opts :: acl_opts()) :: ExAws.Operation.S3.t()

Update or create a bucket access control policy

put_bucket_cors(bucket, cors_rules)

@spec put_bucket_cors(bucket :: binary(), cors_config :: [map()]) ::
  ExAws.Operation.S3.t()

Update or create a bucket CORS policy

put_bucket_lifecycle(bucket, lifecycle_rules)

@spec put_bucket_lifecycle(bucket :: binary(), lifecycle_rules :: [map()]) ::
  ExAws.Operation.S3.t()

Update or create a bucket lifecycle configuration

Lifecycle Rule Format

%{
  # Unique id for the rule (max. 255 chars, max. 1000 rules allowed)
  id: "123",

  # Disabled rules are not executed
  enabled: true,

  # Filters
  # Can be based on prefix, object tag(s), both or none
  filter: %{
    prefix: "prefix/",
    tags: %{
      "key" => "value"
    }
  },

  # Actions
  # https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#intro-lifecycle-rules-actions
  actions: %{
    transition: %{
      trigger: {:date, ~D[2020-03-26]}, # Date or days based
      storage: ""
    },
    expiration: %{
      trigger: {:days, 2}, # Date or days based
      expired_object_delete_marker: true
    },
    noncurrent_version_transition: %{
      trigger: {:days, 2}, # Only days based
      storage: ""
    },
    noncurrent_version_expiration: %{
      trigger: {:days, 2}, # Only days based
      newer_noncurrent_versions: 10
    },
    abort_incomplete_multipart_upload: %{
      trigger: {:days, 2} # Only days based
    }
  }
}
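Putting the rule format to use, here is a minimal sketch of a single expiration rule; the bucket name, rule id, and prefix are hypothetical, and only the fields needed for this rule are set:

```elixir
# Expire objects under logs/ 30 days after creation.
rule = %{
  id: "expire-old-logs",
  enabled: true,
  filter: %{prefix: "logs/"},
  actions: %{
    expiration: %{trigger: {:days, 30}}
  }
}

ExAws.S3.put_bucket_lifecycle("my-bucket", [rule]) |> ExAws.request!()
```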

put_bucket_logging(bucket, logging_config)

@spec put_bucket_logging(bucket :: binary(), logging_config :: map()) :: no_return()

Update or create a bucket logging configuration

put_bucket_notification(bucket, notification_config)

@spec put_bucket_notification(bucket :: binary(), notification_config :: map()) ::
  no_return()

Update or create a bucket notification configuration

put_bucket_policy(bucket, policy)

@spec put_bucket_policy(bucket :: binary(), policy :: String.t()) ::
  ExAws.Operation.S3.t()

Update or create a bucket policy configuration

put_bucket_replication(bucket, replication_config)

@spec put_bucket_replication(bucket :: binary(), replication_config :: map()) ::
  no_return()

Update or create a bucket replication configuration

put_bucket_request_payment(bucket, payer)

@spec put_bucket_request_payment(
  bucket :: binary(),
  payer :: :requester | :bucket_owner
) :: no_return()

Update or create a bucket requestPayment configuration

put_bucket_tagging(bucket, tags)

@spec put_bucket_tagging(bucket :: binary(), tags :: map()) :: no_return()

Update or create a bucket tagging configuration

put_bucket_versioning(bucket, version_config)

@spec put_bucket_versioning(bucket :: binary(), version_config :: binary()) ::
  ExAws.Operation.S3.t()

Update or create a bucket versioning configuration

Example

ExAws.S3.put_bucket_versioning(
 "my-bucket",
 "<VersioningConfiguration><Status>Enabled</Status></VersioningConfiguration>"
)
|> ExAws.request()

put_bucket_website(bucket, website_config)

@spec put_bucket_website(bucket :: binary(), website_config :: binary()) ::
  no_return()

Update or create a bucket website configuration

put_object(bucket, object, body, opts \\ [])

@spec put_object(
  bucket :: binary(),
  object :: binary(),
  body :: binary(),
  opts :: put_object_opts()
) ::
  ExAws.Operation.S3.t()

Create an object within a bucket
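A sketch of a put with common options from put_object_opts; the bucket, paths, and metadata values are hypothetical:

```elixir
# Upload a PNG with an explicit content type, a canned ACL, and custom metadata
# (stored as x-amz-meta-* headers).
ExAws.S3.put_object("my-bucket", "images/logo.png", File.read!("local/logo.png"),
  content_type: "image/png",
  acl: :public_read,
  meta: [uploaded_by: "docs-example"]
)
|> ExAws.request!()
```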

put_object_acl(bucket, object, acl)

@spec put_object_acl(bucket :: binary(), object :: binary(), acl :: acl_opts()) ::
  ExAws.Operation.S3.t()

Create or update an object's access control policy

put_object_copy(dest_bucket, dest_object, src_bucket, src_object, opts \\ [])

@spec put_object_copy(
  dest_bucket :: binary(),
  dest_object :: binary(),
  src_bucket :: binary(),
  src_object :: binary(),
  opts :: put_object_copy_opts()
) :: ExAws.Operation.S3.t()

Copy an object

put_object_tagging(bucket, object, tags, opts \\ [])

@spec put_object_tagging(
  bucket :: binary(),
  object :: binary(),
  tags :: Access.t(),
  opts :: Keyword.t()
) :: ExAws.Operation.S3.t()

Add a set of tags to an existing object

Options

  • :version_id - The versionId of the object that the tag-set will be added to.
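A sketch of replacing an object's tag set; the bucket, key, and tags are hypothetical. Note that this call replaces the full tag set rather than merging with existing tags:

```elixir
# Tag the object with a department and a status.
ExAws.S3.put_object_tagging("my-bucket", "reports/q1.csv", %{
  "department" => "finance",
  "status" => "final"
})
|> ExAws.request!()
```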

upload(source, bucket, path, opts \\ [])

@spec upload(
  source :: Enumerable.t(),
  bucket :: String.t(),
  path :: String.t(),
  opts :: upload_opts()
) :: ExAws.S3.Upload.t()

Multipart upload to S3.

Handles initialization, uploading parts concurrently, and multipart upload completion.

Uploading a stream

Streams that emit binaries may be uploaded directly to S3. Each binary is uploaded as a separate part, so every part except the last must be at least 5 megabytes in size. The S3.Upload.stream_file helper takes care of reading the file in 5 megabyte chunks.

"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request! #=> :done

Options

These options are specific to this function

  • See Task.async_stream/5's :max_concurrency and :timeout options.
    • :max_concurrency - only applies when uploading a stream. Sets the maximum number of tasks to run at the same time. Defaults to 4
    • :timeout - the maximum amount of time (in milliseconds) each task is allowed to execute for. Defaults to 30_000.
    • :refetch_auth_on_request - re-fetch the auth from the library config on each request in the upload process instead of using the initial auth. Fixes an edge case uploading large files when using a strategy from ex_aws_sts that provides short lived tokens, where uploads could fail if the token expires before the upload is completed. Defaults to false.

All other options (ex. :content_type) are passed through to ExAws.S3.initiate_multipart_upload/3.

upload_part(bucket, object, upload_id, part_number, body, opts \\ [])

@spec upload_part(
  bucket :: binary(),
  object :: binary(),
  upload_id :: binary(),
  part_number :: pos_integer(),
  body :: binary(),
  opts :: [encryption_opts() | {:expect, binary()}]
) :: ExAws.Operation.S3.t()

Upload a part for a multipart upload

upload_part_copy(dest_bucket, dest_object, src_bucket, src_object, upload_id, part_number, source_range, opts \\ [])

@spec upload_part_copy(
  dest_bucket :: binary(),
  dest_object :: binary(),
  src_bucket :: binary(),
  src_object :: binary(),
  upload_id :: binary(),
  part_number :: pos_integer(),
  source_range :: Range.t(),
  opts :: upload_part_copy_opts()
) :: ExAws.Operation.S3.t()

Upload a part for a multipart copy