ExAws.S3 (ExAws.S3 v2.5.5)
Service module for https://github.com/ex-aws/ex_aws
Installation
The package can be installed by adding :ex_aws_s3 to your list of dependencies in mix.exs along with :ex_aws, your preferred JSON codec / HTTP client, and optionally :sweet_xml to support operations like list_objects that require XML parsing.
def deps do
  [
    {:ex_aws, "~> 2.0"},
    {:ex_aws_s3, "~> 2.0"},
    {:poison, "~> 3.0"},
    {:hackney, "~> 1.9"},
    {:sweet_xml, "~> 0.6.6"} # optional dependency
  ]
end
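ExAws itself also needs to be told where to find credentials and which JSON codec to use. A minimal sketch is shown below; the credential sources and the choice of Poison are assumptions that should be adapted to your environment.

# config.exs -- illustrative only; adjust credential sources to your setup
config :ex_aws,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, :instance_role],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}, :instance_role],
  json_codec: Poison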
Operations on AWS S3
Basic Operations
The vast majority of operations here represent a single operation on S3.
Examples
S3.list_objects("my-bucket") |> ExAws.request! #=> %{body: [list, of, objects]}
S3.list_objects("my-bucket") |> ExAws.stream! |> Enum.to_list #=> [list, of, objects]
S3.put_object("my-bucket", "path/to/object", contents) |> ExAws.request!
Higher Level Operations
There are also some operations which operate at a higher level to make it easier to download and upload very large files.
Multipart uploads
"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request #=> {:ok, :done}
See ExAws.S3.upload/4 for options.
Download large file to disk
S3.download_file("my-bucket", "path/on/s3", "path/to/dest/file")
|> ExAws.request #=> {:ok, :done}
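download_file/4 also accepts the options in download_file_opts to tune concurrency, chunk size, and timeout. The values below are illustrative only:

S3.download_file("my-bucket", "path/on/s3", "path/to/dest/file",
  max_concurrency: 4,
  chunk_size: 10 * 1024 * 1024,
  timeout: 120_000
)
|> ExAws.request #=> {:ok, :done}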
More high level functionality
Task.async_stream makes some high level flows so easy you don't need explicit ExAws support.
For example, here is how to concurrently upload many files.
upload_file = fn {src_path, dest_path} ->
  S3.put_object("my_bucket", dest_path, File.read!(src_path))
  |> ExAws.request!
end

paths = %{"path/to/src0" => "path/to/dest0", "path/to/src1" => "path/to/dest1"}

paths
|> Task.async_stream(upload_file, max_concurrency: 10)
|> Stream.run
Bucket as host functionality
Examples
opts = [virtual_host: true, bucket_as_host: true]

ExAws.Config.new(:s3)
|> S3.presigned_url(:get, "bucket.custom-domain.com", "foo.txt", opts)
#=> {:ok, "https://bucket.custom-domain.com/foo.txt"}
Configuration
The scheme, host, and port can be configured to hit alternate endpoints.
For example, this is how to use a local minio instance:
# config.exs
config :ex_aws, :s3,
  scheme: "http://",
  host: "localhost",
  port: 9000
An alternate content_hash_algorithm can be specified as well. The default is :md5. It may be necessary to change this when operating in a FIPS-compliant environment where MD5 isn't available, for instance. At this time, only :sha256, :sha, and :md5 are supported by both Erlang and S3.
# config.exs
config :ex_aws_s3, :content_hash_algorithm, :sha256
Types
@type acl_opt() :: {:acl, canned_acl()} | grant()
@type acl_opts() :: [acl_opt()]
@type canned_acl() ::
:private
| :public_read
| :public_read_write
| :authenticated_read
| :bucket_owner_read
| :bucket_owner_full_control
@type delete_object_opts() :: [delete_object_opt()]
@type download_file_opts() :: [ max_concurrency: pos_integer(), chunk_size: pos_integer(), timeout: pos_integer() ]
@type encryption_opts() :: binary() | [{:aws_kms_key_id, binary()}] | customer_encryption_opts()
@type expires_in_seconds() :: non_neg_integer()
@type get_object_opts() :: [ {:response, get_object_response_opts()} | {:version_id, binary()} | head_object_opt() ]
@type hash_algorithm() :: :sha | :sha256 | :md5
The hashing algorithms that both S3 and Erlang support.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
https://www.erlang.org/doc/man/crypto.html#type-hash_algorithm
@type head_object_opts() :: [head_object_opt()]
@type initiate_multipart_upload_opt() ::
        {:cache_control, binary()}
        | {:content_disposition, binary()}
        | {:content_encoding, binary()}
        | {:content_type, binary()}
        | {:expires, binary()}
        | {:website_redirect_location, binary()}
        | {:encryption, encryption_opts()}
        | {:meta, amz_meta_opts()}
        | acl_opt()
        | storage_class_opt()
@type initiate_multipart_upload_opts() :: [initiate_multipart_upload_opt()]
@type presigned_url_opts() :: [
        expires_in: expires_in_seconds(),
        virtual_host: boolean(),
        s3_accelerate: boolean(),
        query_params: [{binary(), binary()}],
        headers: [{binary(), binary()}],
        bucket_as_host: boolean(),
        start_datetime: Calendar.naive_datetime() | :calendar.datetime()
      ]
@type put_object_copy_opts() :: [
        {:metadata_directive, :COPY | :REPLACE}
        | {:copy_source_if_modified_since, binary()}
        | {:copy_source_if_unmodified_since, binary()}
        | {:copy_source_if_match, binary()}
        | {:copy_source_if_none_match, binary()}
        | {:website_redirect_location, binary()}
        | {:destination_encryption, encryption_opts()}
        | {:source_encryption, customer_encryption_opts()}
        | {:cache_control, binary()}
        | {:content_disposition, binary()}
        | {:content_encoding, binary()}
        | {:content_length, binary()}
        | {:content_type, binary()}
        | {:expect, binary()}
        | {:expires, binary()}
        | {:meta, amz_meta_opts()}
        | acl_opt()
        | storage_class_opt()
      ]
@type put_object_opts() :: [
        {:cache_control, binary()}
        | {:content_disposition, binary()}
        | {:content_encoding, binary()}
        | {:content_length, binary()}
        | {:content_type, binary()}
        | {:expect, binary()}
        | {:expires, binary()}
        | {:website_redirect_location, binary()}
        | {:encryption, encryption_opts()}
        | {:meta, amz_meta_opts()}
        | acl_opt()
        | storage_class_opt()
      ]
@type storage_class() ::
:standard
| :reduced_redundancy
| :standard_ia
| :onezone_ia
| :intelligent_tiering
| :glacier
| :deep_archive
| :outposts
| :glacier_ir
| :snow
@type storage_class_opt() :: {:storage_class, storage_class()}
@type upload_opt() :: {:max_concurrency, pos_integer()} | {:timeout, pos_integer()} | {:refetch_auth_on_request, boolean()} | initiate_multipart_upload_opt()
@type upload_opts() :: [upload_opt()]
@type upload_part_copy_opts() :: [
        copy_source_if_modified_since: binary(),
        copy_source_if_unmodified_since: binary(),
        copy_source_if_match: binary(),
        copy_source_if_none_match: binary(),
        destination_encryption: encryption_opts(),
        source_encryption: customer_encryption_opts()
      ]
Functions
@spec abort_multipart_upload( bucket :: binary(), object :: binary(), upload_id :: binary() ) :: ExAws.Operation.S3.t()
Abort a multipart upload
@spec complete_multipart_upload( bucket :: binary(), object :: binary(), upload_id :: binary(), parts :: [{binary() | pos_integer(), binary()}, ...] ) :: ExAws.Operation.S3.t()
Complete a multipart upload
@spec delete_all_objects( bucket :: binary(), objects :: [binary() | {binary(), binary()}, ...] | Enumerable.t(), opts :: [{:quiet, true}] ) :: ExAws.Operation.S3DeleteAllObjects.t()
Delete all listed objects.
When performed, this function will continue making delete_multiple_objects requests, deleting 1000 objects at a time, until all are deleted.
Can be streamed.
Example
stream = ExAws.S3.list_objects(bucket(), prefix: "some/prefix") |> ExAws.stream!() |> Stream.map(& &1.key)
ExAws.S3.delete_all_objects(bucket(), stream) |> ExAws.request()
@spec delete_bucket(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket
@spec delete_bucket_cors(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket cors
@spec delete_bucket_lifecycle(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket lifecycle
@spec delete_bucket_policy(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket policy
@spec delete_bucket_replication(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket replication
@spec delete_bucket_tagging(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket tagging
@spec delete_bucket_website(bucket :: binary()) :: ExAws.Operation.S3.t()
Delete a bucket website
@spec delete_multiple_objects( bucket :: binary(), objects :: [binary() | {binary(), binary()}, ...], opts :: [{:quiet, true}] ) :: ExAws.Operation.S3.t()
Delete multiple objects within a bucket
Limited to 1000 objects.
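A sketch of a single request deleting two keys, one of them pinned to a specific version (the version id is a placeholder):

S3.delete_multiple_objects("my-bucket", [
  "images/old.png",
  {"reports/2020.csv", "example-version-id"}
])
|> ExAws.request!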
@spec delete_object( bucket :: binary(), object :: binary(), opts :: delete_object_opts() ) :: ExAws.Operation.S3.t()
Delete an object within a bucket
@spec delete_object_tagging( bucket :: binary(), object :: binary(), opts :: Keyword.t() ) :: ExAws.Operation.S3.t()
Remove the entire tag set from the specified object
@spec download_file( bucket :: binary(), path :: binary(), dest :: :memory | binary(), opts :: download_file_opts() ) :: ExAws.S3.Download.t()
Download an S3 object to a file.
This operation downloads multiple parts of an S3 object concurrently, allowing you to maximize throughput.
Defaults to a concurrency of 8, chunk size of 1MB, and a timeout of 1 minute.
Streaming to memory
In order to use ExAws.stream!/2, the third dest parameter must be set to :memory. For example:
ExAws.S3.download_file("example-bucket", "path/to/file.txt", :memory)
|> ExAws.stream!()
Note that this won't start fetching anything immediately, since it returns an Elixir Stream.
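To actually do any work, the stream has to be consumed. One way to do that, assuming you simply want the bytes written to a local file, is:

ExAws.S3.download_file("example-bucket", "path/to/file.txt", :memory)
|> ExAws.stream!()
|> Stream.into(File.stream!("local-copy.txt"))
|> Stream.run()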
Streaming by line
Streaming by line can be done with Stream.chunk_while/4. Here is an example:
# Returns a stream which grabs chunks of data from S3 as specified in `opts`
# but processes the stream line by line. For example, the default chunk
# size of 1MB means requests for bytes from S3 will ask for 1MB sizes (to be downloaded)
# however each element of the stream will be a single line.
def generate_stream(bucket, file, opts \\ []) do
  bucket
  |> ExAws.S3.download_file(file, :memory, opts)
  |> ExAws.stream!()
  # Uncomment if you need to gunzip (and add dependency :stream_gzip)
  # |> StreamGzip.gunzip()
  |> Stream.chunk_while("", &chunk_fun/2, &to_line_stream_after_fun/1)
  |> Stream.concat()
end

def chunk_fun(chunk, acc) do
  to_try = acc <> chunk
  {elements, acc} = chunk_by_newline(to_try, "\n", [], {0, byte_size(to_try)})
  {:cont, elements, acc}
end

defp chunk_by_newline(_string, _newline, elements, {_offset, 0}) do
  {Enum.reverse(elements), ""}
end

defp chunk_by_newline(string, newline, elements, {offset, length}) do
  case :binary.match(string, newline, scope: {offset, length}) do
    {newline_offset, newline_length} ->
      difference = newline_length + newline_offset - offset
      element = binary_part(string, offset, difference)

      chunk_by_newline(
        string,
        newline,
        [element | elements],
        {newline_offset + newline_length, length - difference}
      )

    :nomatch ->
      {Enum.reverse(elements), binary_part(string, offset, length)}
  end
end

defp to_line_stream_after_fun(""), do: {:cont, []}
defp to_line_stream_after_fun(acc), do: {:cont, [acc], []}
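Assuming the functions above live in one of your own modules (the module name below is purely illustrative), the resulting stream can then be consumed lazily:

# Read only the first 10 lines without downloading the whole object up front
MyApp.S3Lines.generate_stream("my-bucket", "logs/2024-01-01.log")
|> Enum.take(10)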
@spec get_bucket_acl(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket acl
@spec get_bucket_cors(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket cors
@spec get_bucket_lifecycle(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket lifecycle
@spec get_bucket_location(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket location
@spec get_bucket_logging(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket logging
@spec get_bucket_notification(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket notification
@spec get_bucket_object_versions(bucket :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
Get bucket object versions
@spec get_bucket_policy(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket policy
@spec get_bucket_replication(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket replication
@spec get_bucket_request_payment(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket payment configuration
@spec get_bucket_tagging(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket tagging
@spec get_bucket_versioning(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket versioning
@spec get_bucket_website(bucket :: binary()) :: ExAws.Operation.S3.t()
Get bucket website
@spec get_object(bucket :: binary(), object :: binary(), opts :: get_object_opts()) :: ExAws.Operation.S3.t()
Get an object from a bucket
Examples
S3.get_object("my-bucket", "image.png")
S3.get_object("my-bucket", "image.png", version_id: "ae57ekgXPpdiVZLkYVWoTAGRhGJ5swt9")
@spec get_object_acl(bucket :: binary(), object :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
Get an object's access control policy
@spec get_object_tagging(bucket :: binary(), object :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
Get object tagging
@spec get_object_torrent(bucket :: binary(), object :: binary()) :: ExAws.Operation.S3.t()
Get a torrent for an object
@spec head_bucket(bucket :: binary()) :: ExAws.Operation.S3.t()
Determine if a bucket exists
@spec head_object(bucket :: binary(), object :: binary(), opts :: head_object_opts()) :: ExAws.Operation.S3.t()
Determine if an object exists
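Since head_object issues a HEAD request, existence is determined from the response status rather than a body. A sketch of such a check - the exact error shape for a missing key is based on typical ExAws behaviour and should be verified against your HTTP client:

case S3.head_object("my-bucket", "image.png") |> ExAws.request() do
  {:ok, %{status_code: 200}} -> :exists
  {:error, {:http_error, 404, _response}} -> :not_found
  {:error, other} -> {:error, other}
end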
@spec initiate_multipart_upload( bucket :: binary(), object :: binary(), opts :: initiate_multipart_upload_opts() ) :: ExAws.Operation.S3.t()
Initiate a multipart upload
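upload/4 wraps the whole multipart flow, but the low-level functions can also be combined by hand. The sketch below assumes :sweet_xml is installed so the initiate response body is parsed into a map containing the upload id, and assumes the "ETag" header casing used by your HTTP client; the part body is a placeholder:

# Initiate, upload one part, then complete. A real upload would loop over
# chunks of at least 5 MB and collect {part_number, etag} pairs.
%{body: %{upload_id: upload_id}} =
  S3.initiate_multipart_upload("my-bucket", "path/on/s3") |> ExAws.request!

%{headers: headers} =
  S3.upload_part("my-bucket", "path/on/s3", upload_id, 1, "part one contents")
  |> ExAws.request!

etag = headers |> Map.new() |> Map.get("ETag")

S3.complete_multipart_upload("my-bucket", "path/on/s3", upload_id, [{1, etag}])
|> ExAws.request!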
@spec list_buckets(opts :: Keyword.t()) :: ExAws.Operation.S3.t()
List buckets
@spec list_multipart_uploads(bucket :: binary(), opts :: Keyword.t()) :: ExAws.Operation.S3.t()
List multipart uploads for a bucket
@spec list_objects(bucket :: binary(), opts :: list_objects_opts()) :: ExAws.Operation.S3.t()
List objects in bucket
Can be streamed.
Examples
S3.list_objects("my-bucket") |> ExAws.request
S3.list_objects("my-bucket") |> ExAws.stream!
S3.list_objects("my-bucket", delimiter: "/", prefix: "backup") |> ExAws.stream!
S3.list_objects("my-bucket", prefix: "some/inner/location/path") |> ExAws.stream!
S3.list_objects("my-bucket", max_keys: 5, encoding_type: "url") |> ExAws.stream!
@spec list_objects_v2(bucket :: binary(), opts :: list_objects_v2_opts()) :: ExAws.Operation.S3.t()
List objects in bucket
Can be streamed.
Examples
S3.list_objects_v2("my-bucket") |> ExAws.request
S3.list_objects_v2("my-bucket") |> ExAws.stream!
S3.list_objects_v2("my-bucket", delimiter: "/", prefix: "backup") |> ExAws.stream!
S3.list_objects_v2("my-bucket", prefix: "some/inner/location/path") |> ExAws.stream!
S3.list_objects_v2("my-bucket", max_keys: 5, encoding_type: "url") |> ExAws.stream!
@spec list_parts( bucket :: binary(), object :: binary(), upload_id :: binary(), opts :: Keyword.t() ) :: ExAws.Operation.S3.t()
List the parts of a multipart upload
options_object(bucket, object, origin, request_method, request_headers \\ [])
@spec options_object( bucket :: binary(), object :: binary(), origin :: binary(), request_method :: atom(), request_headers :: [binary()] ) :: ExAws.Operation.S3.t()
Determine the CORS configuration for an object
@spec post_object_restore( bucket :: binary(), object :: binary(), number_of_days :: pos_integer(), opts :: [{:version_id, binary()}] ) :: ExAws.Operation.S3.t()
Restore an object to a particular version
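For example, asking S3 to restore an archived object and keep the restored copy available for 7 days (the key is illustrative):

S3.post_object_restore("my-bucket", "archive/2019-backup.zip", 7)
|> ExAws.request()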
@spec presigned_post( config :: map(), bucket :: binary(), key :: binary() | nil, opts :: presigned_post_opts() ) :: presigned_post_result()
Generate a pre-signed post for an object.
When option param :virtual_host is true, the bucket name will be used in the hostname, along with the S3 default host, producing a host like <bucket>.s3.<region>.amazonaws.com.
When option param :s3_accelerate is true, the bucket name will be used as the hostname, along with the s3-accelerate.amazonaws.com host.
When option param :bucket_as_host is true, the bucket name will be used as the full hostname. In this case, the bucket must be set to a full hostname, for example mybucket.example.com. The :bucket_as_host option must be passed along with virtual_host: true.
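A sketch of generating a presigned POST; the result shape shown in the comment is indicative of presigned_post_result() and the key is illustrative:

:s3
|> ExAws.Config.new([])
|> ExAws.S3.presigned_post("my-bucket", "uploads/avatar.png")
#=> %{url: "...", fields: %{...}}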
@spec presigned_url( config :: map(), http_method :: atom(), bucket :: binary(), object :: binary(), opts :: presigned_url_opts() ) :: {:ok, binary()} | {:error, binary()}
Generate a pre-signed URL for an object. This is a local operation and does not check whether the bucket or object exists.
When option param :virtual_host is true, the bucket name will be used in the hostname, along with the S3 default host, producing a host like <bucket>.s3.<region>.amazonaws.com.
When option param :s3_accelerate is true, the bucket name will be used as the hostname, along with the s3-accelerate.amazonaws.com host.
When option param :bucket_as_host is true, the bucket name will be used as the full hostname. In this case, the bucket must be set to a full hostname, for example mybucket.example.com. The :bucket_as_host option must be passed along with virtual_host: true.
Option param :start_datetime can be used to modify the start date for the presigned url, which allows for cache-friendly urls.
Additional (signed) query parameters can be added to the url by setting option param :query_params to a list of {"key", "value"} pairs. This is useful if you are uploading parts of a multipart upload directly from the browser.
Signed headers can be added to the url by setting option param :headers to a list of {"key", "value"} pairs.
Example
:s3
|> ExAws.Config.new([])
|> ExAws.S3.presigned_url(:get, "my-bucket", "my-object", [])
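Options from presigned_url_opts go in the final argument. For instance, a short-lived PUT url carrying extra signed query parameters (the values are placeholders) could be built like this:

opts = [expires_in: 300, query_params: [{"partNumber", "1"}, {"uploadId", "example-upload-id"}]]

:s3
|> ExAws.Config.new([])
|> ExAws.S3.presigned_url(:put, "my-bucket", "my-object", opts)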
Creates a bucket in the specified region
@spec put_bucket_acl(bucket :: binary(), opts :: acl_opts()) :: ExAws.Operation.S3.t()
Update or create a bucket access control policy
@spec put_bucket_cors(bucket :: binary(), cors_config :: [map()]) :: ExAws.Operation.S3.t()
Update or create a bucket CORS policy
@spec put_bucket_lifecycle(bucket :: binary(), lifecycle_rules :: [map()]) :: ExAws.Operation.S3.t()
Update or create a bucket lifecycle configuration
Lifecycle Rule Format
%{
  # Unique id for the rule (max. 255 chars, max. 1000 rules allowed)
  id: "123",

  # Disabled rules are not executed
  enabled: true,

  # Filters
  # Can be based on prefix, object tag(s), both or none
  filter: %{
    prefix: "prefix/",
    tags: %{
      "key" => "value"
    }
  },

  # Actions
  # https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#intro-lifecycle-rules-actions
  actions: %{
    transition: %{
      trigger: {:date, ~D[2020-03-26]}, # Date or days based
      storage: ""
    },
    expiration: %{
      trigger: {:days, 2}, # Date or days based
      expired_object_delete_marker: true
    },
    noncurrent_version_transition: %{
      trigger: {:days, 2}, # Only days based
      storage: ""
    },
    noncurrent_version_expiration: %{
      trigger: {:days, 2}, # Only days based
      newer_noncurrent_versions: 10
    },
    abort_incomplete_multipart_upload: %{
      trigger: {:days, 2} # Only days based
    }
  }
}
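put_bucket_lifecycle/2 expects a list of such rule maps. Assuming the map above is bound to rule, applying it looks like:

ExAws.S3.put_bucket_lifecycle("my-bucket", [rule]) |> ExAws.request()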
Update or create a bucket logging configuration
Update or create a bucket notification configuration
@spec put_bucket_policy(bucket :: binary(), policy :: String.t()) :: ExAws.Operation.S3.t()
Update or create a bucket policy configuration
Update or create a bucket replication configuration
@spec put_bucket_request_payment( bucket :: binary(), payer :: :requester | :bucket_owner ) :: no_return()
Update or create a bucket requestPayment configuration
Update or create a bucket tagging configuration
@spec put_bucket_versioning(bucket :: binary(), version_config :: binary()) :: ExAws.Operation.S3.t()
Update or create a bucket versioning configuration
Example
ExAws.S3.put_bucket_versioning(
"my-bucket",
"<VersioningConfiguration><Status>Enabled</Status></VersioningConfiguration>"
)
|> ExAws.request()
Update or create a bucket website configuration
@spec put_object( bucket :: binary(), object :: binary(), body :: binary(), opts :: put_object_opts() ) :: ExAws.Operation.S3.t()
Create an object within a bucket
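Options from put_object_opts are given as the fourth argument. For example, setting a content type and cache header on the new object:

S3.put_object("my-bucket", "docs/readme.txt", "Hello, world!",
  content_type: "text/plain",
  cache_control: "max-age=3600"
)
|> ExAws.request!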
@spec put_object_acl(bucket :: binary(), object :: binary(), acl :: acl_opts()) :: ExAws.Operation.S3.t()
Create or update an object's access control policy
put_object_copy(dest_bucket, dest_object, src_bucket, src_object, opts \\ [])
@spec put_object_copy( dest_bucket :: binary(), dest_object :: binary(), src_bucket :: binary(), src_object :: binary(), opts :: put_object_copy_opts() ) :: ExAws.Operation.S3.t()
Copy an object
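For instance, copying an object between buckets while keeping the source's metadata (bucket and key names are illustrative):

S3.put_object_copy("backup-bucket", "images/logo.png", "prod-bucket", "images/logo.png",
  metadata_directive: :COPY
)
|> ExAws.request!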
@spec put_object_tagging( bucket :: binary(), object :: binary(), tags :: Access.t(), opts :: Keyword.t() ) :: ExAws.Operation.S3.t()
Add a set of tags to an existing object
Options
:version_id - The versionId of the object that the tag-set will be added to.
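The tags argument accepts anything implementing Access, so a keyword list or a map should both satisfy the spec. A sketch with illustrative tag names:

S3.put_object_tagging("my-bucket", "images/logo.png", [env: "prod", team: "platform"])
|> ExAws.request()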
@spec upload( source :: Enumerable.t(), bucket :: String.t(), path :: String.t(), opts :: upload_opts() ) :: ExAws.S3.Upload.t()
Multipart upload to S3.
Handles initialization, uploading parts concurrently, and multipart upload completion.
Uploading a stream
Streams that emit binaries may be uploaded directly to S3. Each binary will be uploaded as a chunk, so it must be at least 5 megabytes in size. The S3.Upload.stream_file helper takes care of reading the file in 5 megabyte chunks.
"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request! #=> :done
Options
These options are specific to this function - see Task.async_stream/5's :max_concurrency and :timeout options.

- :max_concurrency - only applies when uploading a stream. Sets the maximum number of tasks to run at the same time. Defaults to 4.
- :timeout - the maximum amount of time (in milliseconds) each task is allowed to execute for. Defaults to 30_000.
- :refetch_auth_on_request - re-fetch the auth from the library config on each request in the upload process instead of using the initial auth. Fixes an edge case uploading large files when using a strategy from ex_aws_sts that provides short lived tokens, where uploads could fail if the token expires before the upload is completed. Defaults to false.
All other options (e.g. :content_type) are passed through to ExAws.S3.initiate_multipart_upload/3.
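Putting these options together, an upload tuned for higher concurrency with an explicit content type (values are illustrative) could look like:

"path/to/big/file"
|> S3.Upload.stream_file()
|> S3.upload("my-bucket", "path/on/s3",
  max_concurrency: 8,
  timeout: 60_000,
  content_type: "application/octet-stream"
)
|> ExAws.request()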
upload_part(bucket, object, upload_id, part_number, body, opts \\ [])
@spec upload_part( bucket :: binary(), object :: binary(), upload_id :: binary(), part_number :: pos_integer(), body :: binary(), opts :: [encryption_opts() | {:expect, binary()}] ) :: ExAws.Operation.S3.t()
Upload a part for a multipart upload
upload_part_copy(dest_bucket, dest_object, src_bucket, src_object, upload_id, part_number, source_range, opts \\ [])
@spec upload_part_copy( dest_bucket :: binary(), dest_object :: binary(), src_bucket :: binary(), src_object :: binary(), upload_id :: binary(), part_number :: pos_integer(), source_range :: Range.t(), opts :: upload_part_copy_opts() ) :: ExAws.Operation.S3.t()
Upload a part for a multipart copy