aws_dynamodb
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an AWS Region, providing built-in high availability and data durability.

Summary

Functions
- batch_get_item(Client, Input): The BatchGetItem operation returns the attributes of one or more items from one or more tables.
- batch_get_item(Client, Input, Options)
- batch_write_item(Client, Input): The BatchWriteItem operation puts or deletes multiple items in one or more tables.
- batch_write_item(Client, Input, Options)
- create_backup(Client, Input): Creates a backup for an existing table.
- create_backup(Client, Input, Options)
- create_global_table(Client, Input): Creates a global table from an existing table.
- create_global_table(Client, Input, Options)
- create_table(Client, Input): The CreateTable operation adds a new table to your account.
- create_table(Client, Input, Options)
- delete_backup(Client, Input): Deletes an existing backup of a table.
- delete_backup(Client, Input, Options)
- delete_item(Client, Input): Deletes a single item in a table by primary key.
- delete_item(Client, Input, Options)
- delete_table(Client, Input): The DeleteTable operation deletes a table and all of its items.
- delete_table(Client, Input, Options)
- describe_backup(Client, Input): Describes an existing backup of a table.
- describe_backup(Client, Input, Options)
- describe_continuous_backups(Client, Input): Checks the status of continuous backups and point in time recovery on the specified table.
- describe_continuous_backups(Client, Input, Options)
- describe_contributor_insights(Client, Input): Returns information about contributor insights for a given table or global secondary index.
- describe_contributor_insights(Client, Input, Options)
- describe_endpoints(Client, Input): Returns the regional endpoint information.
- describe_endpoints(Client, Input, Options)
- describe_global_table(Client, Input): Returns information about the specified global table.
- describe_global_table(Client, Input, Options)
- describe_global_table_settings(Client, Input): Describes Region-specific settings for a global table.
- describe_global_table_settings(Client, Input, Options)
- describe_limits(Client, Input): Returns the current provisioned-capacity limits for your AWS account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
- describe_limits(Client, Input, Options)
- describe_table(Client, Input): Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.
- describe_table(Client, Input, Options)
- describe_table_replica_auto_scaling(Client, Input): Describes auto scaling settings across replicas of the global table at once.
- describe_table_replica_auto_scaling(Client, Input, Options)
- describe_time_to_live(Client, Input): Gives a description of the Time to Live (TTL) status on the specified table.
- describe_time_to_live(Client, Input, Options)
- get_item(Client, Input): The GetItem operation returns a set of attributes for the item with the given primary key.
- get_item(Client, Input, Options)
- list_backups(Client, Input): Lists backups associated with an AWS account.
- list_backups(Client, Input, Options)
- list_contributor_insights(Client, Input): Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
- list_contributor_insights(Client, Input, Options)
- list_global_tables(Client, Input): Lists all global tables that have a replica in the specified Region.
- list_global_tables(Client, Input, Options)
- list_tables(Client, Input): Returns an array of table names associated with the current account and endpoint.
- list_tables(Client, Input, Options)
- list_tags_of_resource(Client, Input): Lists all tags on an Amazon DynamoDB resource.
- list_tags_of_resource(Client, Input, Options)
- put_item(Client, Input): Creates a new item, or replaces an old item with a new item.
- put_item(Client, Input, Options)
- query(Client, Input): The Query operation finds items based on primary key values.
- query(Client, Input, Options)
- restore_table_from_backup(Client, Input): Creates a new table from an existing backup.
- restore_table_from_backup(Client, Input, Options)
- restore_table_to_point_in_time(Client, Input): Restores the specified table to the specified point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
- restore_table_to_point_in_time(Client, Input, Options)
- scan(Client, Input): The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index.
- scan(Client, Input, Options)
- tag_resource(Client, Input): Associates a set of tags with an Amazon DynamoDB resource.
- tag_resource(Client, Input, Options)
- transact_get_items(Client, Input): TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or more tables (but not from indexes) in a single account and Region.
- transact_get_items(Client, Input, Options)
- transact_write_items(Client, Input): TransactWriteItems is a synchronous write operation that groups up to 25 action requests.
- transact_write_items(Client, Input, Options)
- untag_resource(Client, Input): Removes the association of tags from an Amazon DynamoDB resource.
- untag_resource(Client, Input, Options)
- update_continuous_backups(Client, Input): UpdateContinuousBackups enables or disables point in time recovery for the specified table.
- update_continuous_backups(Client, Input, Options)
- update_contributor_insights(Client, Input): Updates the status for contributor insights for a specific table or index.
- update_contributor_insights(Client, Input, Options)
- update_global_table(Client, Input): Adds or removes replicas in the specified global table.
- update_global_table(Client, Input, Options)
- update_global_table_settings(Client, Input): Updates settings for a global table.
- update_global_table_settings(Client, Input, Options)
- update_item(Client, Input): Edits an existing item's attributes, or adds a new item to the table if it does not already exist.
- update_item(Client, Input, Options)
- update_table(Client, Input): Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table.
- update_table(Client, Input, Options)
- update_table_replica_auto_scaling(Client, Input): Updates auto scaling settings on your global tables at once.
- update_table_replica_auto_scaling(Client, Input, Options)
- update_time_to_live(Client, Input): The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table.
- update_time_to_live(Client, Input, Options)
Functions
batch_get_item(Client, Input)

The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.

If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call."

For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.

If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.

By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.

In order to minimize response latency, BatchGetItem retrieves items in parallel.

When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the ProjectionExpression parameter.
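As a hedged sketch (not part of this module's documented contract), the Erlang fragment below builds RequestItems in DynamoDB's wire format and loops on UnprocessedKeys. The table name, key attribute, and the {ok, Result, HttpResponse} success tuple are illustrative assumptions.

    %% Sketch only; assumes a table "Music" with partition key "Artist" and a
    %% Client built elsewhere (e.g. aws_client:make_client/3 in aws-erlang).
    get_music_items(Client) ->
        RequestItems = #{<<"Music">> =>
            #{<<"Keys">> => [#{<<"Artist">> => #{<<"S">> => <<"No One You Know">>}}],
              <<"ConsistentRead">> => true}},
        batch_get_all(Client, RequestItems, []).

    batch_get_all(Client, Items, Acc) ->
        %% The {ok, Result, HttpResponse} success shape follows the aws-erlang
        %% convention; verify it against the client version you use.
        {ok, Result, _Http} =
            aws_dynamodb:batch_get_item(Client, #{<<"RequestItems">> => Items}),
        Pages = [maps:get(<<"Responses">>, Result, #{}) | Acc],
        case maps:get(<<"UnprocessedKeys">>, Result, #{}) of
            U when map_size(U) =:= 0 -> lists:reverse(Pages);
            U -> batch_get_all(Client, U, Pages)  %% retry only the leftovers
        end.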
batch_get_item(Client, Input, Options)
batch_write_item(Client, Input)

The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.

BatchWriteItem cannot update items. To update items, use the UpdateItem action.

The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however, BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.

If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem returns a ProvisionedThroughputExceededException.

With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon EMR, or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.

If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.

Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.

If one or more of the following is true, DynamoDB rejects the entire batch write operation:

- One or more tables specified in the BatchWriteItem request does not exist.
- Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
- You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.
- Your request contains at least two items with identical hash and range keys (which essentially is two put operations).
- There are more than 25 requests in the batch.
- Any individual item in a batch exceeds 400 KB.
- The total request size exceeds 16 MB.
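A hedged sketch of a mixed put/delete batch follows; the table and attribute names are invented for illustration, and in production the UnprocessedItems map should be resubmitted in a loop as described above.

    %% Sketch only; "Music" and its attributes are illustrative assumptions.
    Items = #{<<"Music">> =>
        [#{<<"PutRequest">> => #{<<"Item">> =>
              #{<<"Artist">> => #{<<"S">> => <<"No One You Know">>},
                <<"AlbumTitle">> => #{<<"S">> => <<"Somewhat Famous">>}}}},
         #{<<"DeleteRequest">> => #{<<"Key">> =>
              #{<<"Artist">> => #{<<"S">> => <<"The Acme Band">>}}}}]},
    {ok, Result, _Http} =
        aws_dynamodb:batch_write_item(Client, #{<<"RequestItems">> => Items}),
    %% Anything the service could not process must be resubmitted.
    Unprocessed = maps:get(<<"UnprocessedItems">>, Result, #{}).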
batch_write_item(Client, Input, Options)
create_backup(Client, Input)

Creates a backup for an existing table.

Each time you create an on-demand backup, the entire table data is backed up. There is no limit to the number of on-demand backups that can be taken.

When you create an on-demand backup, a time marker of the request is cataloged, and the backup is created asynchronously, by applying all changes until the time of the request to the last full table snapshot. Backup requests are processed instantaneously and become available for restore within minutes.

You can call CreateBackup at a maximum rate of 50 times per second.

All backups in DynamoDB work without consuming any provisioned throughput on the table.

If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to contain all data committed to the table up to 14:24:00, and data committed after 14:26:00 will not be included. The backup might contain data modifications made between 14:24:00 and 14:26:00. On-demand backup does not support causal consistency.

Along with data, the following are also included on the backups:

- Global secondary indexes (GSIs)
- Local secondary indexes (LSIs)
- Streams
- Provisioned read and write capacity
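A minimal sketch of the request shape, with an invented table and backup name:

    %% Sketch only; names are illustrative assumptions.
    Input = #{<<"TableName">> => <<"Music">>,
              <<"BackupName">> => <<"Music-backup-2018-12-14">>},
    {ok, Result, _Http} = aws_dynamodb:create_backup(Client, Input),
    %% BackupDetails carries the backup ARN, status, and creation time.
    BackupDetails = maps:get(<<"BackupDetails">>, Result).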
create_backup(Client, Input, Options)
create_global_table(Client, Input)
Creates a global table from an existing table. A global table creates a replication relationship between two or more DynamoDB tables with the same table name in the provided Regions.
The table must have the same primary key as all of the other replicas.
The table must have the same name as all of the other replicas.
The table must have DynamoDB Streams enabled, with the stream containing both the new and the old images of the item.
None of the replica tables in the global table can contain any data.
If global secondary indexes are specified, then the following conditions must also be met:
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
If local secondary indexes are specified, then the following conditions must also be met:
The local secondary indexes must have the same name.
The local secondary indexes must have the same hash key and sort key (if present).
create_global_table(Client, Input, Options)
create_table(Client, Input)

The CreateTable operation adds a new table to your account. In an AWS account, table names must be unique within each Region. That is, you can have two tables with the same name if you create the tables in different Regions.

CreateTable is an asynchronous operation. Upon receiving a CreateTable request, DynamoDB immediately returns a response with a TableStatus of CREATING. After the table is created, DynamoDB sets the TableStatus to ACTIVE. You can perform read and write operations only on an ACTIVE table.

You can optionally define secondary indexes on the new table, as part of the CreateTable operation. If you want to create multiple tables with secondary indexes on them, you must create the tables sequentially. Only one table with secondary indexes can be in the CREATING state at any given time.

You can use the DescribeTable action to check the table status.
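A hedged sketch of a CreateTable input with a composite primary key; the table layout and capacity figures are assumptions for illustration:

    %% Sketch only; schema and throughput values are illustrative.
    Input = #{<<"TableName">> => <<"Music">>,
              <<"AttributeDefinitions">> =>
                  [#{<<"AttributeName">> => <<"Artist">>, <<"AttributeType">> => <<"S">>},
                   #{<<"AttributeName">> => <<"SongTitle">>, <<"AttributeType">> => <<"S">>}],
              <<"KeySchema">> =>
                  [#{<<"AttributeName">> => <<"Artist">>, <<"KeyType">> => <<"HASH">>},
                   #{<<"AttributeName">> => <<"SongTitle">>, <<"KeyType">> => <<"RANGE">>}],
              <<"ProvisionedThroughput">> =>
                  #{<<"ReadCapacityUnits">> => 5, <<"WriteCapacityUnits">> => 5}},
    {ok, Result, _Http} = aws_dynamodb:create_table(Client, Input),
    %% TableStatus stays CREATING until DynamoDB finishes provisioning.
    #{<<"TableDescription">> := #{<<"TableStatus">> := Status}} = Result.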
create_table(Client, Input, Options)
delete_backup(Client, Input)

Deletes an existing backup of a table.

You can call DeleteBackup at a maximum rate of 10 times per second.
delete_backup(Client, Input, Options)
delete_item(Client, Input)

Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.

In addition to deleting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter.

Unless you specify conditions, DeleteItem is an idempotent operation; running it multiple times on the same item or attribute does not result in an error response.
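A minimal sketch of a conditional delete that also returns the deleted attributes; the table, key, and condition are invented:

    %% Sketch only; table and key names are illustrative assumptions.
    Input = #{<<"TableName">> => <<"Music">>,
              <<"Key">> => #{<<"Artist">> => #{<<"S">> => <<"No One You Know">>},
                             <<"SongTitle">> => #{<<"S">> => <<"Scared of My Shadow">>}},
              %% Delete only if the item still exists.
              <<"ConditionExpression">> => <<"attribute_exists(Artist)">>,
              <<"ReturnValues">> => <<"ALL_OLD">>},
    {ok, Result, _Http} = aws_dynamodb:delete_item(Client, Input),
    DeletedItem = maps:get(<<"Attributes">>, Result, #{}).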
delete_item(Client, Input, Options)
delete_table(Client, Input)

The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the specified table is in the DELETING state until DynamoDB completes the deletion. If the table is in the ACTIVE state, you can delete it. If a table is in CREATING or UPDATING states, then DynamoDB returns a ResourceInUseException. If the specified table does not exist, DynamoDB returns a ResourceNotFoundException. If the table is already in the DELETING state, no error is returned.

DynamoDB might continue to accept data read and write operations, such as GetItem and PutItem, on a table in the DELETING state until the table deletion is complete.

If you have DynamoDB Streams enabled on the table, then the corresponding stream on that table goes into the DISABLED state, and the stream is automatically deleted after 24 hours.

Use the DescribeTable action to check the status of the table.
delete_table(Client, Input, Options)
describe_backup(Client, Input)

Describes an existing backup of a table.

You can call DescribeBackup at a maximum rate of 10 times per second.
describe_backup(Client, Input, Options)
describe_continuous_backups(Client, Input)

Checks the status of continuous backups and point in time recovery on the specified table. Continuous backups are ENABLED on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.

After continuous backups and point in time recovery are enabled, you can restore to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.

LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table to any point in time during the last 35 days.

You can call DescribeContinuousBackups at a maximum rate of 10 times per second.
describe_continuous_backups(Client, Input, Options)
describe_contributor_insights(Client, Input)
Returns information about contributor insights for a given table or global secondary index.
describe_contributor_insights(Client, Input, Options)
describe_endpoints(Client, Input)
Returns the regional endpoint information.
describe_endpoints(Client, Input, Options)
describe_global_table(Client, Input)
Returns information about the specified global table.
describe_global_table(Client, Input, Options)
describe_global_table_settings(Client, Input)
Describes Region-specific settings for a global table.
describe_global_table_settings(Client, Input, Options)
describe_limits(Client, Input)
Returns the current provisioned-capacity limits for your AWS account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
When you establish an AWS account, the account has initial limits on the maximum read capacity units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, there are per-table limits that apply when you create a table there. For more information, see the Limits page in the Amazon DynamoDB Developer Guide.

Although you can increase these limits by filing a case at AWS Support Center, obtaining the increase is not instantaneous. The DescribeLimits action lets you write code to compare the capacity you are currently using to those limits imposed by your account so that you have enough time to apply for an increase before you hit a limit.

For example, you could use one of the AWS SDKs to do the following (a sketch in Erlang appears after this description):

1. Call DescribeLimits for a particular Region to obtain your current account limits on provisioned capacity there.
2. Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and one to hold the aggregate write capacity units. Zero them both.
3. Call ListTables to obtain a list of all your DynamoDB tables.
4. For each table name listed by ListTables, do the following:
   - Call DescribeTable with the table name.
   - Use the data returned by DescribeTable to add the read capacity units and write capacity units provisioned for the table itself to your variables.
   - If the table has one or more global secondary indexes (GSIs), loop over these GSIs and add their provisioned capacity values to your variables as well.
5. Report the account limits for that Region returned by DescribeLimits, along with the total current provisioned capacity levels you have calculated.

This will let you see whether you are getting close to your account-level limits.

The per-table limits apply only when you are creating a new table. They restrict the sum of the provisioned capacity of the new table itself and all its global secondary indexes.

For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly. But the only upper limit that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot exceed either of the per-account limits.

DescribeLimits should only be called periodically. You can expect throttling errors if you call it more than once in a minute.

The DescribeLimits Request element has no content.
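The procedure above, sketched in Erlang under stated assumptions: the {ok, Result, HttpResponse} success shape follows the aws-erlang convention, and ListTables pagination is ignored for brevity.

    %% Compare account-level provisioned-capacity limits with current usage.
    report_limits(Client) ->
        {ok, Limits, _} = aws_dynamodb:describe_limits(Client, #{}),
        {ok, #{<<"TableNames">> := Names}, _} = aws_dynamodb:list_tables(Client, #{}),
        {Reads, Writes} = lists:foldl(fun(Name, {R, W}) ->
            {ok, #{<<"Table">> := T}, _} =
                aws_dynamodb:describe_table(Client, #{<<"TableName">> => Name}),
            PT = maps:get(<<"ProvisionedThroughput">>, T, #{}),
            R1 = R + maps:get(<<"ReadCapacityUnits">>, PT, 0),
            W1 = W + maps:get(<<"WriteCapacityUnits">>, PT, 0),
            %% Add each GSI's provisioned capacity as well.
            GSIs = maps:get(<<"GlobalSecondaryIndexes">>, T, []),
            lists:foldl(fun(G, {Ra, Wa}) ->
                GPT = maps:get(<<"ProvisionedThroughput">>, G, #{}),
                {Ra + maps:get(<<"ReadCapacityUnits">>, GPT, 0),
                 Wa + maps:get(<<"WriteCapacityUnits">>, GPT, 0)}
            end, {R1, W1}, GSIs)
        end, {0, 0}, Names),
        io:format("Account limits: ~p~nProvisioned: ~p reads, ~p writes~n",
                  [Limits, Reads, Writes]).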
describe_limits(Client, Input, Options)
describe_table(Client, Input)

Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.

If you issue a DescribeTable request immediately after a CreateTable request, DynamoDB might return a ResourceNotFoundException. This is because DescribeTable uses an eventually consistent query, and the metadata for your table might not be available at that moment. Wait for a few seconds, and then try the DescribeTable request again.
describe_table(Client, Input, Options)
describe_table_replica_auto_scaling(Client, Input)
Describes auto scaling settings across replicas of the global table at once.
describe_table_replica_auto_scaling(Client, Input, Options)
describe_time_to_live(Client, Input)
Gives a description of the Time to Live (TTL) status on the specified table.
describe_time_to_live(Client, Input, Options)
get_item(Client, Input)

The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data and there will be no Item element in the response.

GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
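A minimal sketch of a strongly consistent GetItem; the table and key names are assumptions:

    %% Sketch only; table and key names are illustrative.
    Input = #{<<"TableName">> => <<"Music">>,
              <<"Key">> => #{<<"Artist">> => #{<<"S">> => <<"No One You Know">>},
                             <<"SongTitle">> => #{<<"S">> => <<"Call Me Today">>}},
              <<"ConsistentRead">> => true},
    {ok, Result, _Http} = aws_dynamodb:get_item(Client, Input),
    %% Item is absent from the response map when no matching item exists.
    case maps:find(<<"Item">>, Result) of
        {ok, Item} -> {found, Item};
        error      -> not_found
    end.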
get_item(Client, Input, Options)
list_backups(Client, Input)

Lists backups associated with an AWS account. To list backups for a given table, specify TableName. ListBackups returns a paginated list of results with at most 1 MB worth of items in a page. You can also specify a limit for the maximum number of entries to be returned in a page.

In the request, start time is inclusive, but end time is exclusive. Note that these limits are for the time at which the original backup was requested.

You can call ListBackups a maximum of five times per second.
list_backups(Client, Input, Options)
list_contributor_insights(Client, Input)
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
list_contributor_insights(Client, Input, Options)
list_global_tables(Client, Input)
Lists all global tables that have a replica in the specified Region.
list_global_tables(Client, Input, Options)
list_tables(Client, Input)

Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
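A hedged sketch that follows the 100-name pagination via LastEvaluatedTableName and ExclusiveStartTableName; the return shape is the aws-erlang convention, an assumption worth verifying:

    %% Usage: list_all_tables(Client, undefined, []).
    list_all_tables(Client, Start, Acc) ->
        Input = case Start of
                    undefined -> #{};
                    _ -> #{<<"ExclusiveStartTableName">> => Start}
                end,
        {ok, Result, _} = aws_dynamodb:list_tables(Client, Input),
        Names = Acc ++ maps:get(<<"TableNames">>, Result, []),
        case maps:get(<<"LastEvaluatedTableName">>, Result, undefined) of
            undefined -> Names;
            Next -> list_all_tables(Client, Next, Names)
        end.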
list_tables(Client, Input, Options)
list_tags_of_resource(Client, Input)

Lists all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per account.

For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.

list_tags_of_resource(Client, Input, Options)
put_item(Client, Input)

Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. You can perform a conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an existing item if it has certain attribute values. You can return the item's attribute values in the same operation, using the ReturnValues parameter.

This topic provides general information about the PutItem API. For information on how to call the PutItem API using an AWS SDK in a specific language, see the AWS SDK documentation for your language.

Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index. Set type attributes cannot be empty.

Invalid requests with empty values will be rejected with a ValidationException exception.

To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.

For more information about PutItem, see Working with Items in the Amazon DynamoDB Developer Guide.
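A hedged sketch of the conditional put described above, using attribute_not_exists on an assumed partition key Artist:

    %% Sketch only; table and attribute names are illustrative.
    Input = #{<<"TableName">> => <<"Music">>,
              <<"Item">> => #{<<"Artist">> => #{<<"S">> => <<"No One You Know">>},
                              <<"SongTitle">> => #{<<"S">> => <<"Call Me Today">>},
                              <<"AlbumTitle">> => #{<<"S">> => <<"Somewhat Famous">>}},
              %% Refuse to overwrite an existing item with the same key.
              <<"ConditionExpression">> => <<"attribute_not_exists(Artist)">>},
    case aws_dynamodb:put_item(Client, Input) of
        {ok, _Result, _Http} -> created;
        %% Error shape varies by client version; treat non-ok as rejection.
        {error, Reason} -> {rejected, Reason}
    end.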
put_item(Client, Input, Options)
query(Client, Input)

The Query operation finds items based on primary key values. You can query any table or secondary index that has a composite primary key (a partition key and a sort key).

Use the KeyConditionExpression parameter to provide a specific value for the partition key. The Query operation will return all of the items from the table or index with that partition key value. You can optionally narrow the scope of the Query operation by specifying a sort key value and a comparison operator in KeyConditionExpression. To further refine the Query results, you can optionally provide a FilterExpression. A FilterExpression determines which items within the results should be returned to you. All of the other results are discarded.

A Query operation always returns a result set. If no matching items are found, the result set will be empty. Queries that do not return results consume the minimum number of read capacity units for that type of read operation.

DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that is returned to an application. The number of capacity units consumed will be the same whether you request all of the attributes (the default behavior) or just some of them (using a projection expression), and whether or not you use a FilterExpression.

Query results are always sorted by the sort key value. If the data type of the sort key is Number, the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By default, the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to false.

A single Query operation will read up to the maximum number of items set (if using the Limit parameter) or a maximum of 1 MB of data and then apply any filtering to the results using FilterExpression. If LastEvaluatedKey is present in the response, you will need to paginate the result set. For more information, see Paginating the Results in the Amazon DynamoDB Developer Guide.

FilterExpression is applied after a Query finishes, but before the results are returned. A FilterExpression cannot contain partition key or sort key attributes. You need to specify those attributes in the KeyConditionExpression.

A Query operation can return an empty result set and a LastEvaluatedKey if all the items read for the page of results are filtered out.

You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set the ConsistentRead parameter to true and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead when querying a global secondary index.
query(Client, Input, Options)
restore_table_from_backup(Client, Input)

Creates a new table from an existing backup. Any number of users can execute up to 4 concurrent restores (any type of restore) in a given account.

You can call RestoreTableFromBackup at a maximum rate of 10 times per second.

You must manually set up the following on the restored table:

- Auto scaling policies
- IAM policies
- Amazon CloudWatch metrics and alarms
- Tags
- Stream settings
- Time to Live (TTL) settings
restore_table_from_backup(Client, Input, Options)
restore_table_to_point_in_time(Client, Input)

Restores the specified table to the specified point in time within EarliestRestorableDateTime and LatestRestorableDateTime. You can restore your table to any point in time during the last 35 days. Any number of users can execute up to 4 concurrent restores (any type of restore) in a given account.

When you restore using point in time recovery, DynamoDB restores your table data to the state based on the selected date and time (day:hour:minute:second) to a new table.

Along with data, the following are also included on the new restored table using point in time recovery:

- Global secondary indexes (GSIs)
- Local secondary indexes (LSIs)
- Provisioned read and write capacity
- Encryption settings

All these settings come from the current settings of the source table at the time of restore.

You must manually set up the following on the restored table:

- Auto scaling policies
- IAM policies
- Amazon CloudWatch metrics and alarms
- Tags
- Stream settings
- Time to Live (TTL) settings
- Point in time recovery settings
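A minimal sketch of the request shape; the table names and the epoch-seconds timestamp are invented:

    %% Sketch only; names and timestamp are illustrative assumptions.
    Input = #{<<"SourceTableName">> => <<"Music">>,
              <<"TargetTableName">> => <<"MusicRestored">>,
              %% Epoch-seconds timestamp; alternatively set
              %% <<"UseLatestRestorableTime">> => true and omit RestoreDateTime.
              <<"RestoreDateTime">> => 1544797500},
    {ok, _Result, _Http} = aws_dynamodb:restore_table_to_point_in_time(Client, Input).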
restore_table_to_point_in_time(Client, Input, Options)
scan(Client, Input)

The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression.

If the total number of scanned items exceeds the maximum dataset size limit of 1 MB, the scan stops and results are returned to the user with a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria.

A single Scan operation reads up to the maximum number of items set (if using the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to the results using FilterExpression. If LastEvaluatedKey is present in the response, you need to paginate the result set. For more information, see Paginating the Results in the Amazon DynamoDB Developer Guide.

Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. For more information, see Parallel Scan in the Amazon DynamoDB Developer Guide.

Scan uses eventually consistent reads when accessing the data in a table; therefore, the result set might not include the changes to data in the table immediately before the operation began. If you need a consistent copy of the data, as of the time that the Scan begins, you can set the ConsistentRead parameter to true.
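A hedged sketch of a filtered Scan that paginates on LastEvaluatedKey; the table and filter are assumptions:

    %% Usage: scan_all(Client, undefined, []).
    scan_all(Client, Start, Acc) ->
        Base = #{<<"TableName">> => <<"Music">>,
                 <<"FilterExpression">> => <<"AlbumTitle = :a">>,
                 <<"ExpressionAttributeValues">> =>
                     #{<<":a">> => #{<<"S">> => <<"Somewhat Famous">>}}},
        Input = case Start of
                    undefined -> Base;
                    _ -> Base#{<<"ExclusiveStartKey">> => Start}
                end,
        {ok, Result, _} = aws_dynamodb:scan(Client, Input),
        Items = Acc ++ maps:get(<<"Items">>, Result, []),
        case maps:get(<<"LastEvaluatedKey">>, Result, undefined) of
            undefined -> Items;
            Next -> scan_all(Client, Next, Items)
        end.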
scan(Client, Input, Options)
tag_resource(Client, Input)

Associates a set of tags with an Amazon DynamoDB resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. You can call TagResource up to five times per second, per account.

For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.

tag_resource(Client, Input, Options)
transact_get_items(Client, Input)

TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or more tables (but not from indexes) in a single account and Region. A TransactGetItems call can contain up to 25 TransactGetItem objects, each of which contains a Get structure that specifies an item to retrieve from a table in the account and Region. A call to TransactGetItems cannot retrieve items from tables in more than one AWS account or Region. The aggregate size of the items in the transaction cannot exceed 4 MB.

DynamoDB rejects the entire TransactGetItems request if any of the following is true:

- A conflicting operation is in the process of updating an item to be read.
- There is insufficient provisioned capacity for the transaction to be completed.
- There is a user error, such as an invalid data format.
- The aggregate size of the items in the transaction exceeds 4 MB.
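A minimal sketch retrieving two items transactionally; both tables and keys are invented:

    %% Sketch only; table and key names are illustrative assumptions.
    Input = #{<<"TransactItems">> =>
        [#{<<"Get">> => #{<<"TableName">> => <<"Music">>,
                          <<"Key">> => #{<<"Artist">> => #{<<"S">> => <<"No One You Know">>},
                                         <<"SongTitle">> => #{<<"S">> => <<"Call Me Today">>}}}},
         #{<<"Get">> => #{<<"TableName">> => <<"Albums">>,
                          <<"Key">> => #{<<"AlbumTitle">> => #{<<"S">> => <<"Somewhat Famous">>}}}}]},
    {ok, Result, _Http} = aws_dynamodb:transact_get_items(Client, Input),
    %% Responses is a list of item maps, in request order.
    Responses = maps:get(<<"Responses">>, Result).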
transact_get_items(Client, Input, Options)
transact_write_items(Client, Input)

TransactWriteItems is a synchronous write operation that groups up to 25 action requests. These actions can target items in different tables, but not in different AWS accounts or Regions, and no two actions can target the same item. For example, you cannot both ConditionCheck and Update the same item. The aggregate size of the items in the transaction cannot exceed 4 MB.

The actions are completed atomically so that either all of them succeed, or all of them fail. They are defined by the following objects:

- Put: Initiates a PutItem operation to write a new item. This structure specifies the primary key of the item to be written, the name of the table to write it in, an optional condition expression that must be satisfied for the write to succeed, a list of the item's attributes, and a field indicating whether to retrieve the item's attributes if the condition is not met.
- Update: Initiates an UpdateItem operation to update an existing item. This structure specifies the primary key of the item to be updated, the name of the table where it resides, an optional condition expression that must be satisfied for the update to succeed, an expression that defines one or more attributes to be updated, and a field indicating whether to retrieve the item's attributes if the condition is not met.
- Delete: Initiates a DeleteItem operation to delete an existing item. This structure specifies the primary key of the item to be deleted, the name of the table where it resides, an optional condition expression that must be satisfied for the deletion to succeed, and a field indicating whether to retrieve the item's attributes if the condition is not met.
- ConditionCheck: Applies a condition to an item that is not being modified by the transaction. This structure specifies the primary key of the item to be checked, the name of the table where it resides, a condition expression that must be satisfied for the transaction to succeed, and a field indicating whether to retrieve the item's attributes if the condition is not met.

DynamoDB rejects the entire TransactWriteItems request if any of the following is true:

- A condition in one of the condition expressions is not met.
- An ongoing operation is in the process of updating the same item.
- There is insufficient provisioned capacity for the transaction to be completed.
- An item size becomes too large (bigger than 400 KB), a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
- The aggregate size of the items in the transaction exceeds 4 MB.
- There is a user error, such as an invalid data format.
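A hedged sketch pairing a ConditionCheck with a Put so the write only commits if a related item exists; all names are invented:

    %% Sketch only; tables, keys, and conditions are illustrative assumptions.
    Input = #{<<"TransactItems">> =>
        [#{<<"ConditionCheck">> =>
               #{<<"TableName">> => <<"Customers">>,
                 <<"Key">> => #{<<"CustomerId">> => #{<<"S">> => <<"C42">>}},
                 <<"ConditionExpression">> => <<"attribute_exists(CustomerId)">>}},
         #{<<"Put">> =>
               #{<<"TableName">> => <<"Orders">>,
                 <<"Item">> => #{<<"OrderId">> => #{<<"S">> => <<"O1001">>},
                                 <<"CustomerId">> => #{<<"S">> => <<"C42">>}},
                 <<"ConditionExpression">> => <<"attribute_not_exists(OrderId)">>}}]},
    %% Either both actions commit or neither does.
    {ok, _Result, _Http} = aws_dynamodb:transact_write_items(Client, Input).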
transact_write_items(Client, Input, Options)
untag_resource(Client, Input)

Removes the association of tags from an Amazon DynamoDB resource. You can call UntagResource up to five times per second, per account.
untag_resource(Client, Input, Options)
update_continuous_backups(Client, Input)

UpdateContinuousBackups enables or disables point in time recovery for the specified table. A successful UpdateContinuousBackups call returns the current ContinuousBackupsDescription. Continuous backups are ENABLED on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.

Once continuous backups and point in time recovery are enabled, you can restore to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.

LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table to any point in time during the last 35 days.
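A minimal sketch enabling point in time recovery on an assumed table:

    %% Sketch only; the table name is illustrative.
    Input = #{<<"TableName">> => <<"Music">>,
              <<"PointInTimeRecoverySpecification">> =>
                  #{<<"PointInTimeRecoveryEnabled">> => true}},
    {ok, Result, _Http} = aws_dynamodb:update_continuous_backups(Client, Input),
    Description = maps:get(<<"ContinuousBackupsDescription">>, Result).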
update_continuous_backups(Client, Input, Options)
update_contributor_insights(Client, Input)
Updates the status for contributor insights for a specific table or index.
update_contributor_insights(Client, Input, Options)
update_global_table(Client, Input)
Adds or removes replicas in the specified global table. The global table must already exist to be able to use this operation. Any replica to be added must be empty, have the same name as the global table, have the same key schema, have DynamoDB Streams enabled, and have the same provisioned and maximum write capacity units.
UpdateGlobalTable
to add replicas
and remove replicas in a single request, for simplicity we recommend that
you issue separate requests for adding or removing replicas.
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
The global secondary indexes must have the same provisioned and maximum write capacity units.
update_global_table(Client, Input, Options)
update_global_table_settings(Client, Input)
Updates settings for a global table.
update_global_table_settings(Client, Input, Options)
update_item(Client, Input)

Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).

You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter.
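A minimal UpdateItem sketch using an update expression; the counter attribute is an invented example and must already exist for the SET arithmetic to succeed:

    %% Sketch only; table, key, and attribute names are illustrative.
    Input = #{<<"TableName">> => <<"Music">>,
              <<"Key">> => #{<<"Artist">> => #{<<"S">> => <<"No One You Know">>},
                             <<"SongTitle">> => #{<<"S">> => <<"Call Me Today">>}},
              <<"UpdateExpression">> => <<"SET Plays = Plays + :incr">>,
              <<"ExpressionAttributeValues">> => #{<<":incr">> => #{<<"N">> => <<"1">>}},
              <<"ReturnValues">> => <<"UPDATED_NEW">>},
    {ok, Result, _Http} = aws_dynamodb:update_item(Client, Input),
    %% Only the attributes changed by the update are returned.
    Updated = maps:get(<<"Attributes">>, Result, #{}).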
update_item(Client, Input, Options)
update_table(Client, Input)

Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table.

You can only perform one of the following operations at once:

- Modify the provisioned throughput settings of the table.
- Enable or disable DynamoDB Streams on the table.
- Remove a global secondary index from the table.
- Create a new global secondary index on the table. After the index begins backfilling, you can use UpdateTable to perform other operations.

UpdateTable is an asynchronous operation; while it is executing, the table status changes from ACTIVE to UPDATING. While it is UPDATING, you cannot issue another UpdateTable request. When the table returns to the ACTIVE state, the UpdateTable operation is complete.
update_table(Client, Input, Options)
update_table_replica_auto_scaling(Client, Input)
Updates auto scaling settings on your global tables at once.
update_table_replica_auto_scaling(Client, Input, Options)
update_time_to_live(Client, Input)

The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification. It can take up to one hour for the change to fully process. Any additional UpdateTimeToLive calls for the same table during this one hour duration result in a ValidationException.

TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted.
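A minimal sketch enabling TTL on an assumed epoch-seconds attribute:

    %% Sketch only; the table and attribute names are illustrative.
    Input = #{<<"TableName">> => <<"Music">>,
              <<"TimeToLiveSpecification">> =>
                  #{<<"AttributeName">> => <<"ExpiresAt">>,  %% epoch-seconds attribute
                    <<"Enabled">> => true}},
    {ok, Result, _Http} = aws_dynamodb:update_time_to_live(Client, Input),
    Spec = maps:get(<<"TimeToLiveSpecification">>, Result).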