Backing up DynamoDB

by Shawn Bower

As we have been helping folks move their applications to AWS, we have found many of the services provided to be amazing. We started using DynamoDB, the AWS managed NoSQL database, to store application data. The story behind DynamoDB is fascinating, as it is one of the key building blocks used for AWS services. We have been very impressed with DynamoDB itself, as it provides a completely managed, scalable solution that allows us to focus on applications rather than infrastructure tasks. Almost. While the data stored in DynamoDB is highly durable, there is no safeguard against human error; deleting an item is forever. Originally this problem seemed like it would be trivial to solve; surely AWS offers an easy backup feature. My first attempt was to try the export function from the AWS console.

[Screenshot: the Export option in the DynamoDB console]

Then I ended up here…

[Screenshot: the AWS Data Pipeline setup that the export option leads to]

What? Why would I want to create a data pipeline to back up my DynamoDB table? Some of our tables are very small, and most are not much more than a key-value store. Looking into this process, the data pipeline actually creates an Elastic MapReduce cluster to facilitate the backup to S3. You can get full details on the setup here. The output of this process is a compressed zip file containing a JSON representation of the table. This seemed too heavyweight for our use case. I started thinking that this would be relatively straightforward with Lambda, given that you can now schedule Lambda functions with a cron-like syntax. The full code is available here.

The first thing I wanted to do was to describe the table and write that metadata to the first line of the output file.

[Code screenshot: describing the table]
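A minimal sketch of that step, assuming the Node.js AWS SDK (v2); the region and table name here are placeholders, not necessarily what the repository uses:

// Sketch: describe the table so its definition can be written to the backup.
// The region and table name are placeholders.
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({region: 'us-east-1'});
var tableName = 'alarms';

dynamodb.describeTable({TableName: tableName}, function (err, data) {
  if (err) {
    console.error('describeTable failed:', err);
    return;
  }
  // data.Table holds the key schema, indexes, and provisioned throughput;
  // it becomes the first line of the backup file.
  var metadataLine = JSON.stringify(data.Table) + '\n';
  // ... write metadataLine to the backup stream, then start the scan ...
});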

Using the describeTable API call, we can get back the structure of the table as well as configuration information such as the provisioned read/write capacity. The result of this call will look something like:

{
  "AttributeDefinitions": [
    {"AttributeName": "group", "AttributeType": "S"},
    {"AttributeName": "name", "AttributeType": "S"}
  ],
  "TableName": "alarms",
  "KeySchema": [{"AttributeName": "name", "KeyType": "HASH"}],
  "TableStatus": "ACTIVE",
  "CreationDateTime": "2015-03-20T14:03:31.849Z",
  "ProvisionedThroughput": {"NumberOfDecreasesToday": 0, "ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
  "TableSizeBytes": 16676,
  "ItemCount": 70,
  "TableArn": "arn:aws:dynamodb:us-east-1:078742956215:table/alarms",
  "GlobalSecondaryIndexes": [
    {
      "IndexName": "group-index",
      "KeySchema": [{"AttributeName": "group", "KeyType": "HASH"}],
      "Projection": {"ProjectionType": "ALL"},
      "IndexStatus": "ACTIVE",
      "ProvisionedThroughput": {"NumberOfDecreasesToday": 0, "ReadCapacityUnits": 1, "WriteCapacityUnits": 2},
      "IndexSizeBytes": 16676,
      "ItemCount": 70,
      "IndexArn": "arn:aws:dynamodb:us-east-1:078742956215:table/alarms/index/group-index"
    }
  ]
}

Having the table metadata makes it easy to recreate the table. It's also worth pointing out that we use the provisioned ReadCapacityUnits to limit our scan queries while pulling data out of the table. The next thing we need to do is write every item out to our backup file. This is accomplished by scanning the table and providing a callback, onScan.
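For example, inside the describeTable callback the read capacity can be pulled from the metadata and used to cap each page of the scan. A rough sketch, continuing the variables from the snippet above (scanParams and the Limit-based throttling are illustrative, not necessarily the exact code in the repository):

// Throttle the scan using the table's provisioned read capacity.
var readCapacityUnits = data.Table.ProvisionedThroughput.ReadCapacityUnits;
var scanParams = {
  TableName: tableName,
  Limit: readCapacityUnits   // fetch at most this many items per scan call
};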

[Code screenshot: the onScan callback]

In this function we loop through the data items and write them out to a file. After that we look at the LastEvaluatedKey: if it is undefined, we have scanned the entire table; otherwise we recursively call the onScan function, providing the LastEvaluatedKey as the starting point for the next scan. The data is continually compressed and shipped to S3, which is achieved using a stream pipe.
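A rough sketch of that callback, continuing the assumptions above (backupStream stands in for the compressed stream to S3 that is set up in the next step):

function onScan(err, data) {
  if (err) {
    console.error('scan failed:', err);
    return;
  }
  // Write each item out as one line of JSON.
  data.Items.forEach(function (item) {
    backupStream.write(JSON.stringify(item) + '\n');
  });
  if (typeof data.LastEvaluatedKey === 'undefined') {
    // No LastEvaluatedKey means the whole table has been scanned.
    backupStream.end();
  } else {
    // Otherwise, continue the scan from where the last page stopped.
    scanParams.ExclusiveStartKey = data.LastEvaluatedKey;
    dynamodb.scan(scanParams, onScan);
  }
}

dynamodb.scan(scanParams, onScan);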

[Code screenshot: streaming the backup to S3]
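One way to sketch that pipe is with Node's built-in zlib and stream modules and the SDK's s3.upload, which accepts a stream as its body; the bucket name and key below are placeholders:

var zlib = require('zlib');
var stream = require('stream');
var s3 = new AWS.S3();

// Everything written to backupStream is gzip-compressed and piped
// straight into an S3 upload, so nothing is buffered on local disk.
var backupStream = zlib.createGzip();
var uploadBody = new stream.PassThrough();
backupStream.pipe(uploadBody);

s3.upload({
  Bucket: 'my-dynamo-backups',   // replace with your backup bucket
  Key: tableName + '/' + new Date().toISOString() + '.json.gz',
  Body: uploadBody
}, function (err, result) {
  if (err) {
    console.error('upload to S3 failed:', err);
  } else {
    console.log('backup written to', result.Location);
  }
});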

You will have to update the bucket name to the S3 bucket in your account where you wish to store the DynamoDB backups. Once the code was in place, we uploaded it to Lambda and used a cron schedule to run the process nightly. For details on how to install and use this backup process, please refer to the GitHub repository.

As we move more to AWS and use more of its services, we find that there are some gaps. We have begun to log them and to tackle them with our cross-campus working group. As we come up with solutions to these gaps we will post them, so everyone on campus can benefit. If anyone is interested in contributing to this joint effort, please email cloud-support@cornell.edu.