AWSCLI S3 Backup via AWS Direct Connect

By Ken Lassey, Cornell EMCS / IPP

Problem

The AWS CLI tools default to using the Internet for their connections when reaching out to services like S3, because S3 only provides public endpoints for network access to the service. This is a problem if your device is in Cornell 10-space, since it cannot reach the Internet. Depending on the data, it can also be a security concern, although the AWS CLI S3 commands do use HTTPS for their connections.

Another concern is the potential data egress charges for transferring large amounts of data from AWS back to campus. If you need to restore an on-premise system directly from S3, this transfer would accrue egress charges.

Solution

Assuming you have worked with Cloudification to enable Cornell's AWS Direct Connect service in your VPC, you can force the connection through Direct Connect with a free-tier-eligible t2.micro Linux instance. By running the AWS CLI commands from this EC2 instance, you can pull data from your on-premise system over Direct Connect and drop it into S3 in one step.

Note that if your on-premise systems have only publicly routable IP addresses, and no 10-space addresses, they will not be able to route over Direct Connect (unless Cloudification has worked with you to enable special routing). In most campus accounts, VPCs are only configured to route 10-space traffic over Direct Connect. If the systems you want to back up have a 10-space address, you are good to go!
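
A quick way to confirm that your VPC can actually reach an on-premise 10-space host over Direct Connect is to test from an instance in the VPC (for example, the t2.micro described below). This is only a sketch; the 10.x.x.x address is a placeholder for your own server, and a failure may simply mean ICMP or the SMB port is blocked by a firewall rule rather than a routing problem.

# From an EC2 instance in the VPC, check that the on-premise server's
# 10-space address is reachable (replace 10.x.x.x with your server's IP)
ping -c 3 10.x.x.x

# ICMP may be blocked; a TCP check of the SMB port (445) is more telling
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/10.x.x.x/445' && echo "port 445 reachable"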

Example Cases

  1. You have several servers that back up to a local storage server. You want to copy the backup data into AWS S3 buckets for disaster recovery.
  2. You want to back up several servers directly into AWS S3 buckets.

In either case, you either do not want your data going in or out over the Internet, or it cannot because the servers are in Cornell 10-space.

AWSCLI Over Internet

aws s3 sync <backupfolder> <s3bucket>

If you run the AWS CLI command from the backup server (or from each individual server), the data goes out over the Internet.
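
For example, run on a Windows backup server, the command might look like the following; the folder path and bucket name here are purely hypothetical placeholders.

aws s3 sync D:\Backups s3://example-backup-bucket/backup-server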

Example Solution

By utilizing a free-tier-eligible t2.micro Linux instance, you can force the traffic over Cornell's Direct Connect to AWS.

AWS CLI Over Direct Connect

By running the AWS CLI commands from the t2.micro instance, you force the data to travel over AWS Direct Connect.

On your local Windows server (backup or individual) you need to 'share' the folder you want to copy to S3. You should be using a service account and not your own NetID. You can alternatively enable NFS on your folder if you are copying files from a Linux server, or even just tunnel through SSH. The following examples copy from a Windows share.
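
If your source is a Linux server, the NFS or SSH route mentioned above could look roughly like the sketch below, run on the t2.micro instance. The export path, staging directory, and account names are placeholders, not part of the original procedure.

# Option A: mount an NFS export from the Linux server instead of a CIFS share
sudo yum install -y nfs-utils
sudo mount -t nfs <servername or ip>:/<exported folder> /mnt/<mountname>

# Option B: pull the files over SSH with rsync, then sync the staged copy to S3
# (install rsync first if it is not present: sudo yum install -y rsync)
rsync -av <service account>@<servername or ip>:/<backup folder>/ /tmp/backup-staging/
aws s3 sync /tmp/backup-staging/ s3://<s3-bucket-name>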

In AWS:

  1. Create an S3 bucket to hold your data (an example command is shown below)
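
If you prefer the CLI to the console, the bucket can be created from any machine with the AWS CLI configured for your account; the bucket name and region below are placeholders.

aws s3 mb s3://<s3-bucket-name> --region us-east-1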

On your systems:

  1. Share the folders to be backed up
  2. Create a service account for the backup process
  3. Give the service account at least read access to the shared backup folder
  4. If needed, allow access through the Managed Firewall from your AWS VPC to the server(s) or subnet where the servers reside

On the t2.micro instance you need to:

  1. Install the CIFS Utilities:
    sudo yum install cifs-utils
  2. Create a mount point for the backup folder:
    mkdir /mnt/<any mountname you want>
  3. Mount the shared backup folder (the credentials can also go in a file instead of on the command line; see the note after this list):
    sudo mount.cifs //<servername or ip>/<shared folder> /mnt/<mountname> -o \
    username="<service account>",password="<service account password>",domain="<your domain>"
  4. Run the s3 sync from the t2.micro instance, using the mount point as the source:
    aws s3 sync /mnt/<mountname> s3://<s3-bucket-name>
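
Putting the service account password directly on the mount command line leaves it in your shell history; mount.cifs can read it from a credentials file instead. A minimal sketch, with placeholder paths and names:

# Contents of /home/ec2-user/.smbcreds (readable only by its owner):
username=<service account>
password=<service account password>
domain=<your domain>

# Lock the file down, then reference it when mounting:
chmod 600 /home/ec2-user/.smbcreds
sudo mount.cifs //<servername or ip>/<shared folder> /mnt/<mountname> -o credentials=/home/ec2-user/.smbcreds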

 

Instead of running this manually, I created a script on the t2.micro instance to perform the backup. The script can be added as a cron job and run on a schedule.
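
For example, a crontab entry like the following (added with crontab -e; the path, schedule, and log file are just illustrative) would run the script nightly at 2:00 AM and capture its output in a log:

0 2 * * * /home/ec2-user/backup.sh >> /home/ec2-user/backup.log 2>&1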

Sample Script 1

  1. I used nano to create backup.sh
       #!/bin/sh
       # Source: the CIFS share mounted on this instance; destination: the S3 bucket
       backupfolder=/mnt/<sharedfoldername>
       s3path=s3://<s3bucketname>
       aws s3 sync "$backupfolder" "$s3path"
  2. The script needs to be executable, so run this command on the t2.micro instance:
    sudo chmod +x <scriptname>
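
To test it, you can run the script once by hand and spot-check the bucket contents; the bucket name is the same placeholder used in the script.

./backup.sh
aws s3 ls s3://<s3bucketname> --recursive | head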

Sample Script 2

This script mounts the share, backs up multiple folders on my backup server, and then unmounts the shared folder.

#!/bin/bash
# Mount the backup server's share, sync each folder to S3, then unmount
sudo mount.cifs //128.253.109.11/ALC /mnt/<mountname> -o user="<serviceaccount>",password="<serviceaccount pwd>",domain="<your domain>"
srvr=(FOLDER1 FOLDER2 FOLDER3 FOLDER4 FOLDER5 FOLDER6)
for s in "${srvr[@]}"; do
    bufolder=/mnt/<mountname>/$s
    s3path=s3://<s3bucketname>/$s
    aws s3 sync "$bufolder" "$s3path"
done
sudo umount -f /mnt/<mountname>
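
One optional hardening step, sketched below under the same placeholder mount name: if the mount fails (server offline, password changed), Sample Script 2 would silently sync an empty /mnt/<mountname> and back up nothing. Adding a check just after the mount.cifs line catches that; mountpoint ships with util-linux on Amazon Linux.

# Abort before syncing if the share did not actually mount
if ! mountpoint -q /mnt/<mountname>; then
    echo "Backup share is not mounted; exiting" >&2
    exit 1
fi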