Backup MySQL databases to Amazon S3 Storage


In this guide, I’ll take you through the steps to back up MySQL databases to Amazon S3 on Ubuntu and CentOS based MySQL servers. Amazon S3 is an object storage web service offered by Amazon Web Services. It provides storage through web service interfaces and offers APIs for you to manage the objects stored on it.

Since S3 is a highly available distributed object storage service, it is an ideal place to store critical data such as databases, which don’t consume much space. Before you can interact with S3 over its API, you need to install the awscli tool, which provides the aws command.

Step 1: Install awscli on CentOS, Ubuntu, and macOS

The only requirement is Python 2 version 2.6.5+ or Python 3 version 3.3+. To install awscli, first install pip for your operating system:

### CentOS / Fedora / RHEL ###
sudo yum install python3-pip # RHEL 8 / Fedora
sudo yum install python-pip  # CentOS 7 / RHEL 7

### Ubuntu / Debian ###
sudo apt update
sudo apt install python3-pip # Debian 11
sudo apt install python-pip # Debian 10/9

### macOS ###
brew install python3 # pip3 is installed alongside Homebrew's Python 3

Then install awscli with pip:

# For python3
sudo pip3 install awscli

# For python2
sudo pip install awscli

Verify that the AWS CLI installed correctly by running aws --version.

$ aws --version
aws-cli/1.25.87 Python/3.10.6 Linux/5.15.0-40-generic botocore/1.27.86

The AWS CLI is updated regularly to add support for new services and commands. To update to the latest version of the AWS CLI, run the installation command again.

sudo pip3 install awscli --upgrade

If you need to uninstall the AWS CLI, use pip uninstall.

sudo pip3 uninstall awscli

Step 2: Configure awscli for AWS S3 storage

Configure AWS CLI tool with your user’s credentials by running:

$ aws configure
AWS Access Key ID [None]: 
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:

Provide at least the Access Key ID and Secret Access Key. This creates a ~/.aws folder containing your credentials.

ls ~/.aws
config credentials
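For reference, aws configure stores those answers in two INI-style files under ~/.aws; a typical layout (the key values shown are placeholders) looks like:

```
# ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[default]
region = us-east-1
output = json
```

Editing these files directly is equivalent to re-running aws configure.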

Step 3: Dump MySQL databases

Now it’s time to back up your MySQL databases. We first export some variables used when dumping the databases.

export DB_USER="root"
export DB_PASSWORD=""
export DB_HOST="localhost"
export date_format=`date +%a`
export db_dir="/tmp/databases/$date_format"
export dest_backup_file="/tmp/mysql-databases-$date_format.tgz"
export s3_bucket="s3://bucket-name"
mkdir -p $db_dir 2>/dev/null
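One consequence of using date +%a worth noting: the abbreviated weekday name repeats every seven days, so each weekday's backup overwrites last week's and you keep a rolling one-week window. A quick sketch of the resulting filename:

```shell
# date +%a yields the abbreviated weekday name (Mon, Tue, ...), so the
# backup path repeats every 7 days -- a rolling one-week retention window.
date_format=$(date +%a)
dest_backup_file="/tmp/mysql-databases-$date_format.tgz"
echo "$dest_backup_file"
```

If you need longer retention, swapping in date +%F (full date) would keep one archive per day instead.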

Create db_dir if it doesn’t exist already:

if [ ! -d "$db_dir" ]; then
   mkdir -p "$db_dir"
fi

The next thing to do is dump the databases to file.

# Get all MySQL databases
databases=`mysql -u $DB_USER -h $DB_HOST -p$DB_PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`

# Dump all databases
for db in $databases; do
 if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]] ; then
  echo "Dumping database: $db"
  mysqldump -u $DB_USER -h $DB_HOST -p$DB_PASSWORD --databases $db > $db_dir/$db-$date_format.sql
 fi
done

All your databases except information_schema, performance_schema, mysql, and any database whose name starts with an underscore will be exported to the directory specified in db_dir.
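The exclusion test in that loop can be tried in isolation. The sketch below runs the same pattern checks against an illustrative list of names (the names themselves are made up):

```shell
# Illustrative stand-in for the output of SHOW DATABASES
sample_dbs="information_schema mysql performance_schema _tmp wordpress shopdb"

kept=""
for db in $sample_dbs; do
  # Same checks as the dump loop: skip MySQL's internal schemas and
  # anything whose name starts with an underscore
  if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] \
     && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]]; then
    kept="$kept $db"
  fi
done
echo "Will dump:$kept"
# prints: Will dump: wordpress shopdb
```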

Step 4: Backup MySQL databases to S3 Object Storage

You now have all the databases exported and should be ready to back them up to S3. It is good practice to compress them before copying to S3.

tar -zcvf $dest_backup_file -C $db_dir .
aws s3 cp $dest_backup_file $s3_bucket

This will upload the compressed file to AWS S3. Once it is done, check the contents using:

$ aws s3 ls $s3_bucket
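Restoring is the reverse of the backup: fetch the archive, extract it, and replay each dump into mysql. The sketch below fabricates a small sample archive so the extraction step can be shown offline; in practice you would first download the real archive with aws s3 cp, and the echo would be dropped so the mysql commands actually run:

```shell
# Offline illustration: fabricate an archive like the one the backup uploads
mkdir -p /tmp/demo-dbs
echo "CREATE DATABASE IF NOT EXISTS shopdb;" > /tmp/demo-dbs/shopdb-Mon.sql
tar -zcf /tmp/mysql-databases-Mon.tgz -C /tmp/demo-dbs .

# In practice, fetch the real archive first, e.g.:
#   aws s3 cp $s3_bucket/mysql-databases-Mon.tgz /tmp/
mkdir -p /tmp/restore
tar -xzf /tmp/mysql-databases-Mon.tgz -C /tmp/restore

# Replay each dump; the echo only prints the command that would run --
# remove it to actually restore.
for f in /tmp/restore/*.sql; do
  echo mysql -u "${DB_USER:-root}" -h "${DB_HOST:-localhost}" "<" "$f"
done
```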

Backup MySQL databases to S3 – The scripted way

A script that automates this process is available at the link below:
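In the meantime, the steps above can be consolidated into a single script. This is a sketch, not the linked script itself; the bucket name, credentials, and install path (/tmp/mysql-s3-backup.sh here) are placeholders you would adjust:

```shell
# Write the backup script to a placeholder path; adjust to taste
cat > /tmp/mysql-s3-backup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

DB_USER="root"
DB_PASSWORD=""
DB_HOST="localhost"
date_format=$(date +%a)
db_dir="/tmp/databases/$date_format"
dest_backup_file="/tmp/mysql-databases-$date_format.tgz"
s3_bucket="s3://bucket-name"   # placeholder bucket

mkdir -p "$db_dir"

# List databases, then dump everything except MySQL's internal schemas
databases=$(mysql -u "$DB_USER" -h "$DB_HOST" -p"$DB_PASSWORD" \
  -e "SHOW DATABASES;" | tr -d "| " | grep -v Database)

for db in $databases; do
  if [[ "$db" != "information_schema" && "$db" != "performance_schema" \
     && "$db" != "mysql" && "$db" != _* ]]; then
    echo "Dumping database: $db"
    mysqldump -u "$DB_USER" -h "$DB_HOST" -p"$DB_PASSWORD" \
      --databases "$db" > "$db_dir/$db-$date_format.sql"
  fi
done

# Compress and upload
tar -zcf "$dest_backup_file" -C "$db_dir" .
aws s3 cp "$dest_backup_file" "$s3_bucket"
EOF
chmod +x /tmp/mysql-s3-backup.sh
bash -n /tmp/mysql-s3-backup.sh && echo "syntax OK"
```

After a successful manual run, a cron entry such as 0 2 * * * /path/to/mysql-s3-backup.sh would take a backup every night at 02:00.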

