I’ve been spending a few hours setting up a good backup strategy for my EC2 server, running NOMP.se.
The service runs on a single reserved small instance at present. It’s using Amazon’s Linux distro with an Elastic Block Storage (EBS) root disk.
The first thing you should do after setting up an EC2 host is to make an EBS snapshot. An EBS snapshot is a full disk device dump (like “dd” produces, if you’re a Unix hacker). While EBS snapshots are a great feature, and should be a cornerstone in any EC2 backup strategy, they are full volume dumps, and hence take up a lot of space.
To complement my EBS snapshots, which I run manually before and after bigger changes (yum update, package installs etc.), I hacked together a little shell script in 1337 bytes (really) to back up my MySQL databases in a supported manner (mysqldump) and also to back up a number of configuration files from the file system. The script makes use of a great tool called s3cmd, which is used to upload files to S3 (Amazon’s Simple Storage Service).
How to set up the script (all steps as root):
- Install s3cmd
- Run s3cmd --configure
- Copy the generated .s3cfg file to /etc/s3cfg (the path the script expects)
- Download the S3 backup script to /etc/cron.daily/
- Edit the script to suit your needs.
I hope someone finds this useful!
Here’s what the script looks like:
#!/bin/sh

## Specify database schemas to backup and credentials
DATABASES="nompdb wp_blog"

## Syntax: databasename as per above + _USER and _PW
## _USER is mandatory, _PW is optional
nompdb_USER=root
wp_blog_USER=root

## Specify directories to backup (it's clever to use relative paths)
DIRECTORIES="root etc/cron.daily etc/httpd etc/tomcat6 tmp/jenkinsbackup"

## Initialize some variables
DATE=$(date +%Y%m%d)
DATETIME=$(date +%Y%m%d-%H%M)   # %M = minutes (%m would repeat the month)
BACKUP_DIRECTORY=/tmp/backups
S3_CMD="/usr/bin/s3cmd --config /etc/s3cfg"

## Specify where the backups should be placed
S3_BUCKET_URL=s3://nomp-backup/$DATE/

## The script
cd /
mkdir -p $BACKUP_DIRECTORY
rm -rf $BACKUP_DIRECTORY/*

## Backup MySQL databases
for DB in $DATABASES
do
  BACKUP_FILE=$BACKUP_DIRECTORY/${DATETIME}_${DB}.sql
  ## Resolve the per-database credentials (user first, so it is
  ## available whether or not a password was set)
  USER=$(eval echo \$${DB}_USER)
  PASSWORD=$(eval echo \$${DB}_PW)
  if [ -n "$PASSWORD" ]
  then
    /usr/bin/mysqldump -v --user=$USER --password=$PASSWORD -h localhost -r $BACKUP_FILE $DB 2>&1
  else
    /usr/bin/mysqldump -v --user=$USER -h localhost -r $BACKUP_FILE $DB 2>&1
  fi
  /bin/gzip $BACKUP_FILE 2>&1
  $S3_CMD put ${BACKUP_FILE}.gz $S3_BUCKET_URL 2>&1
done

## Backup of config directories
for DIR in $DIRECTORIES
do
  ## Turn the relative path into a flat file name, e.g. etc/httpd -> etc-httpd
  BACKUP_FILE=$BACKUP_DIRECTORY/${DATETIME}_$(echo $DIR | sed 's/\//-/g').tgz
  /bin/tar zcvf ${BACKUP_FILE} $DIR 2>&1
  $S3_CMD put ${BACKUP_FILE} $S3_BUCKET_URL 2>&1
done
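The per-database credentials are resolved with eval-based indirection: the database name is pasted together with _USER or _PW and the resulting variable name is expanded. A minimal standalone sketch of that lookup (the values here are examples, not real credentials):

```shell
#!/bin/sh
# Standalone sketch of the eval-based credential lookup.
DATABASES="nompdb wp_blog"
nompdb_USER=root
nompdb_PW=secret
wp_blog_USER=root          # no wp_blog_PW set: the password is optional

for DB in $DATABASES
do
  USER=$(eval echo \$${DB}_USER)   # expands e.g. $nompdb_USER
  PASSWORD=$(eval echo \$${DB}_PW) # empty if no _PW variable was set
  echo "$DB user=$USER pw=$PASSWORD"
done
```

Because the loop tests the resolved value ($PASSWORD) rather than the literal variable name, databases without a _PW variable fall through to the password-less mysqldump branch.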
Hi Stefan,
There are a couple of bugs with your script that I thought I’d point out. The first is a simple one in the date formatting of DATETIME. You have +%Y%m%d-%H%m. This puts the month after the hour, rather than what I think you were trying to achieve, which is +%Y-%m-%d_%k%M.
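With GNU date and a pinned timestamp, the mix-up is easy to see (the timestamp here is just an example):

```shell
# %m is the month, %M is the minute; a fixed timestamp makes this visible
date -u -d '2012-03-07 14:05' '+%Y%m%d-%H%m'   # hour followed by MONTH again
date -u -d '2012-03-07 14:05' '+%Y%m%d-%H%M'   # hour followed by minute
```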
Secondly, if the password isn’t set then the $USER value is never set.
USER=$(eval echo \$${DB}_USER)
Should be outside of the if [ ! -n “${DB}_PW” ] block
Thanks for sharing, it’s a great script that has really helped me.
Regards,
Adam
Hi Adam,
Thanks a bunch for the updates. I’ll try to find time to update the post soon!
Cheers,
Stefan