Linux command line network copy

I'm looking to set up a backup regime whereby I make a daily copy of my SQL db and some supporting files (SMF forum, basically).

This is all working fine, but what I want to do next is copy these files (they're not big - about 100M) off-server.

I don't have a server to copy to at the moment, so am looking for some suggestions from folk who have done this before.

I did try installing and running Dropbox, but it is extremely painful to use.

Any suggestions welcome.

Thanks.

10 Replies

We have more than one Linode, so we copy our Linode A backups to Linode B and our Linode B backups to Linode A. We use rsync to handle this. I download a copy too, but only keep Sunday copies for more than a week.
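If it helps, the pull is nothing fancy; here's a minimal sketch of the cron-driven rsync job, with made-up hostnames and paths (key-based SSH auth between the Linodes is assumed):

#!/bin/sh
# On Linode B, pull Linode A's backups nightly via rsync over ssh.
# Example crontab entry: 30 3 * * * /root/pull-linode-a.sh
rsync -az --delete -e ssh backup@linode-a.example.com:/var/backups/ /var/backups/linode-a/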

You can also get a dedicated external drive for your computer and download the backup each day. I used to do it this way. I'd leave the last week's worth of backups on the server, as well as two-week-, three-week-, one-month-, two-month- and three-month-old copies. It took a little time every weekend to clean things up, but I had plenty of copies on the server ready to access just in case something went wrong (and everything at home).
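The nightly download itself can be a one-liner run from the home machine; a rough sketch with hypothetical paths and filenames:

# Pull last night's backup onto the external drive, stamped with today's date
scp user@mylinode.example.com:/root/backups/backup-latest.tar.gz /media/backupdrive/backup-"$(date +%Y%m%d)".tar.gz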

@Main Street James:

You can also get a dedicated external drive for your computer and download the backup each day. I used to do it this way. I'd leave the last week's worth of backups on the server, as well as two-week-, three-week-, one-month-, two-month- and three-month-old copies. It took a little time every weekend to clean things up, but I had plenty of copies on the server ready to access just in case something went wrong (and everything at home).

I do the same, pull down to home, but I also push them over to Amazon S3.
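The S3 push is a one-liner with Amazon's aws CLI (s3cmd works too); the bucket name below is made up, and the CLI needs credentials configured with "aws configure" first:

# Upload the night's dump to S3
aws s3 cp /root/backups/db-"$(date +%Y%m%d)".sql.gz s3://my-backup-bucket/sql/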

Tarsnap is pretty legit: http://tarsnap.com/

  • Les

rsync, rsync, rsync. That's all the OP needs. It's fast, efficient, smart, free, open source, multi-OS (even Cygwin).

Here's the core rsync script I use on all my developer instances, Linux and Windows; pay attention to whether you need a trailing slash in the paths:

#!/bin/sh

#[Note: This is a FULL system backup script and requires root. If you
# only want to backup your user files then tailor the script.]
# Use "sudo crontab -e" to set up a cron job to run it.
#
#[Note: --delete will remove target files and dirs that no longer exist in
# the source, you may or may not want this sync'ing.]
#
#[Note: The first backup will take a while, to add the files to the
# target, after that it should only take a matter of minutes.]
#
#[Note: rsync must be installed on the source and the target.]
#

BINPRE="rsync -r -t -p -o -g -v -l -D --delete"
SSH="-e ssh -p 22"
BINPOST="<target_user>@<target_host_ip>:/<target_backup_dir>"
EXCLUDES="--exclude=/mnt --exclude=/tmp --exclude=/proc --exclude=/dev "
EXCLUDES=$EXCLUDES"--exclude=/sys --exclude=/var/run --exclude=/srv "
EXCLUDES=$EXCLUDES"--exclude=/media "

date >> /root/start

$BINPRE "$SSH" / $EXCLUDES $BINPOST

date >> /root/stop</target_backup_dir></target_host_ip></target_user> 

This is how I back up my MySQL and Postgres databases:

# mysql
/usr/bin/mysqldump -u root -ppassword --all-databases | gzip > /root/databasebackups-mysql/database_"$(date | tr ' ' '-')".sql.gz

# postgresql - this expects a ~/.pgpass file to be present
/usr/bin/pg_dumpall -h localhost -U postgres | gzip > /root/databasebackups-postgres/database_"$(date | tr ' ' '-')".sql.gz

I do backups of a few servers in different locations to my big file server at home. All the scripting happens on the home file server: it uses SSH (pre-configured for key-based auth) to execute mysqldump on the remote machine (followed by 7zip of the resulting file), then it locally executes rsync to pull down an updated copy of the remote server's filesystem (with a nice exclude list to avoid backing up system files and such), and then it takes a ZFS snapshot. The file server runs zfsonlinux for its main storage array, so it takes nightly snapshots of the backups. Every so often, I manually prune the snapshots by deleting everything but the first backup of the month, for everything except the current and previous month (you can delete snapshots in a range).
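Boiled down, the nightly job on the file server looks something like the sketch below. The hostnames, dataset names and paths are illustrative rather than my actual setup, and gzip stands in for the 7zip step to keep it short:

#!/bin/sh
# Runs on the home file server; key-based SSH auth to the remote is assumed.

# 1. Dump the remote database and stream the compressed output home
ssh backup@remote.example.com \
    "mysqldump -u backup -pPASSWORD --all-databases | gzip" \
    > /tank/backups/remote/db-"$(date +%Y%m%d)".sql.gz

# 2. Pull an updated copy of the remote filesystem (exclude list trimmed here)
rsync -az --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
    backup@remote.example.com:/ /tank/backups/remote/fs/

# 3. Take the nightly ZFS snapshot of the backup dataset
zfs snapshot tank/backups@"$(date +%Y%m%d)"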

@jebblue:

rsync, rsync, rsync. That's all the OP needs.

rsync doesn't do jack on a single server. The OP is asking for suggestions on where to send it, as well.

@glg:

@jebblue:

rsync, rsync, rsync. That's all the OP needs.

rsync doesn't do jack on a single server. The OP is asking for suggestions on where to send it, as well.

Just change BINPOST in my script to a local directory. Call me wild and crazy.

BTW he said "off-server", which sounds like, you know, off server.

@jebblue:

BTW he said "off-server", which sounds like, you know, off server.

Which he followed up with "I don't have a server to copy to at the moment"

@glg:

@jebblue:

BTW he said "off-server", which sounds like, you know, off server.

Which he followed up with "I don't have a server to copy to at the moment"

No problem; until he does, he can use my script with the target set to a local directory to adjust it to his needs.

What I do is have a little low-powered ARM-based server running Debian at home. That box has a cron job that runs rsnapshot. rsnapshot is an rsync-based backup system that uses hardlinks to provide lightweight snapshots. rsync has the advantage of doing delta-based copies: it only copies files that differ from the files in the last backup, and then only the parts of each file that have changed, which makes it relatively fast and efficient (network-wise).
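The relevant bits of rsnapshot.conf look something like this (the values are examples rather than my real config, and note that rsnapshot insists on tabs, not spaces, between fields):

# /etc/rsnapshot.conf excerpt - fields must be TAB-separated
snapshot_root   /srv/snapshots/
# keep 7 dailies and 4 weeklies
retain  daily   7
retain  weekly  4
# rsync runs over ssh; blowfish is cheap on a weak CPU (see below)
ssh_args        -c blowfish-cbc
backup  backup@mylinode.example.com:/etc/       mylinode/
backup  backup@mylinode.example.com:/var/www/   mylinode/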

I use rsnapshot to run a script to back up my MySQL database. The script does a streaming, gzipped backup over the network to a local file, which rsnapshot then takes care of snapshotting. It works well enough, and I chose it because it has less of an impact on the Linode's disk IO, but from a data-transfer perspective it would be more efficient to dump to a file on the server and let rsync calculate and transfer a delta from the last dump the next time rsnapshot runs.
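That's wired in with rsnapshot's backup_script directive; roughly like this (the names are illustrative):

# in rsnapshot.conf:
# backup_script /usr/local/bin/backup_mysql.sh  mysql/

#!/bin/sh
# backup_mysql.sh - rsnapshot runs this in a temp directory and then
# copies whatever the script leaves there into the snapshot
ssh backup@mylinode.example.com \
    "mysqldump -u backup -pPASSWORD --all-databases | gzip" > dump.sql.gz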

Communication runs over ssh (I chose the blowfish cipher because it runs faster on the weak CPU in my little server). The Linode box doesn't need rsnapshot, but it does need rsync, and key-based ssh authentication needs to be set up.

If you want something "cloud" based, I'd look into tarsnap, or something that will do space-efficient backups to something like Amazon S3. I haven't used it, but Duplicity might fill the bill (http://duplicity.nongnu.org).
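For what it's worth, Duplicity's S3 usage looks roughly like this per its docs (untested by me; the bucket name is made up, and your AWS keys go in the environment):

# Encrypted, incremental backup to S3 with duplicity (untested sketch)
export AWS_ACCESS_KEY_ID=<your key id>
export AWS_SECRET_ACCESS_KEY=<your secret key>
duplicity /root/backups s3+http://my-backup-bucket/sql-backups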
