Linux command line network copy
This is all working fine, but what I want to do next is copy these files (they're not big - about 100M) off-server.
I don't have a server to copy to at the moment, so am looking for some suggestions from folk who have done this before.
I did try installing and running dropbox, but it is extremely painful to use.
Any suggestions welcome.
Thanks.
10 Replies
You can also get a dedicated external drive for your computer and download the backup each day. I used to do it this way. I'd leave a copy of the last week worth of backups on the server as well as a two week old copy, three week, one month, two month and three month. It took a little time every weekend to clean things up but I had plenty of copies on the server ready to access just in case something went wrong (and everything at home).
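A rough sketch of that pull-down-and-rotate approach, assuming rsync over ssh to an external drive mounted on the home machine; the host, user, paths, and the three-month cutoff below are placeholders rather than anything from this thread:
#!/bin/sh
# Hypothetical pull-down-and-rotate example; server, user and paths are made up.
SRC="backupuser@my-server:/var/backups/"
DEST="/mnt/external-drive/server-backups"
TODAY="$(date +%Y-%m-%d)"
# Pull today's backup from the server into a dated directory on the external drive.
mkdir -p "$DEST/$TODAY"
rsync -a -e ssh "$SRC" "$DEST/$TODAY/"
# Prune dated copies older than roughly the three-month window described above.
find "$DEST" -mindepth 1 -maxdepth 1 -type d -mtime +93 -exec rm -rf {} +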
@Main Street James:
You can also get a dedicated external drive for your computer and download the backup each day. I used to do it this way. I'd leave a copy of the last week worth of backups on the server as well as a two week old copy, three week, one month, two month and three month. It took a little time every weekend to clean things up but I had plenty of copies on the server ready to access just in case something went wrong (and everything at home).
I do the same, pull down to home, but I also push them over to Amazon S3
- Les
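For the S3 leg, a minimal sketch using the AWS CLI (an assumption on my part - the post doesn't say which tool is used); it expects "aws configure" to have been run already, and the bucket name and paths are made up:
# Hypothetical: archive today's pull and push it to S3 (bucket and paths are placeholders).
TODAY="$(date +%Y-%m-%d)"
tar czf "/tmp/server-backup-$TODAY.tar.gz" "/mnt/external-drive/server-backups/$TODAY"
aws s3 cp "/tmp/server-backup-$TODAY.tar.gz" "s3://my-backup-bucket/$TODAY.tar.gz"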
Here's the core rsync script I use on all my developer instances, Linux and Windows; pay attention to whether you need a trailing slash in the paths:
#!/bin/sh
#[Note: This is a FULL system backup script and requires root. If you
# only want to backup your user files then tailor the script.]
# Use "sudo crontab -e" to set up a cron job to run it.
#
#[Note: --delete will remove target files and dirs that no longer exist in
# the source; you may or may not want that behaviour when syncing.]
#
#[Note: The first backup will take a while to copy everything to the
# target; after that it should only take a matter of minutes.]
#
#[Note: rsync must be installed on the source and the target.]
#
BINPRE="rsync -r -t -p -o -g -v -l -D --delete"
SSH="-e ssh -p 22"
BINPOST="<target_user>@<target_host_ip>:/<target_backup_dir>"
EXCLUDES="--exclude=/mnt --exclude=/tmp --exclude=/proc --exclude=/dev "
EXCLUDES=$EXCLUDES"--exclude=/sys --exclude=/var/run --exclude=/srv "
EXCLUDES=$EXCLUDES"--exclude=/media "
date >> /root/start
$BINPRE "$SSH" / $EXCLUDES $BINPOST
date >> /root/stop
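For completeness, the cron entry mentioned in the comments (added with "sudo crontab -e") could look something like this; the 2 a.m. schedule and script path are just examples:
# m h dom mon dow  command
0 2 * * * /root/bin/rsync-backup.sh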
This is how I back up my MySQL and Postgres databases:
# mysql
/usr/bin/mysqldump -u root -ppassword --all-databases | gzip > /root/databasebackups-mysql/database_"`date | tr \" \" \"-\"`".sql.gz
# postgresql - this expects a ~/.pgpass file to be present
/usr/bin/pg_dumpall -h localhost -U postgres | gzip > /root/databasebackups-postgres/database_"`date | tr \" \" \"-\"`".sql.gz
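For reference, the ~/.pgpass file that pg_dumpall relies on is a colon-separated text file (hostname:port:database:username:password) and has to be chmod 600 or libpq will ignore it; the values below are placeholders:
# ~/.pgpass (must be chmod 600)
localhost:5432:*:postgres:yourpasswordhere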
@jebblue:
rsync, rsync, rsync. That's all the OP needs.
rsync doesn't do jack on a single server. The OP is asking for suggestions on where to send it as well.
@glg:
@jebblue: rsync, rsync, rsync. That's all the OP needs.
rsync doesn't do jack on a single server. The OP is asking for suggestions on where to send it as well.
Just change BINPOST in my script to a local directory. Call me wild and crazy.
BTW he said "off-server", which sounds like, you know, off server.
@jebblue:
BTW he said "off-server", which sounds like, you know, off server.
Which he followed up with "I don't have a server to copy to at the moment"
@glg:
@jebblue: BTW he said "off-server", which sounds like, you know, off server.
Which he followed up with "I don't have a server to copy to at the moment"
No problem; until he does, he can use my script and just set the target to a local directory to suit his needs.
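In that spirit, a local-directory variant of the rsync script above only needs the target changed; the mount point here is just an example, and the "-e ssh" option can be dropped for a local destination:
# Example only: point the target at a local directory instead of user@host:path
BINPOST="/mnt/external-drive/system-backup"
# No remote shell needed when the target is local:
$BINPRE / $EXCLUDES $BINPOST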
I use rsnapshot to run a script that backs up my MySQL database. The script does a streaming, gzipped backup over the network to a local file, which rsnapshot then takes care of snapshotting. It works well enough, and I chose it because it has less of an impact on the Linode's disk IO, but from a data-transfer perspective it would be more efficient to dump to the server and let rsync calculate and transfer a delta from the last dump the next time rsnapshot runs.
Communication runs over ssh (I chose the blowfish cipher because it runs faster on the weak CPU in my little server). The Linode box doesn't need rsnapshot, but it does need rsync, and key-based ssh authentication needs to be set up.
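The streaming dump described here might look roughly like the sketch below, run from the home/backup box (for example via rsnapshot's backup_script directive); host, user, password, and paths are placeholders, key-based ssh auth is assumed, and newer OpenSSH releases have dropped the blowfish cipher, so the -c option may need to be omitted:
#!/bin/sh
# Rough sketch: stream a gzipped MySQL dump from the Linode to a local file.
# Host, user, password, and output path are placeholders.
ssh -c blowfish-cbc backup@my-linode \
    "mysqldump -u root -pPASSWORD --all-databases | gzip" \
    > /var/backups/mysql/all-databases.sql.gz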
If you want something "cloud" based, I'd look into tarsnap, or something that will do space-efficient backups to something like Amazon S3. I haven't used it, but Duplicity might fill the bill.