I'm trying to get my Mint desktop computer's home directory and my Debian media server's movies/shows directories backed up. When this is all said and done I'll be backing everything up to a 5 TB external USB HDD with its own power source, connected to a Raspberry Pi 2 stored away from my house via colocation. But I'd like to get the many TBs of data I have copied before I move the Pi 2 + HDD off site. I've been reading here -> http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html#BackupPC-Introduction on setting up backuppc but my head is starting to hurt. When I read over that material it sounds like they expect the destination of my data to be the server that's running backuppc, but that's not how I'd like to run this. I want my backups to be initiated from my in-house desktop and media server (the sources) using backuppc, to the Pi 2 + HDD (destination). Is this possible?
Also, before I move the Pi 2 off site, what would be the best way to use backuppc to copy my data locally? Can I set up backuppc to copy from my desktop and server to the HDD via the USB connection and then change the backuppc configuration after moving the Pi 2 off site, or am I going to need to do this over my home network via the Pi 2's ethernet port? If anyone has any suggestions and/or corrections please give me a hand; I've never used backuppc and I have no idea what I'm doing. Thanks.

All 4 Replies

  1. Make the disc have a standard Linux file system such as ext3 or ext4.
  2. Have the PI system mount the disc file system in a local directory such as /mnt/backup
  3. Have the PI export the file system (/mnt/backup as above) as an NFS share.
  4. On your host, mount the shared file system as type NFS.
  5. Copy the files there.
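The steps above might look roughly like this (a sketch only; the device name /dev/sda1, the Pi's LAN address 192.168.1.50, and the 192.168.1.0/24 subnet are assumptions you'd adjust for your own network):

```shell
# --- On the Pi ---
# Format the external HDD once (DESTROYS any existing data on it),
# then mount it at /mnt/backup.
sudo mkfs.ext4 /dev/sda1            # assumes the USB HDD appears as /dev/sda1
sudo mkdir -p /mnt/backup
sudo mount /dev/sda1 /mnt/backup

# Export /mnt/backup over NFS to the home LAN and reload the export table.
echo '/mnt/backup 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# --- On the desktop / media server ---
# Mount the Pi's share locally, then copy files into it.
sudo mkdir -p /mnt/pi-backup
sudo mount -t nfs 192.168.1.50:/mnt/backup /mnt/pi-backup
```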

FYI, all current Linux systems support NFS sharing and mounting "out-of-the-box". Assuming your PI, once off-site, is reachable over a VPN, you would use that to connect to the PI so you can mount the backup file system.

Finally, DO NOT USE anything like backuppc - use standard Linux tools such as rsync, cp, dd, etc. I cannot in good conscience advise using tools like the one you are trying to use. FWIW, I have over 30 years of Unix and Linux systems programming and management experience. You are making a common newbie mistake in thinking you need "something special" for such activities when Linux systems ship most everything you will need as standard tools. The rsync tool can be awesome for this sort of backup activity, backing up new/changed files on a scheduled basis. You might want to look into cron and crontabs.

Thanks. backuppc was actually recommended to me by another very experienced Linux administrator after I told him I was planning on using rsync. It's at his house that I'll be keeping the Pi 2 + HDD. So why would backuppc be a bad choice? When I told the other guy I was planning on using rsync, he said that I didn't need rsync because rsync was going to make a copy of my files, and I didn't need copies - incremental backups would be better - so he recommended backuppc. He then told me that backuppc uses rsync, which did confuse me a little. Why not just stick with my original plan and use rsync then?

Sounds like backuppc is a more friendly front-end to rsync. Sorry, but I have never used it and it doesn't appear in any of my CentOS 6.7 repositories (standard + epel + rpmforge).
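Note that plain rsync can do incremental, space-efficient snapshots on its own via its --link-dest option: files unchanged since the previous snapshot are hard-linked rather than copied again, so each snapshot looks like a full tree but costs almost no extra space. A sketch using temporary directories (in practice the snapshot directories would live on the backup disc, e.g. /mnt/backup/snap-YYYY-MM-DD; those names are made up):

```shell
SRC=$(mktemp -d)        # stands in for /home/you/
BACKUPS=$(mktemp -d)    # stands in for /mnt/backup
echo "movie list" > "$SRC/list.txt"

# First snapshot: a full copy.
rsync -a "$SRC/" "$BACKUPS/snap-1/"

# Second snapshot: --link-dest hard-links anything unchanged
# against the previous snapshot instead of re-copying it.
rsync -a --link-dest="$BACKUPS/snap-1" "$SRC/" "$BACKUPS/snap-2/"

# Same inode number in both snapshots => the file was hard-linked,
# not duplicated.
stat -c %i "$BACKUPS/snap-1/list.txt" "$BACKUPS/snap-2/list.txt"
```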

If you want a user-friendly front-end you can use grsync. It exposes the most useful rsync options.
