It seems odd for the author to compare it to something abandoned (and thankfully reborn as borg) and ignore what has happened in two years.
Would love to see similar tests run against borg.
That would beat having to install something like Backblaze on every family member's machine. Cloud backup is great, but it's always better to have a local (LAN) copy plus an off-site copy.
Borg is really nice, and rsync.net is my favourite kind of service: it does one thing very well.
Also, they offer a discount if you use borg or attic (possibly others), since they turn off their ZFS snapshot system and assume your backup software handles versioning itself.
Do you mean backup to the NAS or for backing up the NAS itself to a third location?
Either way, there's no need for an either/or approach. Just do both. I've tried multiple backup applications and many support local and remote backup options as standard. I'm using Borg and Back In Time and with Borg it's just a second cronjob with nearly identical scripts for the off-site backup over SSH. With Back In Time I was using the AWS CLI to push the backups to S3, but I found a better deal with cheaper storage. I have about 200GB data in total but want a 1TB archive available online for older backup sets.
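A minimal sketch of that kind of setup, assuming hypothetical repo paths, hostnames, and passphrase handling (the retention policy is illustrative, not prescriptive):

```shell
# Hypothetical crontab entries: one script, two destinations.
# 0 2 * * *  /usr/local/bin/backup.sh /mnt/backup/borg-repo
# 0 4 * * *  /usr/local/bin/backup.sh ssh://user@offsite.example.com/./borg-repo

# /usr/local/bin/backup.sh
#!/bin/sh
set -e
export BORG_REPO="$1"                            # local path or SSH URL
export BORG_PASSPHRASE="$(cat /root/.borg-pass)" # assumed passphrase file

# Archive name uses borg's built-in placeholders.
borg create --stats --compression lz4 \
    ::'{hostname}-{now:%Y-%m-%d}' \
    /home /etc

# Thin out old archives in the same repository.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```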
Side note - OP's review is helpful but Borg Backup is oddly listed under 'Attic', which it was forked from years ago. Don't bother looking for Attic if you are comparing backup tools available today.
I use borgbackup to back-up my stuff locally with rclone to mirror the borg repository in the cloud (personal Google Drive in my case) and have also experimented with running borgbackup to an offsite Raspberry Pi.
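That pattern can be sketched roughly like this; the repo path and the rclone remote name `gdrive` are made-up placeholders:

```shell
#!/bin/sh
# Back up locally with borg, then mirror the whole repository to cloud storage.
set -e
export BORG_REPO=/srv/borg-repo

borg create --stats ::'{hostname}-{now}' /home

# Mirror the repository directory; --check-first runs all checks before
# any transfers, reducing the chance of syncing a half-updated repo.
rclone sync --check-first /srv/borg-repo gdrive:borg-repo
```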
Yes and no. Duplicity has the "--full-if-older-than" option, so you can do incrementals normally, but if the previous full backup is older than whatever interval you define, it does a full backup instead, without changing the command line. That makes it suitable for running unattended, e.g. from a cron job.
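For example, a single crontab entry covers both cases (the source path and target URL here are placeholders):

```shell
# m h dom mon dow  command
30  3 *   *   *    duplicity --full-if-older-than 1M /home/user sftp://user@backup.example.com/backups
```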
The clever trick is to re-encode the previous most-recent backup as a delta from the current state and store a full backup of the current state, rather than encoding each new backup as a delta from the previous one (where restoring the latest state gets slower and slower the more deltas you have to replay).
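The idea can be illustrated with plain diff/patch on text files; this is a toy sketch of reverse deltas, not how any of these tools actually store data:

```shell
# Toy demonstration of reverse deltas: keep the newest state as a full copy,
# and store older states as deltas computed *backwards* from it.
set -e
dir=$(mktemp -d)
cd "$dir"

printf 'a\nb\n' > state1.txt    # first backup: stored as a full copy
cp state1.txt full.txt

printf 'a\nb\nc\n' > state2.txt # a new state arrives
# Re-encode the old full copy as a reverse delta from the new state,
# then replace the full copy with the new state.
diff -u state2.txt full.txt > rev1.diff || true   # diff exits 1 on difference
cp state2.txt full.txt

# Restoring the older backup: apply the reverse delta to the current full copy.
patch -s -o restored1.txt full.txt rev1.diff
cmp restored1.txt state1.txt && echo "older state restored OK"
```

Restoring the latest backup is always a plain copy; only reaching further back costs extra patch applications.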
Problem solved :)
It lacks the freedom to run the program as you wish, for any purpose (freedom 0).
https://en.wikipedia.org/wiki/The_Free_Software_Definition#T...
https://github.com/gilbertchen/duplicacy/blob/master/LICENSE...
Your software is not "free software" in the "libre" or "free to modify and share" sense.
- No network-based tests, e.g. a typical fast internet connection (say 100/40 or 50/20 Mbit/s) with a few dozen ms of latency to some server or cloud service. This is of course difficult because such tests tend to be hard to reproduce. For a network-based test, not only time is interesting, but total RX/TX traffic as well.
- I'm really surprised at restic's performance. It uses far more CPU than Borg in almost all tests... and Borg is already notoriously inefficient in its CPU usage when looking at object throughput (restic: "fast, efficient"?). I don't mean to bash it; I'm just surprised.
- restic's deduplication performance might hint at Rabin Fingerprints being worse than Buzhash, but there might be other issue(s) leading to this result.
- Besides CPU time, memory (peak) usage would be interesting.
> For instance, file hashes enable users to quickly identify which files in existing backups are changed. They also allow third-party tools to compare files on disks to those in the backups.
To be fair, Borg can calculate a variety of file hashes (MD5, SHA1, SHA2, ...) on the fly with "borg list". There are "borg diff" (to compare two archives) and "borg mount -o versions" as well, though the latter is generally impractical for looking at a large number of archives.
> Again, by not computing the file hash helped improve the performance, but at the risk of possible undetected data corruption.
I can't deduce how the last part follows (", but..."). Care to explain?
Also, recent bup versions allow deleting backups. No encryption IIRC, but you can examine a bup repository with git tools, which is a feature in its own right.
Also, in a lot of environments it can be difficult to sustain more than 100 Mbit/s of write throughput to a remote, off-site system; in that case, halving the stored data can be a much bigger win.
All that said, it's interesting to see that a) duplicity seems slow, and b) very consistent in terms of speed. I wonder if there's some low-hanging fruit for optimization there.
Personally I've had some luck using backupninja[b] in combination with duplicity. It's one of the few Free alternatives that allow the backup system to encrypt "one-way", so that compromising the backup system doesn't immediately give read access to the encrypted backups. It's a bit complicated to set up with separate encrypt-to and signing keys, though :/
[s] Today I would probably recommend ZFS - but I've always wanted to give NILFS2 a real test, especially on solid-state disks: http://nilfs.osdn.jp/en/
[c] https://github.com/Tripwire/tripwire-open-source
https://github.com/integrit/integrit (Speaking of projects that might be fun/useful to redo in a safe language like Rust or Go, this looks like a prime example, btw. On the whole, moving integrity checking into the filesystem, as ZFS does, might be the better option, though.)
Duplicity is classic delta-backup. It always reads all files and calculates a delta to a different version of the file, hence the fairly consistent performance. Performance of deduplicating archivers is more difficult to predict.
Duplicity/deja-dup (a GNOME frontend for duplicity) is pre-installed in most GNOME-based DEs, which makes it convenient for end users, but I found being restricted to a single backup destination too limiting.
By contrast, BiT supported multiple destinations and profiles, meaning I could have one local, one off-site, one "Personal Data" backup, one "System" backup to fall back if an OS update fails, etc. Its configuration options were much more attractive.
I had to juggle a few things around, but it works well. As does obnam, which I use in a couple of other places too.