(I am _not_ affiliated with Backblaze in any way. Just a happy user)
The main thing for me is legacy. In my will it is a lot easier to explain how to get files from the standard backup plan, rather than install a tool from Github etc.
Worth noting it's only available on Windows + OSX.
Presumably since they offer unlimited storage and don't want people installing the cloud backup tool on their NAS.
I used to be a Backblaze Computer Backup customer until recently, when it started throwing errors on my macOS machine. The solution, according to support, was to delete my backup, including its version history. When I complained that this was a problem, they suggested a second account, which they would refund the price of after I made the switch.
Rclone gives me more control, which I like. I also use tmutil to create snapshots of what I back up. Very slick.
They generate a public/private key pair for the user. The client gets the public key and the server gets the private key. During backups the data is encrypted on the client with a symmetric key (which I believe is generated on the client). The encrypted data is sent to the server. The symmetric key is encrypted using the public key and also sent to the server.
On a restore they use the private key on the server to decrypt the encrypted symmetric key, use that key to decrypt the backup data, and then make the decrypted files available in a zip file that the user can download. The download is over HTTPS so is encrypted in transit.
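The scheme described above is standard hybrid encryption: a fresh symmetric key per backup, wrapped with the user's public key. A minimal sketch (illustrative only, not Backblaze's actual implementation; uses the Python `cryptography` package):

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generated per user; the client keeps the public key, the server the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Client side: encrypt the file data with a fresh symmetric key...
file_data = b"backup me"
sym_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(sym_key).encrypt(nonce, file_data, None)

# ...then wrap the symmetric key with the public key.
# Both ciphertext and wrapped key are uploaded to the server.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Restore (server side): unwrap with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
restored = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
```

The point of the split is that the client never needs the private key, so uploads are encrypt-only; the trust question is entirely about who holds the private key at restore time.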
If you don't like the idea of them having such access to your private key they do offer an option to add additional protection [1]:
> The user’s private key which is stored safely in our data center is protected by a password that is highly guarded. But for some users this is not good enough and we allow the user to secure this file with their own password. When this is done it is impossible to access the data without the user’s password. Unfortunately, this also means we can’t help the user if they ever forget this password so we don’t recommend it for most users.
If you do that then when you restore you have to enter that password on their site when requesting the restore, so their server can decrypt the private key.
They give some more detail in their "Security Question Round-up!" [2]:
> The answer shows a weak point in the Backblaze system. As you prepare a restore, you must type in your private passphrase into the restore server. This is not written to disk, but held in RAM and for the period of time of decrypting all your files, and they are then stored in "clear text" on our very highly secured servers until they are ZIPPED up and offered to you to be downloaded. At that moment you can download them (by HTTPS only), then you can "delete the restore zip" which means you close the window of time that your files are available in plain text.
> So to recap: if you never actually prepare a restore, we cannot possibly know what is in your files, but if you prepare a restore (let's say of a few files) then for the couple minutes they are being prepared and downloaded they are in "plain text" on a HIGHLY SECURE system in the Backblaze datacenter. At that moment, if a Backblaze employee were malicious enough and dedicated enough and was watching (which is actually pretty hard, we get thousands of restores every day so it would fly by quickly) they could see your filenames appear on the Linux servers right before they are ZIPPED up into a new bundle. A few minutes of exposure.
> We actually want to improve this to provide a password encrypted ZIP file for download, and then the FINAL improvement is to actually allow you to download the private encryption key, download the encrypted files, and provide the pass phrase in the privacy of your computer. We hope to add this functionality in the future.
[1] https://www.backblaze.com/blog/how-to-make-strong-encryption...
[2] https://help.backblaze.com/hc/en-us/articles/217664798-Secur...
https://www.backblaze.com/cloud-backup/pricing
It's still a pretty good deal though...
Your links are to their "backup" service, which is only for Windows and Mac computers and is limited to their backup app (which many people report has throttling and other issues). It is "unlimited" in the sense that it should only be used for a single computer, which is why they have never supported Linux on it: they believe (probably correctly) that Linux support would mean most people installing it on NAS devices and ruining the business model for everyone else.
Historically that has been the case for all of the backup solutions that offered "unlimited" data for a fixed monthly price; I think Backblaze is the only remaining vendor in the game that does.
https://www.backblaze.com/cloud-storage/pricing
$6/TB/Month. Very manageable egress-fees. If you're using it for cold-backup, hopefully you rarely have to pay them.
Do you have access to your file via mobile access, sharing feature, etc.? https://www.backblaze.com/cloud-backup
With their personal computer backup offering, there’s a web interface that you can use to download individual files from your backups, share files, or even have them mail you a flash drive containing your full backups.
https://blog.sapico.me/posts/how-i-backup-my-servers/
I'm basically backing up to: /backups/type/{day-of-month} and backups/type/latest every day.
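That rotation can be scripted in a few lines. A sketch of the idea (the `remote:` name and paths are placeholders, not my actual config), shelling out to rclone:

```python
import datetime
import subprocess

def targets(kind: str, today: datetime.date) -> list[str]:
    """Destination paths: one slot per day-of-month, plus 'latest'."""
    day = today.strftime("%d")  # "01".."31", so slots recycle monthly
    return [f"remote:backups/{kind}/{day}", f"remote:backups/{kind}/latest"]

def backup(kind: str, src: str) -> None:
    for dest in targets(kind, datetime.date.today()):
        # 'rclone sync' mirrors src to dest (deletes extraneous files at dest).
        subprocess.run(["rclone", "sync", src, dest], check=True)
```

Because day-of-month slots overwrite themselves after ~31 days, you get a rolling month of dailies with no explicit pruning step.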
> Stability: Currently our tools are in beta state and miss regression tests. It is not recommended to use them in production backups, yet.
Actually I'm referring to their drive stats and not their storage pod stats like in this report.
Thank you Backblaze.
Thanks indeed.
    Total Data Stored    Monthly Downloads    Cost
    1TB                  1000TB               $119,712/yr
    1000TB               1000TB               $72,000/yr

What is the cheapest total data stored for a given 1000TB of monthly downloads? Seems to be:

    334TB                1000TB               $24,048/yr

So there are situations where uploading dummy files that pad with zeros will reduce your bill.

---
Edit: The 334 now seems obvious when you consider:
* Egress is $10/TB
* Store is $6/TB
Free egress is 3x the monthly data stored. So a TB of 'free' egress effectively costs $2 (each $6/TB stored buys 3TB of free egress) vs $10/TB otherwise, so you want all of your egress to be free, which means storing 1/3 as much as you egress. Any more and you're wasting money on storage!
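The table's numbers check out under that model. A quick sketch of the cost function, using only the two prices above ($6/TB-month storage, $10/TB egress beyond a free allowance of 3x storage):

```python
def annual_cost(stored_tb: float, egress_tb_per_month: float) -> float:
    storage = stored_tb * 6                                  # $6/TB-month
    billable = max(0.0, egress_tb_per_month - 3 * stored_tb)  # free egress = 3x storage
    return 12 * (storage + billable * 10)                     # $10/TB paid egress

print(annual_cost(1, 1000))     # 119712.0
print(annual_cost(1000, 1000))  # 72000.0
print(annual_cost(334, 1000))   # 24048.0
```

Below 334TB the egress bill falls faster than the storage bill rises ($30/TB saved vs $6/TB spent); above it you're paying for storage that buys no more free egress, so 334TB is the minimum.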
Above, you wrote that egress is 10 USD / TB. What are egress costs for Amazon, Google, and Microsoft?
Pulling the egress costs from the hyperscalers' storage pricing pages:
* Amazon S3 ranges from $50-$90/TB depending on monthly volume.
* Google Cloud Storage ranges from $80-$230/TB depending on monthly volume and where you're transferring data to/from.
* Azure Blob Storage ranges from $40-$181/TB (with the First 100GB/month free) depending on monthly volume and whether you route data via the Microsoft Premium Global Network.
Cloudflare published a blog post a couple of years ago explaining just how much money AWS makes on egress - customers are paying up to 80x Amazon's costs: https://blog.cloudflare.com/aws-egregious-egress
> I wonder if they are willing to strike a deal if you need relatively high egress, but very low storage.
It depends on what "very low" means. We have a capacity-based pricing option, Backblaze B2 Reserve, starting at 20 TB, that includes all egress and transaction fees.
However, we were having 1 outage per month with B2, and in the middle of 2023 we decided to go back to AWS and only use S3 for replication.
We still monitor both services simultaneously (S3 and B2), and every other month we are still having an episode where the latency rises from 15ms to something like 25 seconds for each write/read operation.
If you’re just putting Cloudflare’s regular proxy in front of it, then I _believe_ their standard “web assets only” policy applies.
I had some latency and bandwidth problems when trying to push a lot of data into Backblaze. Not terrible, but slower than Wasabi; it was still usable. Some of the other vendors I tried were not even usable.
Here’s my rough pricing notes:
https://gist.github.com/Manouchehri/733e6235457e60de24fdbb15...
One useful trick is if you’re already using a Cloudflare Worker, you can force cache fetches to Backblaze B2 (or any S3 provider). This is allowed by Cloudflare’s ToS as you’re using a Worker.
My takeaway is that it's still essentially impossible to negotiate anywhere close to a fair price with commodity server vendors until you are buying hundreds of machines, and then only if you are capable of demonstrating that you are willing to design and build them yourself. And yet despite this, it's still cheaper than cloud even paying advertised prices.
Where do regular people buy servers without insane markup if you need say 10-20? Used to be I could actually buy supermicro barebones, but that ended a really long time ago.
IMO, for a small number of servers, the insane markup isn't a huge cost of business so doesn't matter that much.
Tale as old as time.
Before someone uses Google's servers as an example, I would say that strapping together consumer grade components with zip ties and no case isn't what I'd consider a 'server'; rather, a loose collection of parts.
We simply do not witness the same bottlenecks.
However, it does cause you to write some very fault tolerant code.
In the aftermath of the Dot Com Bust, manufacturers became conservative, and lost their ability to dream big. Into this power vacuum stepped the so called Cloud Providers, who in some cases made their own hardware and tools to solve their problems.
Over time manufacturing caught up, the missing tools were written, and the cloud providers went back to solving the main problem nearly none of their customers or suppliers could ever solve: the speed of light (locality).
Sounds like it was money.
Today there are lots of high storage density devices, so no point to build your own if you can get it already engineered and with a warranty from somewhere else.
Yep - we explained it all here: https://www.backblaze.com/blog/the-storage-pod-story-innovat...
For incrementally backing up laptop data, the process needs to run on my laptop anyway, so I may as well just use Arq.
For NAS data that changes frequently enough to desire incrementals, choose Synology Hyperbackup.
For NAS data that doesn't change frequently and can just be sync'd, choose Synology Cloud Sync.
For NAS data that you don't need locally and only need to archive, choose Synology Cloud Sync with one-way-sync. (Less hassle than AWS Glacier.)
As for the cloud provider itself, I think I'll probably go with Wasabi, at $6/TB/month. I heard that Backblaze occasionally has weird gotchas that surprise people, like auto-deleting files from your backups that you delete locally, so I just feel cautious.
Main things giving me pause: minimum block size on Wasabi (I'm sure some of my files are smaller; don't know what sticker shock I'll experience), and unsure why I should consider a command line tool like restic instead of the above.
I tried several different configurations and finally landed on the following.
I have a Synology that I have been running for a couple years now.
- RAID 0 with two 14TB drives, of which 400GB is highly critical
The 400GB of highly critical is further backed up:
- external 1 TB SSD that lives at my sisters house and is updated monthly or so using Sync w encrypted EXT4 formatting
- B2 sync with a one-way, encrypted backup and nightly updates (using Backblaze's encryption, because Synology's encryption requires your device, and if that goes… you are SOL)
I friggin love it. I have restored things several times to test the process and it just works.
It only costs me $2/ mo for Backblaze B2 with that amount of data. Well worth it.
Good luck
> I heard that Backblaze occasionally has weird gotchas that surprise people, like auto-deleting files from your backups that you delete locally, so I just feel cautious.
We have two similar products, and it's easy to mix them up. To clarify:
Backblaze Computer Backup (a different product from Backblaze B2) deletes old versions of files from your backup either 30 days or 1 year (you choose which) after you delete them locally. You also have the option to enable "Forever Version History", which costs $6/TB per month once your files age out of the backup.
Backblaze B2 (S3-compatible cloud object storage) will never delete anything unless you tell it to do so, either via the API or a lifecycle rule.
Dell/Supermicro sales guys: “look even the experts who build their own pods buy our servers”
Or the way in which they were handled during the relocation?
Seems like they changed something.
For example: there should be a sliding window (or a histogram) relating to AFR after deployment for a time frame.
E.g.: AFR between 0-300 days, AFR between 301-600 days, AFR between 601-900 days, etc.
Otherwise we're looking at failures historically for the entire period, which might hide a spate of failures that consistently occur 3 years in, giving a relatively unfair advantage in the numbers to newer drives.
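A toy sketch of that bucketed view (the record layout here is made up for illustration, not Backblaze's actual schema), computing an annualized failure rate per age bucket from per-drive-day observations:

```python
from collections import defaultdict

def afr_by_age(observations, bucket_days=300):
    """observations: iterable of (age_days, failed) drive-day records,
    where failed is 1 on the day a drive fails and 0 otherwise."""
    days = defaultdict(int)
    fails = defaultdict(int)
    for age, failed in observations:
        b = age // bucket_days  # bucket 0 = 0-299 days, bucket 1 = 300-599, ...
        days[b] += 1
        fails[b] += failed
    # Annualize: failures per drive-day, scaled to a year, as a percentage.
    return {b: fails[b] / days[b] * 365 * 100 for b in days}
```

E.g., 3,650 drive-days in the first bucket with one failure comes out to a 10% AFR for the 0-300 day window, independent of what happens to older drives.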
That said, I really do love these and I hope they continue.
The HDD AFR is under 2% for the first 3.5 years. Due to factory testing, 'the bathtub now has no left side, which makes it hard to take a bath'. https://i.gzn.jp/img/2021/12/21/black-blaze-how-long-disk-dr...
How do you mean? CrashPlan Essential (200GB) is $3/mo. CrashPlan Professional (unlimited) is $88/year.
I was using it at the time and had to find an alternative, which I never really did. I tried restic and similar programs, but they were all too slow finding what changed. The key to Crashplan was that it monitored the filesystem for changes, so it didn't have to do an expensive traversal.
In the end I settled for just using the daily full-image backups I was already taking and forgoing the <15min backup points I had with Crashplan.
Been missing it a few times, but overall surviving without.
[1]: https://www.theverge.com/2017/8/22/16184430/crashplan-home-s...
Alternatively, you could get a handful of flash thumb drives and back up onto all of them, syncing with a different one each week/month/whenever so that they're all getting continuously refreshed and you'd have snapshots in case you accidentally delete something and that deletion gets synced to the most-recent drive.
Also...holy shit, I haven't kept up on how precipitously prices for storage have dropped. Just saw a 10 TB external drive for only $150!
My bad!!
https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-... contains the necessary data, but it seems old.
I don't know why they're buying COTS server gear, because their whole premise was the economy of scale of having their own pods made. I don't understand why they don't go to Quanta or Foxconn to build whatever they need; Dell is really just a marketing front, like CDW, that relies heavily on third-party contract designers and major component manufacturers, and then only does final assembly itself.
...is this a joke?
That's Q3 2023. Assuming the quarters are calendar quarters, Q4 2023 ended literally 3 days ago. They're not going to have a report out right away.