And the technical limits are really good now: https://docs.hetzner.com/storage/object-storage/overview#lim...
Maybe xx operations/s per TB of storage would have been better, since that way large buckets would be able to scale.
Though it's still too small if a HEAD request counts as an operation, since we need to check whether files have been updated.
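For context, the freshness check meant here is the kind of thing below: a sketch assuming an S3-compatible client in the boto3 style (the endpoint URL, bucket name, and cached-metadata shape are illustrative assumptions, not anything Hetzner documents). The point is that even this metadata-only HEAD consumes one request against the rate limit.

```python
# Sketch: decide whether a cached copy of an object is stale using only
# a HEAD request (head_object), which returns metadata such as ETag and
# Last-Modified without transferring the body. The HEAD itself still
# counts as one operation against the per-bucket rate limit.

def is_stale(cached: dict, head: dict) -> bool:
    """Compare cached object metadata against a fresh HEAD response.

    `cached` and `head` are dicts carrying an "ETag" key, as returned
    by an S3-compatible head_object call (boto3-style response shape).
    """
    return cached.get("ETag") != head.get("ETag")

# Hypothetical usage against an S3-compatible endpoint (boto3 assumed):
#
#   import boto3
#   s3 = boto3.client(
#       "s3",
#       endpoint_url="https://fsn1.your-objectstorage.com",  # assumed
#   )
#   head = s3.head_object(Bucket="my-bucket", Key="data.bin")
#   if is_stale(cached_metadata, head):
#       ...  # object changed upstream; re-download it
```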
What kind of redundancy does Object Storage offer? How resilient is the product to failures?
"Each uploaded data object is divided into chunks, which are distributed across multiple servers within the cluster. Using erasure coding, the system can ensure data integrity even if up to three storage servers fail.
As always, each of our products can only be one part of a secure backup strategy."
What location is data stored in?
"The entire data of a Bucket is stored in the location you selected. In that location, the data is stored in a single data center. The power and network infrastructure is designed with built-in redundancy for high availability."
So it is a single data center, and they don't claim a specific durability percentage.