No, I'm talking about mitigating system failure (be it a dead disk, PHY, entire server, single PDU, single rack, or an entire feed). I didn't even go down to the level of individual object durability yet (or web access to those objects, consistent access control, and the like).
There is some overlap in the sense that having redundant copies makes it possible to replace a bad copy with a good copy if a checksum mismatches on one of them. That also allows for bringing the copy count back in spec if a single copy goes missing (regardless of the type of failure).
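To make that overlap concrete, here's a minimal Python sketch of checksum-based repair: keep a known-good checksum next to the copies, and overwrite any mismatching copy with one that still verifies. The function names and layout are made up for illustration, not any particular system's API:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def repair(copies: list[bytes], expected: str) -> list[bytes]:
    """Replace any copy whose checksum mismatches with a known-good copy."""
    good = next((c for c in copies if sha256(c) == expected), None)
    if good is None:
        # No copy verifies: the bits are gone, nothing left to repair from.
        raise RuntimeError("no good copy left: data is lost")
    return [c if sha256(c) == expected else good for c in copies]

original = b"important data"
checksum = sha256(original)
copies = [original, b"imp0rtant data", original]  # middle copy bit-flipped
repaired = repair(copies, checksum)
assert all(sha256(c) == checksum for c in repaired)
```

Note the failure mode at the end of `repair`: if every copy mismatches, redundancy can detect the loss but not undo it, which is exactly why the copy count has to be brought back in spec promptly.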
But no matter what methods are used, data is data and needs to be stored somewhere. If the bits constituting that data go missing, the data is gone. To prevent that, you need to make sure those bits exist in more than one place. The specific places come with differences in cost, mitigations, and effort:
- Two copies on the same disk mitigate bit flips in one copy but not disk failure
- Two copies on two disks on the same HBA mitigate bit flips and disk failure but not HBA failure
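The failure-domain reasoning behind that list can be sketched as a tiny placement chooser: tag each disk with its position in the hierarchy and pick the copy set that shares the fewest domains. The inventory and domain names here are invented purely for illustration:

```python
from itertools import combinations

# Hypothetical inventory: each disk tagged with its failure-domain path.
disks = [
    {"id": "d1", "rack": "r1", "server": "s1", "hba": "h1"},
    {"id": "d2", "rack": "r1", "server": "s1", "hba": "h1"},
    {"id": "d3", "rack": "r1", "server": "s2", "hba": "h2"},
    {"id": "d4", "rack": "r2", "server": "s3", "hba": "h3"},
]

def shared_domains(a, b):
    """Count failure domains two disks have in common (higher = worse)."""
    return sum(a[k] == b[k] for k in ("rack", "server", "hba"))

def pick_placement(disks, n=2):
    """Choose the n-disk set whose members share the fewest failure domains."""
    return min(
        combinations(disks, n),
        key=lambda combo: sum(shared_domains(a, b) for a, b in combinations(combo, 2)),
    )

placement = pick_placement(disks, 2)
print([d["id"] for d in placement])  # → ['d1', 'd4'] (different racks)
```

Each rung of the list corresponds to one more key in `shared_domains`; surviving a rack or feed failure is just a matter of whether the chosen copies still differ at that level.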
The list goes on until you reach the requirement posted at the top of this Ask HN, which states that One Zone-IA is used. That means multiple zones for zone-outage mitigation are not needed. Effectively, the racks are allowed to be placed in the same datacenter, so that datacenter being unavailable or destroyed means the data is unavailable (temporarily or permanently), which appears to be the accepted risk.
But within that zone (or datacenter) you would still need all the other mitigations offered by the durable object storage S3 provides (unless specified differently; if we just make up new requirements, we can make it very cheap, accept total system failure on a single bit flip, and be done with it).