We don’t touch the data at all.
> The data is all images and videos, and no queries need to be performed on the data.
OK, so this definitely helps a bit.
At 10PB my assumption is that storage cost is the main thing to optimize for. Compression is an obvious must, but images and video are already compressed formats, so generic compression won't buy you much there.
Aggregation where you can is probably a good idea - e.g. if a user has a photo album, it might make sense to store all of those photos together, compressed, and then keep an index of photo ID to album. Deduplication is another thing to architect for - if a user has the same photo across N albums, make sure it's only stored once. Depending on which patterns you expect to be more or less common, this will change your approach a lot.
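A rough sketch of the dedup idea, using content-addressed keys: hash the bytes, store under the hash, and let albums reference keys. The dicts here are hypothetical stand-ins for the blob store and the album index, just to show the shape of it.

```python
import hashlib

# In-memory stand-ins for the real blob store (S3) and album index (a DB).
blob_store: dict[str, bytes] = {}
album_index: dict[str, list[str]] = {}

def content_key(data: bytes) -> str:
    # Content-addressed key: identical photos hash to the same key,
    # so the bytes get stored once no matter how many albums reference them.
    return hashlib.sha256(data).hexdigest()

def add_photo(album: str, data: bytes) -> str:
    key = content_key(data)
    blob_store.setdefault(key, data)  # write only if not already present
    album_index.setdefault(album, []).append(key)
    return key

photo = b"\x89PNG...fake image bytes"
add_photo("vacation", photo)
add_photo("favorites", photo)  # same photo, second album
assert len(blob_store) == 1    # stored exactly once
```

The same scheme falls out naturally in S3: use the hash as the object key, and a HEAD (or an index lookup, per below) tells you whether an upload is even needed.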
Of course, you want to avoid mutating objects in S3 too - so an external index to track all of this will be important. You don't want to have to pull from S3 just to determine that your data was never there. You can also store object metadata and query that first.
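To make the "don't pull from S3 just to learn it was never there" point concrete, here's a minimal sketch of checking an external index first. The index contents and key names are made up for illustration; in practice this would be a DB table or cache keyed by object key.

```python
# Hypothetical external index: object key -> metadata (size, content type, ...).
# Consulting it first avoids paying an S3 request just to discover an object
# doesn't exist, or to answer a question the metadata already answers.
index: dict[str, dict] = {
    "photos/abc123": {"bytes": 4_200_000, "type": "image/jpeg"},
}

def fetch(key: str):
    meta = index.get(key)
    if meta is None:
        return None  # never touches S3 at all
    # ...only now would you issue the actual S3 GET for the bytes...
    return meta
```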
AFAIK S3 is the cheapest way to store a huge amount of data other than running your own custom hardware. I don't think you're at that scale yet.
Latency is probably an easy one. Just don't use Glacier, basically, or use it sparingly for data that's extremely rare to access, e.g. if you back up disabled user accounts in case they come back or something like that.
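The "use it sparingly" case is usually handled with a lifecycle rule rather than application code. A sketch, as the dict shape boto3 expects - the prefix and rule ID are made up for illustration:

```python
# Lifecycle rule (boto3-style dict): leave hot photo data in Standard, but
# shift rarely-touched data - e.g. disabled-account backups - to Glacier
# Deep Archive after 30 days. Prefix and ID are hypothetical.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-disabled-accounts",
            "Filter": {"Prefix": "disabled-accounts/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}
# Applied with something like:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="photos-bucket", LifecycleConfiguration=lifecycle)
```

That way the slow storage class only ever holds data you've explicitly decided you can afford hours of retrieval latency on.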
I think this'll be less of a "do we use S3 or XYZ" and more of a "how do we organize our data so that we can compress as much of it together, deduplicate as much of it as possible, and access the least bytes necessary".