From my own personal experience doing distributed archiving (unrelated to Archive.org), Filecoin/IPFS's UX isn't quite there yet. They still don't let you serve data to the network from a normal filesystem: you have to let their system ingest all of your files, so you either end up double-storing data or give in to everything being stored as inscrutable binary blobs.
That's why I still haven't integrated ArchiveBox with IPFS/Filecoin/Storj -- let my data live in a normal filesystem, dammit!
I don't understand this part. What data would you have to give them? Why can't it just live next to your stuff on your OS' filesystem?
For Filecoin, if you want fast access, you do need to keep a second hot plaintext copy alongside the sealed Filecoin copy. But that works for IA's backup case, because the hot copy would be served from the archive's existing infrastructure (and/or a distributed IPFS hot cache) -- you'd just use Filecoin for the proven safe backup.
The project to back up IA to Filecoin is still ongoing. The IA dashboard that shows its current state is (perhaps predictably) down at the moment, but the project crossed the 1 PiB line last year[2], and they've been optimising the onboarding flow recently.
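To make the double-storage point above concrete, here is roughly what ingestion looks like with Kubo (the main IPFS implementation). This is a sketch, not a recipe: the filename is a placeholder, it assumes an initialised Kubo repo, and the filestore feature is still marked experimental.

```shell
# Default behaviour: `ipfs add` chunks the file and copies the resulting
# blocks into Kubo's own blockstore (under ~/.ipfs/blocks), so the data
# now exists twice on disk -- once as your file, once as
# content-addressed binary blobs.
ipfs add snapshot.warc.gz

# Experimental alternative: the filestore references the original file
# in place instead of copying its blocks, avoiding the duplicate...
ipfs config --json Experimental.FilestoreEnabled true
ipfs add --nocopy snapshot.warc.gz
# ...but the file must then stay at the same path, unmodified, or the
# blocks it backs become unretrievable.
```

The `--nocopy` path is the closest thing today to "serve from a normal filesystem", and its constraints (immutable, pinned-in-place files) are exactly the UX gap being complained about above.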
[1] https://docs.ipfs.tech/reference/kubo/cli/#ipfs-add
[2] https://blog.archive.org/2023/10/20/celebrating-1-petabyte-o...
(Disclosure: I work at the Filecoin Foundation/Filecoin Foundation for the Decentralized Web, which partners with the Archive on this project, as well as supporting other Internet Archive backup projects.)