Of course it isn't perfect: if Filebase goes bankrupt, you need to scramble to get another set of pins for your data. But I think it provides a much better system overall.
You can also imagine shifting data between providers over time, much like S3 storage tiers: maybe one provider offers cheaper storage but more expensive access, so you pin cold data there and remove it from your hot-data provider. This would be completely transparent to users.
It also opens up interesting opportunities for bring-your-own storage. Users can just give me the CID and use it with my app/service, and it will be around as long as they keep it alive. Or they can pay me and I can pin it for them.
I think this is a big feature that I hadn't recognized until you phrased it this way. It enables a strategy of guaranteeing *at least one* pin of your data if it's important, while on the application side you can forget about the details of the data storage.
I doubt IPFS is fast enough today to justify it over other strategies, but it makes me think of a kind of cache where, if a CID can't be found, you rehydrate some slow cold-storage data and push it back out to the network.
Who decides your data is important?
What happens to the so-called guarantees when your data suddenly becomes unimportant?
Simple example: you have 1 TB of data stored in a us-east-1 AWS S3 bucket. Virginia gets hit by a natural disaster, and all of us-east-1 goes offline. You no longer have access to your data. The only "easy button" way you could have prevented this scenario is if you had set up backup or bucket-replication policies prior to the event. So you decide to follow the AWS docs. Disaster recovery 101 typically calls for a minimum of 3 copies of your data. You are now paying $70.66 for storage (~$23.55 per TB x 3 regions) plus $40.96 for inter-region bandwidth ($20.48 per TB x 2 replicated buckets), for a grand total of $111.62 per month just to store 1 TB of data. And these costs don't even cover you or your customers downloading any of that data.
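To sanity-check that arithmetic, here's the same calculation spelled out (using the per-GB list prices implied by the figures above; AWS pricing varies by region and tier, so treat these as illustrative constants):

```python
GB_PER_TB = 1024
s3_storage_per_gb = 0.023    # S3 Standard, first-50-TB tier (illustrative)
inter_region_per_gb = 0.02   # cross-region replication transfer (illustrative)
copies = 3                   # disaster-recovery rule of three

storage = s3_storage_per_gb * GB_PER_TB * copies           # ~$70.66
transfer = inter_region_per_gb * GB_PER_TB * (copies - 1)  # $40.96, 2 replicas
total = storage + transfer

print(round(total, 2))  # 111.62
```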
At Filebase, we store your data on the Sia network. A 10-of-30 erasure-coding profile is used, effectively creating a 3x redundancy overhead. Datacenters and clouds have been using EC for a while too. The critical difference here is that each shard is stored on a different server, spread geographically across the world. An entire portion of the internet can go down, and Filebase can simply fetch the shards it needs from other parts of the world. With a 10-of-30 profile, we only need 10 of those shards to fully reconstruct the file, so we can suffer 20 servers or hosts going offline all at the same time.
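For intuition about why any 10 of 30 shards suffice, here's a toy k-of-n erasure code built from polynomial interpolation over a prime field (this is the idea behind Reed-Solomon codes; it is not Sia's actual implementation, and real systems work over GF(2^8) with optimized matrix arithmetic):

```python
# Toy 10-of-30 erasure code: the 10 data chunks are values of a unique
# degree-9 polynomial at x=1..10; shards are its values at x=1..30.
# Any 10 shards pin down the polynomial, so any 10 recover the data.
import random

P = 2**31 - 1  # a prime; all arithmetic is done mod P

def lagrange_eval(points, t):
    """Evaluate the unique degree-(len(points)-1) polynomial through
    `points` at x=t, via Lagrange interpolation mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (t - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is den's modular inverse (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(chunks, n=30):
    """Systematic encoding: shard x for x<=len(chunks) is the chunk itself;
    the rest are extra evaluations of the same polynomial."""
    base = [(i + 1, c) for i, c in enumerate(chunks)]
    return [(x, lagrange_eval(base, x)) for x in range(1, n + 1)]

def decode(shards, k=10):
    """Recover the original k chunks from any k surviving shards."""
    pts = shards[:k]
    return [lagrange_eval(pts, x) for x in range(1, k + 1)]

data = [random.randrange(P) for _ in range(10)]
shards = encode(data)        # 30 shards, spread across 30 hosts
random.shuffle(shards)
survivors = shards[:10]      # pretend 20 hosts went offline at once
assert decode(survivors) == data
```

The storage overhead is n/k = 30/10 = 3x, matching the "3x redundancy" figure above, while tolerating the loss of any n - k = 20 shards.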
If Filebase's infrastructure goes down, our health checks fail, and our DNS-level load balancer seamlessly redirects your HTTP requests to another Filebase edge location. And since all Filebase edge locations talk to the same decentralized Sia network, your data is magically available, and you may not even realize you were redirected. That same 1 TB of data stored on Filebase will cost you $5.99, a ~95% cost savings.