In a lot of cases, if your use case is simple, 30% compatibility is enough, since you're mostly doing the common GET and PUT operations. But all it takes is one unsupported call in your desired workflow to rule out that vendor as an option until that API is supported. My main beef is that there's usually no easy way to tell, unless the vendor provides a support matrix that you can map to the operations you need, like this: https://docs.storj.io/dcs/api/s3/s3-compatibility. If no such matrix is provided on both the client side and the server side, you have no easy way to tell whether it will even work short of wiring things up and actually executing the code.
One thing to note is that it's quite unrealistic for vendors to strive for 100% compatibility - there's some AWS-specific stuff in the API that will basically never be relevant to anyone other than AWS. But the current Wild West situation could stand some significant improvement.
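To make the "map a support matrix to the operations you need" idea concrete, here is a minimal sketch. The operation names are real S3 API operations, but the vendor names and their supported sets are made up for illustration; a real check would be populated from each vendor's published compatibility page.

```python
# Sketch: compare the S3 operations an app's workflow needs against a
# vendor's published compatibility matrix. The vendors and their
# supported sets below are illustrative placeholders, not real data.

REQUIRED_OPS = {"GetObject", "PutObject", "ListObjectsV2", "CopyObject"}

VENDOR_MATRIX = {
    "vendor-a": {"GetObject", "PutObject", "ListObjectsV2"},
    "vendor-b": {"GetObject", "PutObject", "ListObjectsV2", "CopyObject"},
}

def missing_ops(vendor: str) -> set:
    """Return the operations the workflow needs that the vendor lacks."""
    return REQUIRED_OPS - VENDOR_MATRIX[vendor]

for vendor in sorted(VENDOR_MATRIX):
    gaps = missing_ops(vendor)
    print(vendor, "OK" if not gaps else "missing: " + ", ".join(sorted(gaps)))
```

One unsupported operation ("vendor-a" lacking CopyObject here) is enough to rule a vendor out for that workflow, which is exactly why a published matrix on both sides matters.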
We are transparent about our level of compatibility - https://supabase.com/docs/guides/storage/s3/compatibility
The most often used APIs are covered, but if something is missing, let me know!
The announcement is that Supabase now supports (user) —s3 protocol—> (Supabase)
Above you say that (Supabase) —Supabase S3 Driver—> (AWS S3)
Are you further saying that (Supabase) —Supabase S3 Driver—> (any S3-compatible storage provider)? If so, how does the user configure that?
It seems more likely that you mean that, for any application with the architecture (user) —s3 protocol—> (any S3-compatible storage provider), Supabase can now be swapped in as the storage target.
(user) -> s3 protocol -> (Supabase) -> (AWS S3)
You could fork (or contribute) a database driver for any S3-compatible backend of your choice.
(user) -> s3 protocol -> (pbronez-base) -> (GCP Cloud Storage)