Why are more and more devs trying to use s3 as a general purpose DB?
Working on a system right now where the architects made this mistake. It has insanely poor performance (high latency) and lacks any proper ACID compliance. I've now been asked to "make it faster", and the answer is to switch back to an actual DBMS.
> Top tier SaaS services like S3 are able to deliver amazing simplicity, reliability, durability, scalability, and low price because their technologies are structurally oriented to deliver those things. Serving customers over large resource pools provides unparalleled efficiency and reliability at scale
In terms of simplicity, using S3 as a DB is anything but simple. Sure, the CRUD API is simple, but there are a bunch of gotchas: transactionality, partial updates, multi-document queries, consistency across the whole set of documents. You have to rebuild a whole DBMS on top of S3 itself, or use Redshift, to get these things.
In terms of scalability there are limits: 3,500 write requests (and 5,500 reads) per second per key prefix.
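The per-prefix limit is also something you end up engineering around: since it applies per key prefix, fanning keys out across hashed prefixes multiplies your ceiling, and throttled requests (503 SlowDown) need backoff. A minimal sketch, where the shard count and backoff parameters are illustrative assumptions, not recommendations:

```python
import hashlib

def sharded_key(key: str, shards: int = 16) -> str:
    """Prefix the key with a stable hash shard so writes spread across
    `shards` prefixes, each with its own S3 request-rate limit."""
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % shards
    return f"{shard:02x}/{key}"

def backoff_delays(retries: int = 5, base: float = 0.1, cap: float = 5.0):
    """Capped exponential backoff schedule for retrying 503 SlowDown
    responses (real code should also add jitter)."""
    return [min(cap, base * 2 ** attempt) for attempt in range(retries)]
```

Which is exactly the point: you're already writing infrastructure code a DBMS would have handled for you.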
It's also not actually cheaper than a DBMS once you have a lot of data.
The missing link is really a serverless Postgres (which many are working on, but nothing has impressed me so far).
Used AWS Aurora Serverless v2 (MySQL) and it worked pretty well, actually. Never used the Postgres version, but it's now available.
Sure, there is the benefit of being able to dump your cold data in cheaply and read it flexibly, but... the dev UX is just a PITA.
No, you get a DBMS and only change the storage underneath. You can't use S3 for appending to WAL though.
All those can be fixed besides the latency for a cold GET from S3 and appending WAL to S3.
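For intuition on why WAL appends don't map onto S3: objects are immutable blobs, so "append" means reading the current log and rewriting the whole thing. A toy model (`FakeS3` is a stand-in for the API, not a real client) shows the write amplification:

```python
class FakeS3:
    """Toy in-memory model of S3: whole-object PUT/GET only, no append."""
    def __init__(self):
        self.objects = {}
        self.bytes_written = 0

    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data
        self.bytes_written += len(data)

    def get(self, key: str) -> bytes:
        return self.objects.get(key, b"")

def wal_append(s3: FakeS3, key: str, record: bytes) -> None:
    # No append API: read the whole log, concatenate, rewrite it all.
    s3.put(key, s3.get(key) + record)
```

Appending N records this way writes O(N²) bytes in total, which is why log-structured designs on S3 batch records into new objects and compact later rather than appending in place.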
I think what you mean is what we have implemented: a side-channel DBMS which holds a copy, which you use for the transactionality. It's a terrible approach; I would not do this at all. You don't get any benefit from using S3 here.
This is not to say you can't use S3 to pull large blob storage off the DB and reference it in the DB; I'm talking about using S3 as the entire DB.
https://learn.microsoft.com/en-us/azure/azure-sql/database/h...
The only downside we can spot so far is a 100 MB/s throttle on txn log writes, in place to satisfy replication requirements. Beyond this, it is indistinguishable from a SQL Server Express instance on a local dev machine. You lose some of the other on-prem features when you go managed, but most new applications don't need that stuff anymore. The message broker pieces are the only ones I miss, but there are a lot of other managed options for that, and you can still DIY with a simple Messages table and 3-4 stored procedures.
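The Messages-table idea is roughly: an INSERT to enqueue, and a claim-the-oldest-unclaimed-row statement to dequeue. A sketch using SQLite to stay self-contained (the real thing would be T-SQL stored procedures; the table and column names here are made up):

```python
import sqlite3

def make_queue(db: sqlite3.Connection) -> None:
    db.execute("""CREATE TABLE IF NOT EXISTS Messages (
        Id INTEGER PRIMARY KEY AUTOINCREMENT,
        Body TEXT NOT NULL,
        ClaimedAt TEXT NULL)""")

def enqueue(db: sqlite3.Connection, body: str) -> None:
    db.execute("INSERT INTO Messages (Body) VALUES (?)", (body,))
    db.commit()

def dequeue(db: sqlite3.Connection):
    # Claim the oldest unclaimed message. A SQL Server version would do
    # this inside a stored procedure with UPDLOCK/READPAST hints so
    # concurrent consumers don't grab the same row.
    row = db.execute("""SELECT Id, Body FROM Messages
                        WHERE ClaimedAt IS NULL
                        ORDER BY Id LIMIT 1""").fetchone()
    if row is None:
        return None
    db.execute("UPDATE Messages SET ClaimedAt = datetime('now') WHERE Id = ?",
               (row[0],))
    db.commit()
    return row[1]
```

Not a replacement for a real broker at scale, but for modest workloads it rides on the same durability and backup story as the rest of the DB.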
On the read & reporting side, I see no downsides. You mostly get OLAP+OLTP in the same abstraction without much headache. If someone really wanted to go absolutely bananas with reporting queries, data mining, AI crap, whatever, you could give them their own geo replica in a completely different region of the planet. Just make sure they aren't doing any non-queries and everything should be fine.
For large binary data, we rely on external blob storage and URLs. The txn log write limit shouldn't feel like much of a restriction if you are using the right tools for each part of the job. Think about how many blob URLs you could fit within 100 megabytes. If you make assumptions about URL structure, you can increase this by a factor of 2-3x.
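Back-of-envelope on that, assuming an average blob URL of 200 bytes and a 64-byte key when the URL structure is fixed (both numbers are guesses):

```python
LOG_BUDGET = 100 * 1024 * 1024   # the 100 MB/s txn-log throttle, in bytes
FULL_URL = 200                   # assumed average blob URL length, bytes
KEY_ONLY = 64                    # assumed key/GUID size if URL structure is known

urls_per_second = LOG_BUDGET // FULL_URL   # ~524k URL writes per second
keys_per_second = LOG_BUDGET // KEY_ONLY   # ~1.6M key writes per second
```

That's roughly a 3x gain from storing only the variable suffix, in line with the 2-3x factor above.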
lambda is very reliable, more so than ec2. for serious systems, use it to manage servers.
s3 and dynamo are the same thing with different settings. yes dynamo also adds a kitchen sink, but the only feature you should use is CAS.
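The CAS point in concrete terms: conditional writes give you an optimistic read-modify-write loop. A toy in-memory model of the idea (not the real DynamoDB API; real code would put a condition on a version attribute in the write request):

```python
class VersionedStore:
    """Toy model of a conditional write: a put succeeds only if the
    caller saw the current version of the item."""
    def __init__(self):
        self.data = {}  # key -> (version, value)

    def get(self, key):
        return self.data.get(key, (0, None))

    def put_if(self, key, expected_version, value) -> bool:
        version, _ = self.data.get(key, (0, None))
        if version != expected_version:
            return False  # lost the race; caller must re-read and retry
        self.data[key] = (version + 1, value)
        return True

def cas_update(store, key, fn, max_retries=10) -> bool:
    """Optimistic read-modify-write loop on top of conditional puts."""
    for _ in range(max_retries):
        version, value = store.get(key)
        if store.put_if(key, version, fn(value)):
            return True
    return False
```

That loop is the whole coordination story: no locks, no lease server, just retries on contention.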
s3 is x10 cheaper for storage, x10 more expensive per request, x10 slower per request. dynamo is the opposite.
many great system designs can run properly serverless, ie without any ec2 or ec2-spot. they are simpler. serious systems require you to understand what lambda/s3/dynamo give you and what they do not.
more systems can be designed by adding ec2 and/or ec2-spot. the same understanding is required.
s3/dynamo are equidistant from every point within that region. there is no cross az bandwidth cost. there is no bottleneck. there is no contention. a lot of cool designs fall out of this.
lambda can burst to thousands of cpus in a second, for a second.
ec2-spot boots in 30s, and often has very large nvme physically attached.
there’s nothing fundamentally wrong with misusing all these tools and building inefficient systems. the builders will probably do better on their next system. if the owners wanted it done better initially, they could have hired more expensive builders.
- "serverless" is a really bad name for these systems. As is often commented, some variation of "somebody-elses-server" would be better.
- Cost wasn't mentioned in the article, but the cost of renting databases and search-indices is still really high, even though these technologies are no longer the new hotness.
> Top tier SaaS services like S3 are able to deliver amazing simplicity, reliability, durability, scalability, and low price because their technologies are structurally oriented to deliver those things
The whole point is to explore how to drive cost down (due to higher efficiencies in sharing resources) by having a multi-tenant approach from the get-go.
The point isn't to compare to running your own database on your own rack. It's to compare single tenant RDS with something like an S3-like multi tenant postgres for example.
Also, the cost of renting databases has nothing to do with newness. Databases have evolved a lot in the past two decades. They do all sorts of stuff under the hood, which is why they can now often consume massive amounts of RAM. That doesn't come for free.
or when I mean "without a public IPv4" (as in, without a server), like offline-first (which is still /eventually connected to a server/ smh)
FNaaS != "without servers" so much as "reducing the amount of the pizza shop you run yourself" like that diagram of homemade pizza at home, frozen pizza boxes, papa johns, pizza delivery, pizza shop owner...
It's kind of an overloaded term, like how cryptography gets abused by web3 scams. There's an entire row of books on computer science at a college library; one bookcase is on security, one shelf in that case is on cryptography, most of those books are RSA, two are EC, and one is on esoteric cryptography like dining cryptographers, blind signing, ZKP, etc. And I worry the new books will be aaaaaaaaall blockchain and DAO instead of... Tor v3 papers, or Veilid, or Vuvuzela, or Cwtch/fuzzytokens.
What does it mean though?
Very strange world that transferring data between servers is 2x as fast as reading from a PCI bus.
The other thing I've been careful about is to ensure that the backend is fully no-code. As soon as you allow the user to execute custom code on your backend, it opens up security risks with multi-tenancy. The risk doesn't fully go away when you containerize as vulnerabilities have been encountered in the past in Docker which allow escaping the sandbox.
In my case, although the user can customize back end behavior, they can only do so in a highly constrained way using well defined parameters, not custom code. It saves a lot of effort not having to write a VM or restrict each container to a single host.
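To make the "well defined parameters" idea concrete, a hypothetical sketch: customization arrives as data that is validated against a whitelist and then interpreted, never executed, so there is no code to sandbox (the field and operator names here are invented):

```python
ALLOWED_FIELDS = {"status", "created_at", "owner"}  # hypothetical schema
ALLOWED_OPS = {"eq", "lt", "gt"}

def validate_rule(rule: dict) -> bool:
    """Accept only well-formed {field, op, value} rules drawn from the
    whitelist; anything else is rejected before it touches the backend."""
    return (
        set(rule) == {"field", "op", "value"}
        and rule["field"] in ALLOWED_FIELDS
        and rule["op"] in ALLOWED_OPS
    )

def apply_rule(rule: dict, row: dict) -> bool:
    """Interpret a validated rule against one row of data."""
    ops = {"eq": lambda a, b: a == b,
           "lt": lambda a, b: a < b,
           "gt": lambda a, b: a > b}
    return ops[rule["op"]](row[rule["field"]], rule["value"])
```

The expressiveness ceiling is the trade-off: users can only combine what you've anticipated, but the attack surface is a dictionary, not an interpreter.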
Cloudflare does this. https://developers.cloudflare.com/durable-objects/api/transa...