No need for microservices or even synced read replicas (unless you are making a game). No load balancers. Just crank the RAM and CPU up to TB levels for heavy real-world apps (99% of you won't ever run into this issue).
Seriously, it's so easy to create scalable backend services with PostgREST, RPCs, triggers, V8, even queues now, all in Postgres. You don't even need cloud. Even a mildly RAM'd VPS will do for most apps.
Got rid of Redis, Kubernetes, RabbitMQ, and a bunch of SaaS tools. I just do everything in Postgres and scale vertically.
One server. No serverless. No microservices or load balancers. It's sooo easy.
There are definitely ways to make HA work, especially if you run your own hardware, but the point is that you'll need (at least) a 2nd server to take over the load of the primary one that died.
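A minimal sketch of what "a 2nd server takes over" can look like from the client side: try hosts in order until one accepts a connection. The host names and the `connect` function here are hypothetical stand-ins for a real driver call (production setups would more likely use something like Patroni, or libpq's `target_session_attrs=read-write` with multiple hosts):

```python
# Client-side failover sketch. `connect` is a hypothetical stand-in for a
# real database driver call; here it simulates a dead primary.

HOSTS = ["db-primary.internal", "db-standby.internal"]

def connect(host):
    """Stand-in for a real driver connect; raises when the host is down."""
    if host == "db-primary.internal":      # simulate the primary dying
        raise ConnectionError(f"{host} is unreachable")
    return {"host": host}                  # pretend connection object

def connect_with_failover(hosts):
    last_err = None
    for host in hosts:
        try:
            return connect(host)
        except ConnectionError as err:
            last_err = err                 # remember why, try the next host
    raise last_err

conn = connect_with_failover(HOSTS)
print(conn["host"])  # the standby took over
```

Note this only covers the client reconnecting; actually promoting the standby to accept writes is the part that needs the extra server and tooling.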
How do you manage transactions with PostgREST? Is there a way to do it inside it? Or does it need to be in a good old endpoint/microservice? I can’t find anything in their documentation about complex business logic beyond CRUD operations.
I also find it very difficult to trust your advice when you’re telling folks to stick Postgres on a VPS - for almost any real organization using a managed database will pay for itself many times over, especially at the start.
But my point is you won't ever hit this type of traffic. You don't even need Kafka to handle streams of logs from a fleet of generators out in the field. Postgres just works.
In general, the problem with modern backend architectural thinking is that it treats the database as some unreliable bottleneck, but that is an old-fashioned belief.
The vast majority of HN users and startups are never going to service anywhere near 1 million transactions per second. A medium-sized VPS from DigitalOcean running Postgres can handle their actual load just fine.
Postgres is very fast and efficient; you don't need to build your architecture around problems you won't ever hit, prepaying a premium for a <0.1% peak that happens so infrequently (unless you are a bank and get fined for missing it).
What happens if this server dies?
Most would probably get two servers with a simple failover strategy. On the other hand, servers rarely die. At the scale of a datacenter it happens often, but if you only have, say, six of them, buy server-grade hardware, and replace it every 3-5 years, chances are you won't experience any hardware issues.
Maybe add another for good measure... if the business (or its insurance) requires extreme HA, then absolutely have multiple failovers.
My point is you aren't doing extreme orchestration or routing.
Throw in Cloudflare DDoS protection too.
Making read replicas also accept writes is needed for such cases, but once you have more than one place to write to, you run into edge cases and debugging complexity.
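The classic edge case here is the lost update. A toy illustration, with plain dicts standing in for two writable nodes and a naive last-write-wins "replication" step (no real replication protocol involved):

```python
# Toy lost-update illustration: two independent writers, last-write-wins.
# Plain dicts stand in for two database nodes that both accept writes.

node_a = {"counter": 5}
node_b = {"counter": 5}   # replica that also accepts writes

# Both nodes concurrently read the current value...
read_a = node_a["counter"]
read_b = node_b["counter"]

# ...and each applies its own increment locally.
node_a["counter"] = read_a + 1   # 6
node_b["counter"] = read_b + 1   # 6

# Naive last-write-wins "replication": node_b's value overwrites node_a's.
node_a["counter"] = node_b["counter"]

print(node_a["counter"])  # 6, not 7 -- one increment was silently lost
```

With a single writer, the second increment would have read 6 and written 7; with two writers and no coordination, one update silently disappears, and bugs like this only surface under concurrency.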
not sure what CPU at TB levels means but hope your wallet scales better vertically
While I was mostly living out of the "High Availability, Load Balancing, and Replication" chapter, I couldn't help but poke around and found the docs to be excellent in general. Highly recommend checking them out.
To be fair, it could be because I'm frustrated with Django's design decisions having come from Rails.
Having learned Django a few years ago, I still carry a deep loathing for its polymorphism (generic relations[0]) and model validation (full_clean[1]).
You know what - they're design decisions...
[0] https://docs.djangoproject.com/en/5.1/ref/contrib/contenttyp...
[1] https://docs.djangoproject.com/en/5.1/ref/models/instances/#...
1. Try to make most things static-ish reads and cache generic stuff, e.g. most things became non-user-specific HTML that got cached as SSI via nginx or memcached.
2. Move dynamic content to services that load after the static-ish main content, e.g. comments, likes, etc. would be loaded via JSON after the page load.
3. Move write operations to microservices, i.e. creating new content and changes to the DB become mostly deferrable background operations.
I guess the strategy was to do as much serving of content as possible without dipping into the Ruby layer, except for writes or infrequent reads that would update the cache.
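The read path in steps 1-2 boils down to cache-aside: serve cached output and only fall through to the slow app layer on a miss. A minimal sketch, with a plain dict standing in for memcached and a hypothetical `render_page` standing in for the Ruby layer:

```python
# Cache-aside sketch of the read path described above. A dict stands in
# for memcached; render_page is a hypothetical stand-in for the app layer.

cache = {}
render_calls = 0

def render_page(path):
    """Expensive app-layer render -- the thing to hit as rarely as possible."""
    global render_calls
    render_calls += 1
    return f"<html>content for {path}</html>"

def get_page(path):
    if path not in cache:             # miss: do the expensive work once
        cache[path] = render_page(path)
    return cache[path]                # hit: no app layer involved

get_page("/about")   # miss -> renders
get_page("/about")   # hit  -> served from cache
print(render_calls)  # 1
```

A real deployment adds expiry/invalidation (the hard part), but the shape is the same: the app layer only runs on misses and writes.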
[1] High Performance PostgreSQL for Rails: Reliable, Scalable, Maintainable Database Applications by Andrew Atkinson:
https://pragprog.com/titles/aapsql/high-performance-postgres...
[1] https://github.com/lfittl/activerecord-clean-db-structure/is...
I hope you’re able to check out the podcast episode and enjoy it. Thanks for weighing in within the gem comments, and for commenting here on this connection. :)
Their baseline was 800 instances of the Rails app...lol.
I'm not going to name names (you've heard of them) ... but this is a company that had to invent an entirely new and novel deployment process in order to get new code onto the massive beast of Rails servers within a finite amount of time.
Rails these days isn't at the top of the benchmark charts, but it's not that slow either.
We're running 270k+ RPM no sweat, and our spend for those containers is maybe 1/100th what you're quoting there.
The idea that Rails can't handle high load is just such bloody nonsense.
You can build an abomination with any framework, if you try.
Can you deploy something to vercel that supports a million concurrent users for less than $250K/month? What about using AWS Lambdas? Go microservices running in K8s?
I think your infra bills are going to skyrocket no matter your software stack if you're serving 1 million+ concurrent users.
You might be surprised at how far you can go with the KISS approach on modern hardware and open-source tools.
IDE smartness (auto complete, refactoring), compile error instead of runtime, clear APIs...
Kotlin is a pretty nice "Type-safe Ruby" to me nowadays.
Microsoft acquired companies with web and mobile platforms with varied backgrounds at a high rate. I got the sense that the tech stack—at least when it was based on open source—was evaluated for ongoing maintenance and evolution on a case by case basis. There was a cloud migration to Azure and encouragement to adopt Surface laptops and VS Code, but the leadership advocated for continuing development in the stack as feature development was ongoing, and the team was small.
Besides hosted commercial versions, I was happy to see Microsoft supporting community/open source PostgreSQL so much and they continue to do so.
https://en.wikipedia.org/wiki/List_of_mergers_and_acquisitio...
https://techcommunity.microsoft.com/t5/azure-database-for-po...
/s
I am not sure why we are boiling the ocean for the sake of a language like Ruby and a framework like Rails. I love both to death, but Amazon's approach is much better (or it used to be): you can't build a service for 10,000+ users in anything other than C++ or Java (probably Rust as well nowadays).
For millions of users the CPU cost difference probably justifies the rewrite cost.
So if you have a lot of money, you can start implementing your own web framework in C from scratch. It will be the perfect framework for your own product, and you can put 50 dev/sec/ops/* people on the team to make sure both the framework and the product code get written.
But some (probably most) products are started by 1-2 people trying to find product-market fit, or whatever the name is for solving a real problem for paying users as fast as they can. And then defer scaling until money is coming in.
This is similar, because this is about a startup/product bought by Microsoft, not built in-house.
For fast delivery of stable, secure web apps, Rails is a perfect fit. I am not saying it's the only one, but there are not many frameworks offering the stability and batteries-included approach that let a small team deliver a web app that can scale to product-market fit while staying small.
My go-to example is graphql-ruby, which really chokes when serializing complex object graphs (or did; it's been a while since I've had to use it). It is pretty easy to consume 100s of ms purely on compute to serialize a complex GraphQL response.
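A rough, language-agnostic illustration of why this cost class exists (a synthetic nested graph fed to `json.dumps`; this is a stand-in, not graphql-ruby itself): serialization work grows with the number of nodes in the response, so a deep, wide graph burns CPU even with the database entirely out of the picture.

```python
import json

def make_graph(depth, fanout):
    """Synthetic nested object graph, a stand-in for a GraphQL response."""
    if depth == 0:
        return {"id": 0, "children": []}
    return {"id": depth,
            "children": [make_graph(depth - 1, fanout) for _ in range(fanout)]}

small = make_graph(depth=3, fanout=3)   # dozens of nodes
big = make_graph(depth=7, fanout=5)     # ~100k nodes

# Serialization cost scales with output size: the big graph produces orders
# of magnitude more bytes, and proportionally more pure-CPU work.
print(len(json.dumps(small)), len(json.dumps(big)))
```

The exact milliseconds depend on the serializer, but the shape of the problem is the same: node count in the graph, not query time, dominates.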
> it is not about the language
Sure, how about these people?
https://thenewstack.io/which-programming-languages-use-the-l...