One feat still amazes me - my AWS Lambda React webapp example (a Todo app with server rendering), deployed in 2019, still works today, and I have not changed or redeployed it since.
Maintenance hell is a symptom of the frameworks in use, not Lambda. If you’re using stable tools, you can go years, do a 5-minute runtime update, and then go years again.
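For a stable Lambda, that "5-minute runtime update" is often a single CLI call like the one below (function name is hypothetical; pick whatever runtime version AWS currently supports):

```shell
# Bump a Lambda's deprecated runtime in place -- no code changes,
# no redeploy of the artifact itself.
aws lambda update-function-configuration \
  --function-name my-todo-api \
  --runtime nodejs20.x
```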
Debugging and deployment speed are a stronger argument - the best balance I’ve found is to mandate modular design and local development, so developers can work locally except when they are troubleshooting environmental interactions. Framework complexity also matters here: if you’re deploying a heavyweight app with AWS SAM, your deployments will be at least 1-2 orders of magnitude slower than for a simple Lambda.
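A minimal sketch of what I mean by modular design (names and the in-memory "store" are made up for illustration): keep the business logic in plain functions you can run on a laptop, and make the Lambda handler a thin adapter over them.

```python
import json


def complete_todo(todos: dict, todo_id: str) -> dict:
    """Pure business logic -- no AWS types, trivially testable locally."""
    if todo_id not in todos:
        raise KeyError(todo_id)
    todos[todo_id]["done"] = True
    return todos[todo_id]


def handler(event, context):
    """Thin Lambda adapter: unpack the event, call the logic, pack a response."""
    body = json.loads(event["body"])
    todos = {"1": {"title": "ship it", "done": False}}  # stand-in for a real store
    try:
        updated = complete_todo(todos, body["id"])
        return {"statusCode": 200, "body": json.dumps(updated)}
    except KeyError:
        return {"statusCode": 404, "body": "not found"}
```

With that split, the only time you need to touch the cloud is when the bug is in the adapter or the environment, not the logic.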
And it works pretty well. A lot of internal and external JSON APIs are a good fit.
I found the article OK, but it would have been a much nicer read without all the emotional stuff, sticking to: serverless is pushed as a panacea but actually isn't a good fit because...
That said, I do prefer the development model of containers: run them anywhere. It has its own limitations, though. For example, he claims to be able to keep state within a container. That doesn't make sense if you want to scale out. Persistence is a problem - you can't run DBs on ECS Fargate, for example.
And the worst aspect of running containers: in bigger orgs the standard will probably be K8s. And that no longer has anything to do with the simplicity of containers as mentioned in the article.
Containers don't carry state. They can be made to do so if you wish but there's nothing inherent to them that does it.
> in bigger orgs the standard will probably be K8s. And that has nothing to do any more with the simplicity of containers as mentioned in the article.
K8s can be very simple if there's a platform team ensuring great developer experience. I appreciate that this is likely rarer than you or I would like though.
(IMO, if it can get a fly.io-like command-line experience, it will thrive more.)
Overall there was a lot of trial and error and no clear way to test everything locally in the container.
In particular, the whole blue/green CI/CD approach makes it both trickier to know what's going on and harder to trigger an outage.
Thus, while the complexity complaints are largely on point, to label it all a "scam" is too strident.
because "just use a container" is more or less the solution that "second-gen" serverless platforms all offer.
but also this:
> A container keeps state (just add a Docker volume!)
is just absolutely terrible as general-purpose advice.
like, yes, it can be annoying that "serverless" platforms are generally stateless, which forces you to move your state into a hosted database of some kind.
but...that reflects the underlying reality of the cloud platform. the servers that your "serverless" code runs on are generally themselves stateless.
if you were to blindly follow this "just add a Docker volume" approach to managing state, you're in for a rude awakening the moment you want to scale your "serverless" code from 1 server to 2 servers.
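to make that rude awakening concrete, here's a toy sketch (not from the article): two "instances" each holding their own counter, like two containers each mounting their own volume. a round-robin load balancer in front of them sees inconsistent state immediately.

```python
class Instance:
    """Stand-in for one container with its own local state (its own volume)."""

    def __init__(self):
        self.visits = 0  # state lives inside this instance only

    def handle(self):
        self.visits += 1
        return self.visits


a, b = Instance(), Instance()

# "load balancer" round-robins 4 requests across the two instances;
# each instance only ever sees (and counts) its own share of the traffic.
results = [inst.handle() for inst in (a, b, a, b)]
print(results)
```

the counts diverge from request one. the fix is exactly what "serverless" forces on you up front: move the state into something shared.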
and unsurprisingly, the article glosses over this a few paragraphs farther down:
> You can deploy one container, or ten. Scale them. Monitor them. Keep state. Run background jobs. Use your own database.
run 10 containers...each with their own Docker volume? use my own database? what. this is blogspam nonsense.