The YAML config, the IAM permissions, generating requests and responses: it's all so painful to get anything done.
Admittedly I speak as a software engineer primarily building CRUD apps, where frameworks have had decades of development. I can see use cases for event-driven applications where serverless may make life easier. But for CRUD, currently no chance.
I do see its usefulness, but it's not a one-size-fits-all tool.
Ya, this is the majority of us.
https://github.com/dbos-inc/dbos-transact-py
You can build your software as a monolith and deploy it with one command to your own cloud or our cloud.
And gateway+lambda is a near perfect "dumb crud" app, though it is not without a startup cost.
If you need RDS, for example, you need the VPC configuration.
It only looks good on the outside.
I find FaaS best when needing to automate something completely unrelated to what goes in to serving the customer. Stuff like bots to report CWV metrics from DataDog to a Slack channel.
But that means you're not starting with serverless, and it's your pivot from the original monolith.
There is no unification of APIs - every provider has their own bespoke abstractions, typically requiring heavy integration into further vendor-specific services - more so if you are to leverage their USPs.
Testing and reproducing locally is usually a pipe dream (or takes significantly more effort than the production deploy). Migrating to a different cloud usually requires significant rewrites and sometimes rearchitecting.
I want my code to be written and executed on my machine in a way that can at least kind of resemble the production execution environment. I want a binary that gets run and some IO access, most of the time.
If I have a VM or a "serverless"-style compute like Fargate on ECS, I can define an entry point, some environment variables, and we're off to the races in a very similar environment to my local (thank god for containers and VMs).
The _idea_ of lambda and the similar services is awesome to me, but it's just such a PITA to deal with as a developer, at least in my experience.
[0] https://github.com/aws/apprunner-roadmap/issues/9 (amusingly the issue OP posts on HN)
feedback: why make a clear distinction between "magic node" and "BYON"? Two new concepts to learn, when I feel a value prop for some users would be to not have to think about these distinctions? (Just talking about wording and communication here - can you get the value prop across with less reification?)
The second one is even more important though: Time. How many of my systems are guaranteed to stop after 15 minutes or less? Web Servers wouldn't like that, anything stateful doesn't like that. Even compute-heavy tasks where you might like elastic scaling fall down if they take longer than 15 minutes sometimes, and then you've paid for those 15 minutes at a steep premium and have to restart the task.
Serverless only makes sense if I can do it for minor markup compared to owning a server and if I can set time limits that make sense. Neither of these are true in the current landscape.
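One partial workaround for hard time limits like the 15-minute one mentioned above is checkpointing progress externally, so a killed task resumes instead of restarting from scratch. A minimal sketch (all names illustrative; in a real Lambda the checkpoint would live in S3 or a database, since local disk is ephemeral):

```python
import json
import os
import time

def process_items(items, deadline_s, checkpoint_path):
    """Process items until a deadline, persisting a cursor so the next
    invocation resumes where this one stopped, instead of paying for the
    whole run again after a timeout kill."""
    start = time.monotonic()
    cursor = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            cursor = json.load(f)["cursor"]
    results = []
    for i in range(cursor, len(items)):
        if time.monotonic() - start > deadline_s:
            break  # out of time; the checkpoint lets the next run resume here
        results.append(items[i] * 2)  # stand-in for the real work
        with open(checkpoint_path, "w") as f:
            json.dump({"cursor": i + 1}, f)
    return results
```

This doesn't remove the restart cost, but it caps it at one item's worth of work rather than the whole task.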
Every time I try to solve a problem with anything other than Rails, I run into endless issues and headaches that would have been already solved if I just. used. Rails.
This is a good way to get a nonfunctioning product. Or at least a lot of frustrating meetings.
The thing is, "serverless" still has a server in it, it's just one that you don't own or control and instead lease by short timeslices. Mostly that doesn't matter, but the costs are really there.
from the complexity outside and within
by dreaming of abstractions so perfect that no one will need to be good
but the latency that is will shadow
the "simple" that pretends to be
Now I'm stuck in the reality of backlash
And cashed-in chips.
seem very confusing to grug
If it were 5% worse performance for 7% more cost, most people would probably not bat an eye.
When it can be 50% less performant for 200% more cost, eventually someone is going to say: sure there's overhead to owning that but I will be at a major competitive advantage if I can do it even just OK. And it turns out for most businesses doing it at the scale they need isn't all that difficult to get right.
I have a friend who recently made a stupid bug in his processing pipeline on AWS. He woke up one morning and saw a message from his bank that his CC was over the limit.
When we have a bug, our Nagios sends us a message that responses are more than 150% of average and we do a rollback.
So it's not only the risk of vendor lock-in, but also the surprising bills, policy changes, updates, and other third-party risks you end up with.
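A latency threshold check like the Nagios one described above is only a few lines; a sketch (the 150% threshold matches the comment, everything else is illustrative):

```python
def should_roll_back(recent_ms, baseline_ms, threshold=1.5):
    """Return True when the average of recent response times exceeds
    150% of the historical baseline - the signal used to trigger a
    rollback instead of a runaway bill."""
    if not recent_ms or baseline_ms <= 0:
        return False
    avg_recent = sum(recent_ms) / len(recent_ms)
    return avg_recent > threshold * baseline_ms
```

The point of the comment stands either way: with your own alerting, the failure mode is a page and a rollback; with pay-per-invocation, the failure mode is a credit card limit.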
Serverless also means a lot of things. We also serve static content from an S3 bucket and CloudFront. Nothing else to manage once it's set up.
The flip side of serverless is you really do need to think of state yourself. The J2EE code was rock solid in reliability, including recovering from almost every kind of issue you can imagine over a decade (database, connectivity, software crashes).
I think the answer is in the first sentence. A lot of engineers make products that don't touch the internet. This concept is lost in the noise quite a bit.
Maybe it's an expired certificate, but the guy who knew how that stuff works built a 12,000-line shell script that uses awk, perl, and a cert library that for some reason requires both CMake and Autotools. It also requires GCC 4.6.4 because nobody can figure out how to turn off warnings-as-errors.
Not all abstractions and simplified services are good in all situations.
I really wish the 3.5mm headphone jack wasn't being replaced with just Bluetooth. 3.5mm has worked for me 100% of the time. Bluetooth is regularly a piece of garbage.
If you remember the olden days of web development, when CGI was king, the web applications didn't listen. Instead, a separate web server (e.g. Apache) called upon the application as a subprocess and communicated with it using system primitives like environment variables, stdin, and stdout.
Over time, we started moving away from the CGI model, moving the server process into the application itself. While often a fronting web server (e.g. nginx) would proxy the requests to the application, technically the application was able to stand on its own.
Serverless returns to the old CGI model, although not necessarily using the CGI protocol anymore, removing the server from the application. The application is less a server, hence the name.
Of course I think that would be a DRM nightmare for big-corps. One could stream items another person's system owns for "free" without dealing with companies.
They aren't your servers and the server processes running your code are only active temporarily, usually with auto-scaling features.
This must be why they say programming is dead once you turn 40: You can no longer communicate with the young-ins.
Back when the hype was virtualization (so probably mid-2000s, before my time at the company), a big project was run to try moving to virtual machines. After the research phase had been deemed a success, they were gearing up to go into production and put in a hardware order. This was supposedly rejected by an executive who complained that they should not need physical servers if they do everything on virtual machines now.
Serverless functions are quite interesting for certain use cases, but those are mostly additions to the main application. I'd hesitate to build a typical web application with mostly CRUD around serverless, it's just more complexity I don't need. But for handling jobs that are potentially resource intensive or that come in bursts something like Lambda would be a good fit.
In fact, the solution still used serverless afaik: https://www.youtube.com/watch?v=BcMm0aaqnnI
(take that u/UltraSane! https://news.ycombinator.com/item?id=42506205)
It likely could have been solved by serverless too, by using local storage and having the pipeline condensed into a single action...
FD: I'm not a fan of serverless for production anything.
Amazon runs both and serverless is a billing model. Many serverless runtimes consume containers.
Serverless, like microservices, is a design philosophy.
Serverless is all about outsourcing the infrastructure for scaling a microservice. How you design the service itself, or the system it's a part of, can vary widely.
There are definitely design constraints to going serverless, but I'd argue those are largely just the constraints of going with microservices rather than a monolith.
> Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%
> The move from a distributed microservices architecture to a monolith application helped achieve higher scale, resilience, and reduce costs.
We don't make buildings from Lego blocks. We do use modular components in buildings (ceramic bricks, steel beams, etc), but they are cemented or welded together into a monolithic whole.
In my opinion, "serverless" (which, as others have noted, is a horrible misnomer since the server still exists; true "serverless" software would run code exclusively on the client, like desktop software of old) suffers from the same issue as "remote procedure call" style distributed software from back when that was the fashion: introducing the network in place of a simple synchronous in-process call also introduces several extra failure modes.
* https://einaregilsson.com/serverless-15-percent-slower-and-e...
I worked for a company once whose entire product was built on hundreds of lambdas, it was a nightmare.
Any time AWS is mentioned I know it's going to be some huge expensive setup.
The initial development learning curve was higher, but the end result is a system that runs with high reliability in customer clouds that doesn't require customers (or us) to manage servers. There are also benefits for data sovereignty and compliance from running in the customer's cloud account.
But another upside to serverless is the flexibility we've found when orchestrating the components. Deploying certain modules in specific configurations has been more manageable for us with this serverless / cloud-native architecture vs. past projects with EC2s and other servers.
The only downside that we see is possible vendor lock-in, but having worked across the major cloud providers, I don't think it's an impossible task to eventually offer Azure and GCP versions of our platform.
First, most companies thought they needed to do containers before serverless, and frankly it took them a while to get good at that.
Second, the programming model was crap. It's really hard to debug across a bunch of function calls that are completely separate apps. It's just a lot of work, and it made you want to go monolith and containers.
Third, the spin-up time was a deal-killer, in that most people would not let that go and wanted something always running so there was no latency. Sure, workloads exist that do not require that, but they are niche, and serverless stayed niche.
This isn't really saying anything about serverless though. The issue here is not with serverless but that Lambda wants you to break up your server into multiple smaller functions. Google Cloud Run[0] lets you simply upload a Dockerfile and it will run it for you and deal with scaling (including scaling to zero).
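The Cloud Run contract really is that minimal: the container just has to serve HTTP on the port given in the `PORT` environment variable, with no function-level decomposition required. A stdlib-only Python sketch of such a container entry point:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Ordinary request handling - the app is still one server process,
        # not a collection of per-route cloud functions.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

def make_server():
    # Cloud Run injects the port to listen on via the PORT env var;
    # 8080 is its documented default.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), Handler)

if __name__ == "__main__":
    make_server().serve_forever()
```

Because the same process runs unmodified on a laptop, the local-testing complaints elsewhere in this thread largely disappear with this model.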
There's always part of the stack (at least on the kinds of problems I work on) that is CPU intense. That part makes sense to have elastic scaling.
But there's also a ton of the stack that is stateful and / or requires long buildup to start from scratch. That part will always be a server. It's just much easier.
For my own projects, I prefer Lambda. It comes with zero sysadmin, costs zero to start and maintain, and can more easily scale to infinity than a backend server. It's not without costs, but most of the backend services I use can easily work in Lambda or a traditional server (FastAPI, axum), so it is a two-way door.
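The "two-way door" works when the business logic stays framework-agnostic, with only a thin adapter per deployment target. A sketch (the event shape assumes an API Gateway proxy integration; all names are illustrative):

```python
import json

def create_greeting(name: str) -> dict:
    # Framework-agnostic business logic: identical whether the caller
    # is a Lambda event, a FastAPI route, or a CLI.
    return {"message": f"hello, {name}"}

def lambda_handler(event, context):
    # Thin AWS Lambda adapter for an API Gateway proxy event.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps(create_greeting(name)),
    }
```

Swapping targets then means rewriting only the few-line adapter, not the logic behind it.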
- Serverless can get very expensive
- DevEx is less than stellar; you can't run a debugger
- Vendor lock-in
- You might be forced to update when they stop supporting older runtime versions
Without tooling to run a serverless service locally, this is always going to be a sticking point. This is fine for hobby projects where you can push to prod in order to test (which is what I've ended up doing) but if you want stronger safeguards, it's a real problem.
If you are lucky enough to have your company go viral and receive a sudden spike in traffic, will the rest of the infrastructure tolerate it? Will your database accept hundreds of concurrent connections, or will it tip over?
If you need to engineer and test the auto-scaling capabilities of the rest of your infrastructure, is there value in not needing to think about the scaling of your APIs?
These may sound snarky, but they are real questions -- I used to administer ~300k CPU cores, so I have some trouble imagining the use-cases for serverless
When I read the copy trying to peddle them, to me it sounds quite like someone saying "Heey.. PSST! Wanna borrow $5000 in cash? I can give it to you right now! Don't worry about 'interest rates', we'll get back to that LATER".
When I build stuff out of 'serverless', I find it rather difficult to figure out what my operation costs are going to be; I usually learn later through perusing the monthly bills.
I think the main two things I have appreciated(?) are
(1) that I can publish/update functions on cloud in 1-5 seconds, whereas the older web services I also use, often take 30-120 SECONDS(not minutes, sorry) to 'flip around' for each publish.
(2) I can publish/deploy relatively small units of code with 'functions'. But again, that is not quite accurate. It's more like 'I need to include less boilerplate' with some code to deploy it.. Because to do anything relevant, I more or less need to publish the same amount of domain/business-logic code as I used to with the older technologies.
Apart from that, I mostly see downsides:
- my 'function/serverless' code becomes very tied to the vendor.
- testing in a local dev setup is either impossible or convoluted, so I usually end up doing my dev work directly against cloud instances.
I'm probably just an old dog, but I much prefer a dev environment that allows me to work on my own laptop, even if the TCP/IP cable is yanked.
Oh yeah, and spit on you too, YAML :-) They found a curse to match the abomination of "coding in XML languages" of 20 years ago..
My current employer standardized on serverless and for many things it works well enough, but from my standpoint it's just more expensive.
What started was the rebranding from distributed systems.
We have had Sun RPC (The network is the computer, a slogan now owned by Cloudflare), DCE, CORBA, DCOM, RMI, Jini, .NET Remoting, SOAP, XML-RPC, JSON-RPC,....
Client-Server, N-Tier Architecture, SOA, WebServices,...
Apparently the new trend is Microservices-based, API-first, Cloud-native, and Headless with SaaS products, aka MACH.
Not really. Microservices normally refers to humans and how they work together, or, perhaps, don't work together. Microservices is the same service model found in the macro economy but applied to the micro economy of a single business, which was a novel idea at least to the general public, hence the name.
Due to Conway's law, the product ends up being a distributed system more often than not, but that is only a side effect. Theoretically you could have microservices without distributed systems, and we do see some instances of services found in the macro economy that are not offered as a distributed computing products, not to mention that services even predate the network. But distributed is definitely the way most things are going.
I work for a community project that is building a decentralized orchestration mechanism that is intended, among other things, to democratize access to serverless open compute while also being cloudless.
Take a look at the project at https://nunet.io to know more about it!
It doesn't solve all problems (it isn't a CRUD framework) - but it does make the developer experience much better as compared to Amplify.
Separately, when you factor in data privacy, your decision making tree will certainly change quickly.