It's nothing that new, though: Firebase (part of GCP) and Netlify have had this for years. Cloudflare just has the right combination of marketing, reputation, pricing, and tech to make headlines with it again.
Do you have some examples? I always get excited by this serverless stuff, but the use cases often turn out to be quite limited when you think about it (or maybe I'm thinking about it wrong), especially once you take vendor lock-in into consideration.
- Contact form
- Comments system
- Authentication system
- Image resizing (thumbnails)
- GraphQL gateway (working on this now)
- Bypassing CORS
- Generating a random number on the server
I also built full apps on Cloudflare Workers (and am doing so now).
Only one caveat: they are heavily invested in marketing, but their tooling is real cr*p. They are not investing in the Rust integration, or in the more regular tools/integrations you are used to.
You drop a file in the functions dir of your repository, edit the firebase.json config file to say that function X maps to path Y (so you can have api.site.com, or site.com/api, or whatever you want as a redirect to your function), and run firebase deploy.
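For reference, that mapping lives in the `rewrites` section of `firebase.json`. A minimal sketch (the function and path names here are made up):

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      { "source": "/api/**", "function": "myApiFunction" }
    ]
  }
}
```

Any request under /api/** on your hosting domain then gets routed to the named Cloud Function.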
Drop a file into a directory that is all or mostly static Markdown, HTML, or CSS; those files can contain markup that calls other modules, and you can drop in code files as well.
The existence of a file is enough for its index code to be run (if it has any), and any URL-set and page components it makes available can be used by other pages, or served as they are. A single file can offer many URLs, usually anchored at the name of the file, but not necessarily. The reason a file can affect URLs outside its own name is that it's efficient to index all files in a directory tree, even on every HTTP request, provided that's done by file-change detection and the results are efficiently cached.
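As a rough illustration of that indexing idea (this is not the system's actual code; the layout and naming conventions are assumed), each file in the tree contributes a URL anchored at its own name:

```python
import os

def index_tree(root):
    """Walk a directory tree and build a URL -> file map.
    By default a file serves the URL matching its name; in the
    real system, a code file's index code could also register
    extra URLs beyond its own name when it runs."""
    urls = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            base, _ext = os.path.splitext(rel)
            # Static files: URL anchored at the file name.
            urls["/" + base.replace(os.sep, "/")] = path
    return urls
```

Because the walk is driven by change detection and cached, re-indexing on every request stays cheap.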
Some of those files act as filtering middleware, processing the input and output of some or all URLs and passing along what they don't change.
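A minimal sketch of that middleware shape (the names and structure here are my own invention, not the described system's): a filter wraps a handler, rewrites the responses it cares about, and passes everything else through untouched:

```python
def make_banner_filter(handler):
    """Middleware: prepend a banner comment to HTML responses
    under /docs/, and pass every other response through unchanged."""
    def filtered(url):
        body = handler(url)
        if url.startswith("/docs/") and body.startswith("<html>"):
            return body.replace("<html>", "<html><!-- banner -->", 1)
        return body
    return filtered
```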
Updating static Markdown, HTML, and CSS, dynamic content, code, and JSON data is done by 'git push' or 'rsync', or just by editing files in place if you prefer. The server automatically detects file changes, and keeps track of every code, file, and data dependency used to calculate a response. The full dependency structure for each response calculation is recorded with each cached response.
Cached responses include both full HTTP responses, and components and data available for use in the manner of a subrequest. If a previously cached response depends on a file that has since changed or been removed, or even on a file that was previously absent but is now present, or on another cache condition (a JSON or SQLite file change, the logged-in user, etc.), the cached response is invalid and must be regenerated. Regeneration is a combination of on-demand and ahead-of-time, so the system can respond with the speed of a static site for small finite page collections while behaving correctly with large or infinite ones. Some code updates trigger a process restart, because some code can't be safely unloaded or replaced; other code is fine to update in place, and this entire process is well automated. In practice it's pleasant and fast to use: edit a file and see the result immediately on reload, with no compile/build step or extra actions.
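The invalidation rule described above can be sketched like this (a toy version using only file mtimes; the real system also tracked conditions like JSON/SQLite changes and the logged-in user): each cached response stores a snapshot of every dependency, including files that were absent at generation time, and any drift invalidates it:

```python
import os

def snapshot(paths):
    """Record the mtime of each dependency. None marks a file
    that was absent when the response was generated, so a file
    appearing later also invalidates the cache."""
    return {p: (os.path.getmtime(p) if os.path.exists(p) else None)
            for p in paths}

class DepCache:
    def __init__(self):
        self.entries = {}  # url -> (response, dependency snapshot)

    def get(self, url, deps, regenerate):
        entry = self.entries.get(url)
        if entry is not None:
            response, snap = entry
            # Valid only if every dependency is exactly as recorded:
            # changed, removed, or newly created files all invalidate.
            if snap == snapshot(snap):
                return response
        response = regenerate()
        self.entries[url] = (response, snapshot(deps))
        return response
```

A real implementation would regenerate some entries ahead of time rather than only on demand, as the post describes.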
The dependency structure partly reaches the browser, so requests for cached responses can be served more efficiently, sometimes even with zero latency. In some circumstances, file changes cause events that ripple through to the browser, updating in-page components in real time without a refresh. The result is a little like Meteor or LiveView, except that almost everything is made from static page files in Markdown or HTML, code files, and JSON data files, and the set of available pages (and "table of contents" pages) is built by indexing those files.
In practice it's mostly writing Markdown, which is great for putting content first. Or Markdown templates: content that varies a little according to data. But with extensions to drop in useful rendered components (graphs, generated images, templated CSS, transclusions for headers/footers, etc.) and dynamic components (live-updating when the underlying data files are edited).
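A toy version of that kind of extension (the `{{...}}` marker syntax here is invented, not the system's actual one): scan the Markdown for component markers and replace each with the named component's rendered output, leaving unknown markers alone:

```python
import re

def render(markdown_text, components):
    """Replace {{name arg}} markers with the output of the named
    component function; markers without a registered component
    are passed through unchanged."""
    def substitute(match):
        name, _, arg = match.group(1).partition(" ")
        component = components.get(name)
        return component(arg) if component else match.group(0)
    return re.sub(r"\{\{(.+?)\}\}", substitute, markdown_text)
```

A "dynamic" component in this picture is just one whose output depends on a data file, so the dependency tracking above knows to re-render the page (and push the change to the browser) when that file is edited.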
It even serves PDFs, and thumbnails of those PDFs, as in-page components. The PDF content is HTML rendered by running Chrome on the fly from within an in-page component, with Chrome told to fetch a subrequest that serves it the contents of that very same component. This makes for some pretty lists of downloadable PDFs, all generated from JSON data. It probably sounds complicated, but it was actually a fairly simple single file of code, dropped into a directory to make the component available by name inside the other quasi-static Markdown files.
One small VM served a few thousand requests per second last time I checked. Not as fast as a Rust server, but good enough for my uses. It made heavy use of Perl coroutines to serve and cache concurrently, and nginx to make routing and static-serving decisions. Perl coroutines are not commonly used (for ideological reasons, I think), but they work very well.
I don't use the system any more, but it was the nicest I've used.
I'm not saying Cloudflare Workers are a better proposition right now (their ecosystem is a complete shitshow), but the idea has lots of potential, and will probably be the future of computing[!].
!: That is, if the decentralized web fails to take off in the next 5 years.