No, it's not.
CGI is Common Gateway Interface, a specific technology and protocol implemented by web servers and applications/scripts. The fact that you do a fork+exec for each request is part of the implementation.
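To make the "protocol" point concrete, here's a minimal sketch of a CGI script in Python (the handler logic is factored into a function purely for illustration; a real script just writes to stdout). The protocol part is exactly this: request metadata arrives in environment variables like `QUERY_STRING`, the body on stdin, and the response is headers plus a blank line plus a body on stdout, from a freshly fork+exec'd process each time:

```python
#!/usr/bin/env python3
import os
import sys

def handle_request(environ):
    # CGI hands us the request via environment variables
    # (QUERY_STRING, REQUEST_METHOD, PATH_INFO, ...).
    body = "Query string: " + environ.get("QUERY_STRING", "")
    # The response is headers, a blank line, then the body.
    return "Content-Type: text/plain\r\n\r\n" + body

if __name__ == "__main__":
    # The web server fork+execs this script once per request
    # and pipes whatever we print back to the client.
    sys.stdout.write(handle_request(os.environ))
```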
"Serverless" is a marketing term for a fully managed offering: you give a PaaS some executable code and it runs it per request for you, in isolation. What it does per request is not defined, since there is no standard and everything is fully managed. Rather than processes, serverless platforms usually operate at the level of containers or micro-VMs, and can "pre-warm" them to try to eliminate latency; but in the serverless case the user gets a programming model, not a protocol. (It could of course be CGI under the hood, but when none of the major platforms actually do that, how fair is it to call serverless a "marketing term for CGI"?)
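Contrast that with what the serverless user writes: a handler function in whatever shape the platform dictates, with no protocol in sight. A hypothetical Lambda-style handler (the `event` shape and the module-level counter are assumptions for illustration; on a real platform, module-level state like `calls` survives between invocations only when the container is reused, i.e. a "warm start"):

```python
# Module-level state: initialized once per container, not per request.
# Whether it persists across invocations depends on container reuse,
# which the platform decides -- the code can't rely on it either way.
calls = 0

def handler(event, context=None):
    # The platform calls this per request; the "protocol" (HTTP parsing,
    # process/container lifecycle) is entirely the platform's business.
    global calls
    calls += 1
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name} (invocation {calls})"}
```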
CGI and serverless are only similar in exactly one way: your application is written "as-if" the process is spawned each time there is a request. Beyond that, they are entirely unrelated.
> A couple of years ago my (now) wife and I wrote a single-event Evite clone for our wedding invitations, using Django and SQLite. We used FastCGI to hook it up to the nginx on the server. When we pushed changes, we had to not just run the migrations (if any) but also remember to restart the FastCGI server, or we would waste time debugging why the problem we'd just fixed wasn't fixed. I forget what was supposed to start the FastCGI process, but it's not running now. I wish we'd used CGI, because it's not working right now, so I can't go back and check the wedding invitations until I can relogin to the server. I know that password is around here somewhere...
> A VPS would barely have simplified any of these problems, and would have added other things to worry about keeping patched. Our wedding invitation RSVP did need its own database, but it didn't need its own IPv4 address or its own installation of Alpine Linux.
> It probably handled less than 1000 total requests over the months that we were using it, so, no, it was not significantly better to not fork+exec for each page load.
> You say "outdated", I say "boring". Boring is good. There's no need to make things more complicated and fragile than they need to be, certainly not in order to save 500 milliseconds of CPU time over months.
To be completely honest with you, I actually agree with your conclusion in this case. CGI would've been better than Django/FastCGI/etc.
Hell, I'd go as far as to say that in that specific case a simple PHP-FPM setup would've been more than sufficient. Of course, PHP-FPM is FastCGI under the hood, but for the most part it gives you the programming model you get with CGI.
But that's kind of the thing. I'm saying "why would you want to fork+exec 5000 times per second" and you're saying "why do I care about fork+exec'ing 1000 times in the total lifespan of my application". I don't think we're disagreeing in the way that you think we are disagreeing...