Part of Java's rise was that C/C++ were error-prone (and Java kept a similar syntax), but this was surely intermingled with a full-scale marketing assault by Sun Microsystems, who at the time had big multi-socket SMP servers they wanted to sell with Solaris, with Solaris/Java threading as part of the pitch. Really, for a decade or two before that, the focus had been on true MMU-based, hardware-enforced isolation with OS-kernel clean-up (more like CHERI these days), not the compiler-enforced safety that Rust does.
I think you could have something more ergonomic than Perl/Python ever was, and practically as fast as C/Rust, with Nim (https://nim-lang.org/). E.g., I just copied that guy's benchmark with Nim's stdlib std/cgi and got over 275M CGI requests/day (~3,200/s) to localhost on a 2016 CPU, using only 2 requester & 2 HTTP server threads. With a nice DSL (easily written if you don't like any current ones) you could get the "coding overhead" down to a tiny footprint. In fairness, I did no SQLite work at all, but he was also using a computer over 4x bigger and probably a GHz faster, with some IPC lift as well. So, IF you had the network bandwidth (hint: you usually don't!), you could probably support billions of hits/day off a single server.
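For anyone who hasn't written one lately: a CGI handler is just a process that reads the request from environment variables (and stdin for POST bodies) and writes headers plus a body to stdout. The Nim std/cgi version above has the same shape as this Python sketch (illustrative only, not the benchmark code):

```python
# A minimal CGI responder, sketched in Python for illustration -- the
# Nim std/cgi program from the benchmark has the same shape.  The web
# server passes the request in environment variables (QUERY_STRING,
# REQUEST_METHOD, ...); the script writes a header block, a blank
# line, then the body to stdout, and exits.
import os
import sys
from urllib.parse import parse_qs

def respond(environ):
    query = parse_qs(environ.get("QUERY_STRING", ""))
    name = query.get("name", ["world"])[0]   # hypothetical ?name=... parameter
    return f"Content-Type: text/plain\r\n\r\nhello, {name}\n"

if __name__ == "__main__":
    sys.stdout.write(respond(os.environ))
```

The per-request cost that dominates at these rates is fork/exec plus runtime start-up, which is exactly where a small compiled Nim binary beats an interpreter.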
To head off some lazy complaints: GC is just not an issue for a single-threaded Nim program whose lifetime is hoped/expected to be short anyway. In many cases (just as with CLI utilities!) you could probably let the OS reap memory on exit, though of course it always "all depends" on a lot of context. Nim uses reference counting anyway (ARC/ORC), whereas most "fighting the GC" is really fighting a separate GC thread (Java again, Go, D, etc.) trashing CPU caches or consuming DIMM bandwidth. For this use, you would probably care more about a statically linked binary, so you don't pay the ld.so shared-library setup overhead on every `exec`.
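The refcounting point is easy to see in CPython, which (standing in here for Nim's ARC/ORC) frees an object on the spot when its last reference drops, with no separate collector thread; a finalizer makes the timing visible:

```python
# Demonstrating deterministic reclamation under reference counting.
# CPython is a stand-in for Nim's ARC/ORC: in both, an object is freed
# immediately when its refcount hits zero, on the thread that dropped
# the reference -- no background GC thread competing for caches.
freed = []

class Scratch:
    """Hypothetical per-request scratch buffer; logs its own reclamation."""
    def __del__(self):
        freed.append("scratch")

def handle_request():
    buf = Scratch()      # allocated for this request only
    return "ok"          # buf's refcount hits zero right here

result = handle_request()
# 'freed' is populated the moment handle_request returns -- there is
# no later GC pause to wait for.
```

That determinism is also why the "just exit and let the kernel reap everything" trick composes well with short-lived CGI processes: nothing is deferred to a collector that might never get to run.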