Python definitely can "use all cores" on a machine via the multiprocessing package, so I'm not sure what you mean?
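For example, a quick sketch with the stdlib (function names are mine, nothing fancy):

```python
from multiprocessing import Pool

def square(n):
    # Runs in a separate OS process with its own interpreter and GIL,
    # so the work genuinely spreads across cores.
    return n * n

if __name__ == "__main__":
    with Pool() as pool:  # one worker per core by default
        print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The catch is that each worker is a full OS process: arguments and results get pickled across process boundaries, which is far heavier than a BEAM message send.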
I think this is one of the reasons Elixir isn't as popular: people assume that Node.js or Python can do what Elixir or Erlang do, but they can't.
On the BEAM, running 10,000 lightweight processes is normal. Phoenix is designed so that every incoming HTTP request gets its own lightweight process.
How does one manage that number of lightweight processes? The runtime’s scheduler keeps track of every single process so nothing is orphaned. It is preemptive, so no lightweight process can starve out another (though there are some exceptions).
They can also suspend cheaply, so they work well with async IO.
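For comparison, Python's asyncio can also juggle that head-count of suspended tasks cheaply (a rough sketch, numbers illustrative), but note the difference from the BEAM: these tasks are cooperatively scheduled, so one CPU-hogging task stalls all the rest.

```python
import asyncio
import time

async def handle(i):
    # Each suspended coroutine costs kilobytes, not an OS thread.
    await asyncio.sleep(0.1)
    return i

async def main():
    start = time.perf_counter()
    # 10,000 concurrent "requests", all sleeping at once.
    results = await asyncio.gather(*(handle(i) for i in range(10_000)))
    return len(results), time.perf_counter() - start

n, elapsed = asyncio.run(main())
print(f"{n} tasks in {elapsed:.2f}s")  # far less than the 1000s it would take serially
```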
The closest thing to this is the new virtual threads feature in recent Java, but I don’t think Java has the same kind of properties that allow it to manage them as well as the BEAM. There is a lot more to Elixir than being able to use all the cores.
That can NEVER happen on the Erlang virtual machine (the BEAM), because of the preemptive scheduler. This is only one of hundreds of examples of why the BEAM is the right choice for web systems that need soft real-time behavior.
Node.js, however, queues bits of execution rather than messages, so errors can get lost. Then you have unhandled promise rejections… and end up relying on a linter to find the places where you forgot to write something to catch the error. I was told this is a good feature of the linter.
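Python's asyncio has the same failure mode, for what it's worth. A small sketch (names are mine) of an error that vanishes silently unless you remember to go looking for it:

```python
import asyncio

async def fails():
    # An error raised in a fire-and-forget task...
    raise ValueError("boom")

async def main():
    task = asyncio.create_task(fails())  # nobody awaits this task
    await asyncio.sleep(0.01)            # main carries on, unaffected
    # The error did not propagate anywhere. It sits silently on the
    # task object until someone remembers to check for it.
    return task.exception()

err = asyncio.run(main())
print(type(err).__name__)  # ValueError
```

Nothing crashes and nothing propagates; the ValueError just sits on the task object.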
Contrast that with the BEAM languages. An unhandled error crashes the lightweight process, and any linked process (such as a supervisor) can decide what to do about the crash (restart it, or crash as well). We don’t even need liveness probes (as in Kubernetes), because the supervisor is informed immediately (push, not pull).
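You can crudely approximate that supervision pattern with plain OS processes, just to illustrate the push-not-pull part (this is a sketch of the idea, not how the BEAM implements it, and OS processes are orders of magnitude heavier than BEAM processes):

```python
import multiprocessing as mp
from multiprocessing.connection import wait

def worker():
    # Simulate a process that dies from an unhandled error.
    raise RuntimeError("boom")

def supervise(max_restarts=3):
    """Crude one-for-one supervisor sketch (hypothetical helper):
    restart the child each time it dies abnormally, up to a limit."""
    crashes = 0
    while crashes < max_restarts:
        child = mp.Process(target=worker)
        child.start()
        wait([child.sentinel])   # push-style: unblocks the moment the child exits
        child.join()
        if child.exitcode == 0:  # clean exit: nothing to do
            break
        crashes += 1             # abnormal exit: loop again, i.e. restart
    return crashes

if __name__ == "__main__":
    print(supervise())  # 3
```

`wait()` on the child's sentinel unblocks the instant the child dies, so the "supervisor" is told about the crash rather than having to poll for it.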
You don’t need a linter to make sure errors have a sensible default handler, because that is handled by design.
Node.js, on the other hand, is error-prone _by design_, even though it doesn’t block on IO either.
No amount of typespecing is going to fix that.