So I agree with you that Linus is presenting a straw man and your comment shouldn’t have been downvoted.
Did this advantage play out in practice? If your filesystem module goes down, then every module that talks to it needs to handle the failure gracefully, or the crash will still effectively take down the system.
Or the module core dumps and the system keeps chugging along, but everything is locked up because its clients are blocked waiting for a reply from the crashed module. Did MINIX have a way to gracefully restart crashed modules?
If the file system process crashes then in theory the OS would simply relaunch it.
But your core services should be stable; the benefit is more about extensions. For example, you may want virtual file systems (ftp, sshfs, etc.), which weren't possible in the monolithic-kernel world until FUSE came along.
As for how it played out in practice: I think microkernels lost early on because of performance, and things like FUSE were later created to bolt the most obvious extension mechanisms onto otherwise non-extensible monolithic kernels.
Also, for anything stateful, like a filesystem, simply relaunching it may not be sufficient. You need to make sure it hasn't lost any data in the crash, and possibly roll back state changes in related modules.
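The "just relaunch it" idea, and what it fails to recover, can be sketched in a few lines of Python. The "server" here is a hypothetical child process standing in for a user-space filesystem service, not anything MINIX actually does:

```python
import subprocess
import sys

# A hypothetical "filesystem server" that crashes shortly after starting.
CRASHY_SERVER = "import sys, time; time.sleep(0.05); sys.exit(1)"

def supervise(server_code, max_restarts=3):
    """Naive supervisor: relaunch the server process whenever it exits
    abnormally. Returns the number of restarts performed."""
    restarts = 0
    proc = subprocess.Popen([sys.executable, "-c", server_code])
    while True:
        proc.wait()
        if proc.returncode == 0:
            return restarts      # clean shutdown, nothing to do
        if restarts >= max_restarts:
            return restarts      # give up instead of restart-looping forever
        restarts += 1
        # Note what is NOT recovered here: in-flight requests and any
        # unflushed state died with the old process. That gap is exactly
        # the stateful-service problem described above.
        proc = subprocess.Popen([sys.executable, "-c", server_code])

if __name__ == "__main__":
    supervise(CRASHY_SERVER)
```

The restart loop itself is trivial; the hard part, as the comment in the code notes, is everything the loop doesn't do: journaling, replaying in-flight requests, and unwinding state in the server's clients.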
What is your basis for this claim?
I am only aware of QNX as a commercial microkernel (and real-time OS) and that is widely used in cars, medical devices, etc. with a strong reputation for reliability.
For many tasks, Linux is good enough and free, which is hard to beat. But that does not mean Linus is automatically correct in his statements.
At the same time that Windows NT was being billed as a microkernel, Linux was outperforming it and had a reputation for being more reliable. Ditto with Mach. And GNU Hurd famously was hard to get running at all.
QNX is highly reliable, but is also a specialized use case.
I am of course including supercomputers, embedded hardware, and hand-held phones. Admittedly, Windows has greater support for running consumer desktop hardware, but that has to do with how small the Linux marketshare is there, and is hardly an indictment of Linus' work.
1. Things like drivers and filesystems are usually written by a small handful of vendors, who already have rigorous engineering cultures (hardware is a lot less forgiving than, say, web design) and a large base of demanding users who will rapidly complain and/or sue you if you get it wrong. When was the last time you personally had a crash due to a driver or filesystem issue? It used to happen semi-frequently in the Win95 days, but there was a strong incentive for hardware manufacturers to Fix Their Shit, and so all the ones who didn't went out of business.
2. You pay a hefty performance price for that stability - since the kernel is mediating all interactions between applications and drivers, every I/O call needs to go from app to kernel to driver and back again. There's been a lot of research put into optimizing this, but the simplest solution is just not to do it, and to put things like drivers & filesystems in the kernel itself.
3. The existence of Linux as a decentralized open-source operating system took away one of the main organizational reasons for microkernels. When the kernel is proprietary, then all the negotiations, documentation, & interactions needed to get code into an external codebase become quite a hassle, with everyone trying to protect their trade secrets. When it's open-source, you go look for a driver that does something similar, write yours, and then mail a patch or submit a pull request.
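The overhead in point 2 is easy to demo even from Python. This toy assumes POSIX (`os.fork`/`os.pipe`) and uses a pipe round-trip as a stand-in for real kernel IPC, so the numbers are only directional:

```python
import os
import time

def direct_read(buf):
    """Monolithic analogue: the 'driver' is just a function call away."""
    return len(buf)

def ipc_vs_call(n=2000):
    """Compare n pipe round-trips to a 'driver' process against n plain
    function calls. POSIX-only (uses os.fork)."""
    req_r, req_w = os.pipe()   # app -> driver
    rep_r, rep_w = os.pipe()   # driver -> app
    if os.fork() == 0:         # child: the user-space "driver"
        os.close(req_w)
        os.close(rep_r)
        while True:
            msg = os.read(req_r, 16)
            if not msg:        # pipe closed: parent is done
                os._exit(0)
            os.write(rep_w, b"ok")
    os.close(req_r)
    os.close(rep_w)

    t0 = time.perf_counter()
    for _ in range(n):
        os.write(req_w, b"read block 7")  # app -> kernel -> driver
        os.read(rep_r, 16)                # driver -> kernel -> app
    ipc_time = time.perf_counter() - t0

    os.close(req_w)            # closing the request pipe stops the child
    os.wait()

    t0 = time.perf_counter()
    for _ in range(n):
        direct_read(b"read block 7")      # one in-process call
    call_time = time.perf_counter() - t0
    return ipc_time, call_time
```

The pipe loop comes out far slower than the in-process loop. Real microkernel IPC is much better optimized than a Python pipe, but the structural cost - two context switches and two kernel crossings per request instead of one function call - is the same shape.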