.NET uses a thread pool driven by a hill-climbing algorithm. Each worker thread has its own local work queue and can also steal work from other workers' queues. If that is not enough and the queues fill up faster than the items in them get processed, the pool spawns additional threads until the processing lag is counteracted (within reason).
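The per-worker local queues are directly observable through the `preferLocal` flag on `ThreadPool.QueueUserWorkItem` (available on .NET Core 3.0+). A minimal sketch, assuming a modern runtime: an item queued from a pool thread with `preferLocal: true` lands in that worker's own local queue, and an idle worker may steal it, which is the work-stealing part.

```csharp
using System;
using System.Threading;

var done = new ManualResetEventSlim();

// Get onto a pool thread first, since local queues belong to pool workers.
ThreadPool.QueueUserWorkItem<object?>(_ =>
{
    // We are on a pool thread here, so preferLocal: true targets this
    // worker's own local queue rather than the global queue.
    ThreadPool.QueueUserWorkItem<object?>(
        __ => done.Set(),
        null,
        preferLocal: true);
}, null, preferLocal: false);

done.Wait(TimeSpan.FromSeconds(5));
Console.WriteLine($"work item completed: {done.IsSet}");
```

Items in a local queue are processed LIFO by their own worker (better cache locality) but stolen FIFO by others, so `preferLocal` is a hint about expected locality, not a scheduling guarantee.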
This historically allowed it to take a lot of punishment, but it also led to many legacy codebases being shoddily written with regard to async/await: either the thread pool would take a while to find the optimal number of threads, or devs would simply set a really high minimum thread count, increasing overhead.
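The legacy workaround mentioned above is a one-liner against the real `ThreadPool.SetMinThreads` API. A sketch, with the value 200 being purely illustrative and not a recommendation:

```csharp
using System;
using System.Threading;

// Inspect the defaults first; the worker minimum usually starts near the
// processor count.
ThreadPool.GetMinThreads(out int workerMin, out int iocpMin);
Console.WriteLine($"default min worker threads: {workerMin}");

// Raise the floor so hill climbing never has to ramp up. This avoids the
// slow-start latency at the cost of extra memory and scheduler overhead
// in every process, whether it needs the threads or not.
bool ok = ThreadPool.SetMinThreads(200, iocpMin);
Console.WriteLine($"raised min worker threads to 200: {ok}");
```

`SetMinThreads` returns `false` if the runtime rejects the request, so the result should be checked rather than assumed.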
In .NET 6, the thread pool was rewritten in C# and gained the ability to proactively detect blocked threads outside of the hill-climbing algorithm and inject new threads immediately. This made it much more resilient against degenerate patterns such as a for loop queuing Task.Run(() => Thread.Sleep(n)). Naturally it is still not ideal - the operating system can only manage so many threads - but it is about as foolproof as thread pool implementations get.
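The degenerate pattern from the text can be sketched directly, using the real `ThreadPool.ThreadCount` property (.NET Core 3.0+) to watch the pool react. On .NET 6+, blocked-thread detection injects replacement threads almost immediately instead of waiting seconds for hill climbing; the task count and sleep durations below are arbitrary demo values.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

int before = ThreadPool.ThreadCount;

// Degenerate pattern: queue work items that do nothing but block a
// pool thread, starving genuinely runnable work.
var tasks = new Task[32];
for (int i = 0; i < tasks.Length; i++)
    tasks[i] = Task.Run(() => Thread.Sleep(2000)); // blocks a worker

Thread.Sleep(500); // give the pool a moment to detect the blocking
int during = ThreadPool.ThreadCount;
Console.WriteLine($"threads before: {before}, while blocked: {during}");

Task.WaitAll(tasks);
```

Run on .NET 6 or later, the "while blocked" count typically jumps well above the starting count within that half second; on older runtimes the same experiment ramps up far more slowly.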
As of today, it is in a really good spot: with tuned hill climbing and blocked-thread detection, .NET processes tend to have thread counts that more or less reflect the nature of the work they are doing (if you don't do any "parallel" work while using async, the pool will retire most threads, sometimes leaving just two).