If you can come to my company and replace our 96-core SQL Server boxes with SQLite I'll pay you any salary you ask for.
Whether I could perform this miracle depends entirely on your specific use cases. Many people who react this way are coming from a place of heavy reliance on vendor lock-in features such as SSIS and stored procedures.
If you are ultimately just trying to get structured business data to/from disk in a consistent manner and are seeking the lowest latency and highest throughput per request, then SQLite might be what you are looking for.
The specific core counts or other specifications are meaningless. SQLite scales perfectly on a single box, and if you have some good engineers you might even be able to build a clustering protocol at the application layer in order to tie multiple together. At a certain point, writing your own will get cheaper than paying Microsoft for the privilege of using SQL Server.
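To make the "in-process" point concrete, here is a minimal sketch using Python's built-in sqlite3 module (the table and values are illustrative, and a real deployment would use a file path rather than ":memory:"). There is no server, no connection pool, and no network round trip; a query is just a function call into a library linked into your process.

```python
import sqlite3

# In-process database: no server process, no network hop.
# ":memory:" is for illustration only; use a file path in practice.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
con.executemany("INSERT INTO orders (total) VALUES (?)", [(9.99,), (24.50,)])
con.commit()

# A read is a direct library call, not a request to another machine.
grand_total = con.execute("SELECT SUM(total) FROM orders").fetchone()[0]
print(grand_total)  # 34.49
```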
I can think of a couple of places I've worked where they had simple problems that could have been solved by some thinking and coding but instead were solved* by more expensive hardware.
In my experience, SQLite tends to be faster for read-heavy workloads: being in-process gives it a natural advantage, since every read is a function call rather than a round trip to a server.
See: https://www.sqlite.org/wal.html
"Write transactions are very fast since they only involve writing the content once (versus twice for rollback-journal transactions) and because the writes are all sequential. Further, syncing the content to the disk is not required, as long as the application is willing to sacrifice durability following a power loss or hard reboot."
I also had a server with 96 cores, until we realized a developer had inadvertently made a query fire on every scroll event... it was a nice chunk of change saved.
What people conveniently leave out is that they wrote a serious wrapper around it that makes it behave much like conventional large-scale systems such as MSSQL or MySQL: https://bedrockdb.com/