LLMs aren't really simulating intelligence so much as echoing intelligence.
The problem is when the hype machine lets the echoes replace the original intelligence that spawned them; eventually those echoes fade into background noise, and we have to rebuild the original human intelligence from scratch.
I appreciate this; that is why I said "LLMs and other models". Knowing the probability relations between words, tokens, or concepts/thought vectors is important, and it can be supplemented by smaller embedded special-purpose models/inference engines and domain knowledge in those areas.
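To make the "probability relations between tokens" point concrete, here is a minimal sketch of querying a small pretrained language model for its next-token distribution. The choice of GPT-2, the Hugging Face `transformers` library, and the prompt are all illustrative assumptions, not anything from this thread:

```python
# Minimal sketch: inspect the probability a small LM assigns to each
# candidate next token. GPT-2 and the prompt are arbitrary choices.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Softmax over the final position gives the next-token distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for prob, token_id in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```

A domain-specific system could consult exactly this kind of distribution and then defer to a special-purpose model or rule engine where the LM's confidence is low.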
As I said, it is overhyped in some areas and underhyped in others.