The last time I had this discussion with people, I pointed out how LLMs consistently and completely fail at applying grammar production rules (obviously you tell them to apply the rules to words rather than single letters, so you aren't fighting the tokenizer/embedding).
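To make it concrete, here's a minimal sketch (my own illustration, not the exact test described above) of what a word-level production-rule task looks like: repeatedly rewrite a sequence of words using a fixed rule set until no rule fires. The rule set here is hypothetical; the point is that the procedure is mechanical, which is what makes the failure notable.

```python
def apply_rules(words, rules, max_steps=100):
    """Apply word-level production rules left-to-right until no rule fires."""
    words = list(words)
    for _ in range(max_steps):
        changed = False
        i = 0
        while i < len(words):
            for lhs, rhs in rules:
                # Rule fires if the lhs word sequence matches at position i.
                if words[i:i + len(lhs)] == lhs:
                    words[i:i + len(lhs)] = rhs
                    changed = True
                    break
            i += 1
        if not changed:  # fixed point reached
            break
    return words

# Hypothetical grammar: "A B" -> "B A", then "B" -> "C C"
rules = [(["A", "B"], ["B", "A"]), (["B"], ["C", "C"])]
print(apply_rules(["A", "B"], rules))  # → ['C', 'C', 'A']
```

A deterministic interpreter gets this right every time; the claim is that an LLM asked to trace the same rewrites step by step does not.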
LLMs do some amazing stuff but at the end of the day:
1) They're just language models. While many things can be described with languages, that idea doesn't capture everything — in particular, languages that aren't modeled, which is the whole point of a Turing machine: it isn't limited to any fixed model.
2) They're not human, and the value is always going to come from human socialization.