People keep repeating that LLMs are just predicting the next word, but at least with the more recent versions, this isn't true. E.g., LLMs generate their own intermediate or emergent goals, and they reason in a way that is more complex than autocomplete.
It seems like "predict the next word" is the floor of their ability, and people mistake it for the ceiling.
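For what it's worth, "next word prediction" is really just the sampling loop wrapped around the model. A minimal sketch of that loop, assuming the Hugging Face transformers library and the small gpt2 checkpoint purely for illustration:

```python
# Minimal sketch of the autoregressive decoding loop: each step picks
# one token, appends it, and feeds the longer sequence back in.
# Assumes the Hugging Face "transformers" library and the "gpt2" model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits  # shape (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()  # greedy: most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Everything interesting happens inside the forward pass that produces those logits; the loop around it is the trivially "autocomplete" part, which is exactly why it describes the floor rather than the ceiling.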