>I see LLM models not as the end of society,
That's because you're used to the scaling of normal technological development. Unfortunately, with AI we're unsure at this point whether our old paradigms still apply.
What happens to society really depends on where we are on the technology growth curve. If LLMs plateau for a while and we don't see significant growth in their abilities, then we'll just see massive technological disruption and a reshuffling of human priorities, kind of like when the car or plane showed up. On the other hand, if we pop the AGI rabbit out of the top hat in short order, it's going to surface deep, fundamental problems that humans worldwide have been ignoring because the status quo has not demanded such rapid change.