It's really not true that the whole of generative linguistics is just some kind of self-referential parlor game. A lot of what we take for granted today as legitimate avenues of research in cognitive science were opened up as a direct consequence of Chomsky's critique of behaviorism and his insight that the mind is best understood as a computational system. Ironically, any respectable LLM will be perfectly happy to cover this in more detail if you probe it with some key terms like "behaviorism", "cognitive revolution" or "computational theory of mind".
> Pirahã
It's very unlikely that Everett's key claims about Pirahã are true (see e.g. https://dspace.mit.edu/bitstream/handle/1721.1/94631/Nevins-...). But anyway, the universality of recursive clausal embedding has never been a central issue in generative linguistics. Chomsky co-authored one speculative paper late in his career suggesting that recursion in some (vague) sense might be the core computational innovation responsible for the human language faculty. Everett latched on to that claim and the dispute went public, which has given a false impression of its overall centrality to the field.
> So what's his progress?
I don't see how we can discuss this question without getting into specifics, so let me try to push things in that direction. Here is a famous syntax paper by Chomsky: https://babel.ucsc.edu/~hank/On_WH-Movement.pdf — it claims to achieve various things. Do you disagree, and if so, why?
> Japanese
A generative linguist studying Japanese wouldn't claim to be an expert on the structure of Japanese in your broad sense of the term. One thing to bear in mind is that generative linguistics is entirely opportunistic in its approach to individual languages. Generative linguists don't study Japanese because they give a fuck about Japanese as such (any more than physicists study balls rolling down inclined planes because balls and inclined planes are intrinsically fascinating). The aim is just to find data to distinguish competing hypotheses about the human language faculty, not to come to some kind of total understanding of Japanese (or whatever language).
> I guess most experts in LLMs are busy becoming billionaires right now; but if anything resembling Chomsky's universal grammar ever does get found to exist, then I'd guess it will be extracted computationally from models trained on corpora of different languages and not any human insight, in the same way that the Big Five personality traits fall out of a PCA.
This is a common pattern of argumentation. First, Chomsky's work is critically examined according to the highest possible scientific standards (every hypothesis must be strictly falsifiable, etc.). Then, when we finally get to see the concrete alternative proposal, it turns out to be nothing more than a promissory note.