This. Highly competent technical ICs in my circles continue to (metaphorically) scream at their juniors for submitting AI slop and being unable to describe what it's doing, why it's doing it that way, or how they could optimize it further, since all management cares about is "that it works".
Current models excel because of the corpus of the open internet they built off of (or rather, stole from). New languages aren't likely to see results as consistent as old ones, simply because these pattern matchers are trained on past history rather than new information (see Rust vs C). I think the fact that nobody's minting billions turning LLMs into trading bots should be pretty telling in that regard, since finance is a blend of relying on old data for models and intuiting new patterns from fresh data - in other words, it directly targets the weak points of LLMs (an inability to adapt to real-time data streams over the long haul).
AI's not going away, and I don't think even the doomiest of AI doomers is claiming or hoping for that. Rather, we're at a crossroads like you say: stakeholders want more money and higher returns (which AI promises), while the people doing the actual work are trying to highlight that internal strife and politics are the holdups, not a lack of brute-force AI. Meanwhile both sides are trying to rattle the proverbial prison bars over the threats to employment that real AI will pose (and the threats current LLMs pose to society writ large), but the booster side's actions (e.g., donating to far-right candidates who oppose the very social reforms AI CEOs claim are needed) betray their real motives: more money, fewer workers, more power.