But there are plenty of us who try to walk a middle course. A lot of us have changed our opinions over time. ("When the facts change, I change my mind.") I didn't think AI models were much use for coding a year ago. The facts changed. (Claude Code came out.) Now I do. Frankly, I'd be suspicious of anyone who hasn't changed their opinions about AI in the last year.
You can believe all these things at once, and many of us do:
* LLMs are extremely impressive in what they can do. (I didn't believe I'd see something like this in my lifetime.)
* Used judiciously, they are a big productivity boost for software engineers and many other professions.
* They are imperfect and make mistakes, often in weird ways. They hallucinate. There are some trivial problems that they mess up.
* But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.
* AI will change the world over the next 20 years.
* But AI companies are overvalued at present, and we're most likely in a bubble that will burst.
* Being in a bubble doesn't mean the technology is useless. (Cf. the dotcom bubble, or the railroad bubble of the 19th century.)
* AGI isn't just around the corner. (There's still no way models can learn from experience.)
* A lot of people making optimistic claims about AI are doing so for self-serving, boosterish reasons: they want to pump up their stock price or sell you something.
* AI has many potential negative consequences for society and mental health, and may be at least as nasty as social media in that respect.
* AI has the potential to accelerate human progress in ways that really matter, such as medical research.
* But anyone who claims to know the future is just guessing.