I like this approach of setting a minimum constraint. But I feel adding more will just make people ignore the point entirely.
LLMs are cool and some of the things they can do now are useful, even surprising. But when it comes to AI, business leaders are talking their books and many people are swept up by that breathless talk and their own misleading intuitions, frequently parroted by the media.
The "but human reasoning is also flawed, so I can't possibly understand what you mean!" objection cannot be sustained in good faith short of delusion.
Vs claims like:

"Total AI capex in the past 6 months was greater than US consumer spending."

Or: "AGI is coming."

Or: "AI agents will be able to do most white-collar work."
——
The paper is addressing the parts of the AI conversation, and the expectations around it, that sit squarely in the HYPE quadrant. There's money riding on the idea that AI is going to begin to reason reliably; that it will work as a ghost in the machine.
What we have seen over the last few years is a conscious marketing effort to rebrand everything ML as "AI" and to use terms like "Reasoning" and "Extended Thinking" that, for many non-technical people, give the impression the systems are doing far more than they actually are.
Many of us here can look at his research and say: well, yeah, we already knew this. But there is a very well funded effort to oversell what these systems can actually do, and it is reaching the people who ultimately make the decisions at companies.
So the question is no longer whether AI agents will be able to do most white-collar work. They can probably fake it well enough to accomplish a few tasks, and management will see that. The question is whether the output will actually be valuable long term, rather than just delivering short-term gains.