> [1] In traditional AI, agents are defined entities that perceive and act upon their environment, but that definition is less useful in the LLM era — even a thermostat would qualify as an agent under that definition.
I'm a huge believer in the power of agents, but this kind of complete ignorance of the history of AI gets frustrating. This statement betrays a gross misunderstanding of how simple agents have long been viewed.
If you're serious about agents, then Minsky's The Society of Mind should be on your desk. From the opening chapter:
> We want to explain intelligence as a combination of simpler things. This means that we must be sure to check, at every step, that none of our agents is, itself, intelligent... Accordingly, whenever we find that an agent has to do anything complicated, we'll replace it with a subsociety of agents that do simpler things.
Instead this write-up completely ignores the logic of one of the seminal works on the topic (and it's okay to disagree with Minsky, I sure do, but you need to at least acknowledge him) and immediately assumes the future of agents must be immensely complex.
Automatic thermostats existed in the early days of research on agents, and the key to a thermostat being an agent is its ability to communicate with other agents automatically and collectively perform complex actions.
The thermostat has also been used to great effect in an exploration of agency by Daniel Dennett, in his famous "The Intentional Stance".
(I accidentally typed "Intensional Stance", which really would be a fascinating book to read!)
Ultimately, that effort failed, but I don't see any awareness of that considerable volume of work reflected in today's use of the word "agent". If nothing else, there was a lot of work on the use cases and human factors.
It's just a bit disheartening to know that so much work, by hundreds of researchers (at least), over 10+ years, has simply slipped into irrelevance.
A field retitles itself, and suddenly no one is aware of the still-applicable research from before the name change.
More broadly, that probably reflects the fact that no modern courses teach surveys of the earlier material.
Side note: are there other similarly seminal books on AI you can recommend?
Most (if not all) agent frameworks use models like GPT-4 and Claude Opus, which are heavily RLHF'd.
[0]: https://arxiv.org/abs/2406.05587
[1]: https://news.ycombinator.com/item?id=40702617
Non-technical stakeholders also get fixated on this idea of AI agents autonomously working together. Can we save money? Perhaps even replace some people? Without a solid grounding in reality, and with a wide enough imagination, it's easy to see how that conclusion gets drawn.
While agents may have a place, we in the AI space will face a credibility problem if this is pushed as the answer. There are plenty of wins available to an organization with no "AI" in place. Retrieval-Augmented Generation (RAG) is hard in its own right, but there is a reasonable path to success now.
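To make the RAG point concrete, here is a toy sketch of the core retrieve-then-prompt loop. Everything here is illustrative: the `embed`, `cosine`, and `retrieve` names are mine, and a bag-of-words counter stands in for a real embedding model, which is where most of the actual difficulty lives.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A miniature document store; in practice this is a vector database.
docs = [
    "Thermostats regulate temperature automatically.",
    "Minsky's Society of Mind decomposes intelligence into simple agents.",
    "RAG grounds a language model's answer in retrieved documents.",
]

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Stuff the retrieved context into the prompt sent to the LLM.
question = "what does RAG do?"
context = retrieve(question)[0]
prompt = f"Answer using this context:\n{context}\n\nQ: {question}"
```

Even this skeleton hints at why RAG is hard in practice: chunking, embedding quality, and ranking each carry far more weight than the glue code above.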
Otherwise, expect disappointment. Then the whole space will be lumped together as a failure.
A.k.a. 'If you really understood this, you'd understand we build this in terms of smaller agents. That's why you should pay us big money to build the technically simplest part of this use case...'
Controversial opinion there, especially given the hand-tuning that those two go through.