According to whom? Everyone who's anyone is trying to create highly autonomous systems that do useful work. That's completely unrelated to modeling them on humans or comparing them to humans.
If it had an autocomplete interface, you wouldn't be claiming that. Yet it would still be the same model.
(Nobody's arguing that Google Autocomplete is more human than software - at least, I hope they're not).
Backronym it to Advanced Inference and the argument goes away.
Nearly every component is based on humans:
- neural net
- long/short term memory
- attention
- reasoning
- activation function
- learning
- hallucination
- evolutionary algorithm
If you're just consuming an AI to build a React app then you don't have to care. If you are building an artificial intelligence then in practice everyone who's anyone is very deliberately modeling it on humans.
Nothing in that list is based on humans, even remotely. Neural networks alone started as a vague form of biomimicry early on, and the current academic biomimicry approaches all suck because they map poorly onto available semiconductor manufacturing processes. Attention is misleadingly called that, reasoning is ill-defined, etc.
LLMs are trained on human-produced data, and ML in general shares many fundamentals and emergent phenomena with biological learning (a lot more than some people talking about "token predictors" realize). That's it. Producing artificial humans or imitating real ones was never the goal nor the point. We can split hairs all day long, but the point of AI as a field since the 1950s is to produce systems that do something that is considered only doable by humans.
The earliest reference I know off the top of my head is Aristotle, which would be the 4th century BCE.
> I can start with theorem provers
If you're going to talk about theorem provers, you may want to include the medieval theory of obligations and their game-semantic-like nature. Or the Socratic notion of a dialogue in which arguments are arrived at via a back and forth. Or you may want to consider that "logos", from which we get "logic", means "word". And if you contemplate these things for a minute or two you'll realize that logic since ancient times has been a model of speech, and often specifically of speaking with another human. It's a way of having words (and later written symbols) constrain thought to increase the signal-to-noise ratio.
Chess is another kind of game played between two people. In this case it's a war game, but that seems not so essential. The essential thing is that chess is a game and games are relatively constrained forms of reasoning. They're modeling a human activity.
By 1950, Alan Turing had already written about the imitation game (or Turing test), which evaluated whether a computer could be said to be thinking based on its ability to hold a natural language conversation with humans. He also designed an early chess program and was explicitly thinking about artificial intelligence as a model of what humans could do.
> Attention is misleadingly called that, reasoning is ill-defined,
None of this dismissiveness bears on the point. If you want to argue that humans are not the benchmark and model of intelligence (which frankly I think is a completely indefensible position, but that's up to you) then you have to argue that these things were not named or modeled after human activities. It's not sufficient that you think their names are poorly chosen.
> Producing artificial humans or imitating real ones was never the goal nor the point.
Artificial humans is exactly the concept of androids or humanoid robots. You are claiming that nobody has ever wanted to make humanoid robots? I'm sure you can't believe that but I'm at a loss for what point you're trying to make.
> 1950s is to produce systems that do something that is considered only doable by humans.
Unless this is a typo and you meant to write that this was NOT the goal, then you're conceding my point that humans are the benchmark and model for AI systems. They are, after all, the most intelligent beings we know to exist at present.
And so to reiterate my original point, talking about AI with the constraint that you can't compare them to humans is totally insane.
Neural networks are not like brains. They don’t grow new neurons. A “neuron” in an artificial neural net is just a scalar activation: a single floating-point number, sometimes even quantized down to a 4-bit int. Their degrees of freedom are highly limited compared to a brain. Most importantly, the brain does not do backpropagation the way an ANN does.
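To make the comparison concrete: the entire state of one artificial "neuron" is a weight per input, a bias, and one output float. A minimal sketch (NumPy; the names and values are illustrative, not from any real model):

```python
import numpy as np

# One artificial "neuron": a weight per input, a bias, one scalar output.
# Compare with a biological neuron's dendritic trees, ion channels,
# neurotransmitters, and spike timing.
weights = np.array([0.5, -0.3, 0.8])
bias = 0.1

def neuron(x):
    # weighted sum followed by a nonlinearity (ReLU here);
    # the neuron's whole "state" is this one float
    return max(0.0, float(weights @ x + bias))

activation = neuron(np.array([1.0, 2.0, 3.0]))
```

That single float is what gets quantized down to 4 bits in a compressed model.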
LSTMs have about as much to do with brain memory as RAM does.
Attention is a specific mathematical operation applied to matrices.
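The operation in question is scaled dot-product attention; here is a minimal NumPy sketch (shapes and names are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d) matrices; nothing here requires a brain metaphor
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise query/key similarity
    # row-wise softmax turns scores into mixing weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted average of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
```

It's matrix multiplies and a softmax; "attention" is a (contested) name for the mixing weights.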
Activation functions are interesting because originally they were more biologically inspired and people used sigmoid. Now people tend to use simpler ones like ReLU or its leaky cousin. Turns out what’s important is creating nonlinearities.
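The shift described above is easy to see side by side; a quick sketch (NumPy, function names are just the conventional ones):

```python
import numpy as np

def sigmoid(x):
    # smooth and saturating; loosely inspired by a neuron's firing rate
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # piecewise linear, no biological pretensions; just a cheap nonlinearity
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # the "leaky cousin": a small negative-side slope avoids dead units
    return np.where(x > 0, x, alpha * x)
```

Any of these works as the nonlinearity between linear layers, which is the point: the biological resemblance turned out to be optional.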
Hallucinations in LLMs have to do with the fact that they’re statistical models not grounded in reality.
Evolutionary algorithms, I will give you that one although they’re way less common than backprop.
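For contrast with backprop, a toy (1+1) evolution strategy; the fitness function and parameters here are made up for illustration:

```python
import random

def evolve(fitness, genome_len=8, generations=200, sigma=0.1, seed=0):
    # (1+1) evolution strategy: mutate, keep the child only if it improves.
    # Selection on outcomes; no gradients or backprop anywhere.
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(genome_len)]
    best = fitness(parent)
    for _ in range(generations):
        child = [g + rng.gauss(0, sigma) for g in parent]
        score = fitness(child)
        if score > best:
            parent, best = child, score
    return parent, best

# toy fitness: maximize -sum(x^2), optimum at the all-zero genome
genome, score = evolve(lambda g: -sum(x * x for x in g))
```

This is the Darwinian loop in miniature, and also why it's slower than backprop: it gets one scalar of feedback per candidate instead of a gradient per parameter.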
> the brain does not do back propagation
Do we know this? Ruling this out is tantamount to claiming that we know how brains do learn. My suspicion is that we don't currently know, and that it will turn out that, e.g., sleep does something that is a coarse approximation of backprop.
I don't know where this "the things have similar names but they're unrelated" trope is coming from, but it's not from people who know what they're talking about.
Like I said, go back and read the research. Look at where it was done. Look at the title of Marvin Minsky's thesis. Look at the research on connectionism from the 40s.
I would wager that every major paper about neuroscience from 1899 to 2020 or so has been thoroughly mined by the AI community for ideas.
Next you'll tell me that Windows Hibernate and Bear® Hibernate™ have nothing in common?