Above-average intelligence isn't a high bar. Intelligence is nowhere near sufficient for high quality on most tasks, as the current generation of AI models shows. People seem to be looking for signs of wild superintelligence, like being a polymath at the peak of human performance.
When AI surprises any one of us, it's a good idea to consider whether 'better than me at X' is the same as 'better than the average human at X', or even 'good at X'.
An average human still has LLMs beat there, which might be distorting people's perceptions. But task length horizons are going up, so that moat isn't guaranteed to hold at all.
Imagine the conversations this guy must have with people IRL lol
Stating that easygoing people are not also intelligent conversationalists sounds like a _you_ problem, dripping with ignorance.
Maybe get off the socials for a bit or something; you might need a change of perspective.
I’m of the opinion that AGI is an anthropomorphization of digital intelligence.
The irony is that as LLMs improve, they will both become better at “pretending” to be human, and even more alien in the way they work. This will become even more true once we allow LLMs to train themselves.
If that’s the case, then I don’t think human criteria are really applicable here, except in evaluating how it relates to us. Perhaps your list is applicable to LLMs relative to humans, but many think we need new metrics for intelligence.
We have a bunch of tools for specific tasks. Again, that doesn't sound like general intelligence.
1. Learn/improve yourself with each action you take
2. Create better editions/versions of yourself
3. Solve problems in areas you were not trained for, simply by trial and error, where you yourself decide whether what you are doing is correct or wrong
Simple: go through onboarding training, chat with your new colleagues, start producing value.
Are you serious or being sarcastic? Do you really consider this empty kind of sycophancy to be empathy?
You've got about 200 ms of round-trip delay across your nervous system. Some modern AI robotics systems already beat that, from sensor data to actuator action.
What do LLMs have to do with this? Have you ever seen a machine beat a speed cuber? So we’ve had “AI” all along and never knew it?!
Oh right, comparing meatspace messaging speeds to copper or fiber doesn’t make sense. Good point.
Do you know actual people? Even literal sociopaths are a bit better at empathy than ChatGPT (I know because I have met a couple).
And as for conversation? Are you serious? ChatGPT does not converse in a meaningful sense at all.