I'm not even talking about self-awareness. I'd be happy to raise the bar to that level when (if ever) we have mouse-level AI.
However, the bar is far below that at the moment, and what clears it is masquerading as "intelligence".
Current machine-learning (i.e., merely statistical) approaches to AI, which don't explicitly aim to dynamically model environments, goals, behaviour, etc., don't meet even an extremely minimal notion of intelligence.
What we have at the moment are "smart rocks". Electrical current "tumbles down" a "digital mountain" and we call its path "smart" because it has useful outcomes. Equally, a rock rolling down a hill finds an optimal path -- it ain't "smart".
Look at what the rock does when you start adapting its environment: e.g., create a little dip in the mountainside, and it gets trapped. A mouse doesn't get trapped in a dip; it continues to explore -- why?
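The rock-in-a-dip picture maps directly onto an optimizer stuck in a local minimum. Here's a minimal sketch (the landscape function, step size, and restart scheme are my own illustrative choices, not anyone's actual system): pure gradient descent released on one slope settles into the shallow dip, while even a crude "explorer" that tries multiple release points finds the deeper valley.

```python
import random

def f(x):
    # the "mountain": two valleys, the left one deeper (global min near x = -1.3,
    # a shallow dip near x = 1.13)
    return x**4 - 3*x**2 + x

def grad(x):
    return 4*x**3 - 6*x + 1

def roll(x, steps=500, lr=0.01):
    # the "rock": pure gradient descent, no exploration -- it only ever goes downhill
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Rock released on the right slope: it settles in the shallow dip and stays there.
trapped = roll(2.0)

# A crude "explorer": several random release points, keep the best outcome.
random.seed(0)
explored = min((roll(random.uniform(-2, 2)) for _ in range(10)), key=f)

print(trapped, explored)   # trapped sits near 1.13; explored finds the deeper valley near -1.3
```

Even this trivial restart scheme is an *externally imposed* fix; the point of the metaphor is that the rock itself has no exploratory behaviour at all.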
Because animal behaviour is inherently exploratory of its environment. A mouse doesn't "solve" a maze, it intelligently navigates it -- so that when unexpected change occurs, it isn't "broken".
At the moment, all AI systems radically break when such changes occur -- because they are statistically trained on mere data. They aren't dynamically building models. They aren't in an environment. They're just rocks rolling down a hill.