Intelligent agents need not mirror human psychology or emotions. The creation of something extremely powerful that doesn’t think like we do is a very real possibility.
In human beings, what we consider normal is actually a fragile and delicate balance. Small changes in the brain's chemistry have outsized effects on emotion, perception, and even sanity.
With AGI, I find it helpful to treat code or architectural changes as analogous, in some respects, to chemical changes. In other words, if all it takes to spin up an AGI is 30,000 lines of code, then I bet rendering the thing psychotic, intentionally or unintentionally, would take just a few lines somewhere.
An agent capable of recursive self-improvement at silicon speeds, one that can easily be rendered psychotic or malevolent even by accident, is not something the general public should have access to. Arguably, no one should.
Even something less than human can have superhuman capability. The paperclip maximizer is the classic example of a tool AI run amok; whether it counts as AGI is up for debate. Is tool AI a path to AGI? I think it is.