That doesn't seem particularly hard to add.
I agree the conscious AGI stuff is the tricky part. But then again, maybe it's not. Maybe consciousness isn't as clever as we think it is, and if you have a good enough self-attention model, the AGI part just needs to be symbolic logic on top.
I'm thinking of something that'd pass a Turing test, btw. Not something that's hyper-smart.