I think we need simulation of other agents' outputs as the primary tool for reasoning. That seems to be how intelligence emerged in evolution.
Something like this: choose a desired action > simulate other agents' outputs based on the future state after performing the action > check the reward for the action given those simulated outputs > perform the action or not > update all agent models and relations in the "world" graph model.
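The loop above can be sketched in a few lines. This is a toy, not an implementation: `AgentModel`, the score-table "policy", and the clash-based reward are all my own illustrative stand-ins for the NNs and reward the post has in mind.

```python
class AgentModel:
    """Our model of another agent: predicts its action in a given situation.
    A fixed score table stands in for the learned NN mentioned above."""
    def __init__(self, name, preferences):
        self.name = name
        self.preferences = preferences  # action -> score

    def predict_action(self, actions):
        return max(actions, key=lambda a: self.preferences.get(a, 0.0))

def expected_reward(our_action, others_actions):
    """Toy reward: lose a point for each agent predicted to pick our action."""
    return 1.0 - sum(1.0 for a in others_actions.values() if a == our_action)

def decide(candidate_actions, agent_models, threshold=0.0):
    """Pick the best action after simulating others; act only above threshold."""
    best_action, best_reward = None, float("-inf")
    for action in candidate_actions:
        # Simulate every other agent's output for this hypothetical action.
        others = {m.name: m.predict_action(candidate_actions)
                  for m in agent_models}
        r = expected_reward(action, others)
        if r > best_reward:
            best_action, best_reward = action, r
    # "Perform action or not": abstain when nothing clears the threshold.
    return best_action if best_reward > threshold else None
```

For example, if two modeled agents both prefer "forage", `decide(["forage", "rest"], models)` picks "rest" to avoid the predicted clash, and abstains entirely when "forage" is the only option.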
I think the world could be modeled as a simple graph and each agent as an NN.
Then, based on the graph, we could conduct symbolic reasoning and very fast learning (by updating edges).
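A minimal sketch of such a world graph, assuming (my assumption, not stated above) that nodes are entities/agents and weighted edges are relations. Updating one edge is a single O(1) write, which is the "very fast learning" part; a reachability query stands in for the simplest kind of symbolic reasoning over the graph.

```python
from collections import defaultdict

class WorldGraph:
    """Toy world model: weighted directed edges between named entities."""
    def __init__(self):
        self.edges = defaultdict(dict)  # src -> {dst: weight}

    def update_edge(self, src, dst, weight):
        """Fast learning step: overwrite one relation in place."""
        self.edges[src][dst] = weight

    def neighbors(self, src, min_weight=0.0):
        return {d: w for d, w in self.edges[src].items() if w >= min_weight}

    def reachable(self, src, dst, min_weight=0.0):
        """Simple symbolic query: is dst reachable via strong-enough edges?"""
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.neighbors(node, min_weight))
        return False
```

E.g. after `update_edge("wolf", "sheep", 0.9)` and `update_edge("sheep", "grass", 0.8)`, the query `reachable("wolf", "grass")` chains the two relations symbolically without any retraining.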
I think these models also need a good physical simulator and a good understanding of competitiveness.
Is anyone aware of attempts to build AGI along these lines?
Humans have natural language as a big competitive advantage: it is an easy way to compress parts of the world graph and pass them to others, though an ambiguous one. I think artificial machines could do this more efficiently. Another advantage is knowledge storage, which is also easy for machines.
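To illustrate the "more efficient than language" point, here is a hypothetical sketch in which a machine's "utterance" is just a serialized subgraph that the listener merges losslessly into its own graph. Everything here (the dict-of-dicts graph, `extract_subgraph`, `transmit`, `merge`) is my illustrative naming, under the assumption that the world graph is a plain weighted adjacency mapping.

```python
import json

def extract_subgraph(graph, nodes):
    """Compress: keep only the edges among the chosen nodes."""
    return {s: {d: w for d, w in dsts.items() if d in nodes}
            for s, dsts in graph.items() if s in nodes}

def transmit(subgraph):
    """The 'message': an unambiguous serialization of the subgraph."""
    return json.dumps(subgraph)

def merge(listener_graph, message):
    """The listener folds the received relations into its own graph."""
    for s, dsts in json.loads(message).items():
        listener_graph.setdefault(s, {}).update(dsts)
    return listener_graph
```

Unlike a natural-language sentence, the message here has exactly one interpretation: the listener ends up with the same edges the speaker selected, with no ambiguity to resolve.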
If we can build insect-level AI, building human-level AI should be easy.