I think the magic of LLMs, if any, is that they can finally make these kinds of "symbolic AI" systems work.
AutoGPT / AgentGPT / TeenageGPT will inevitably start borrowing ideas from Marvin Minsky & other symbolic AI / cognitive science researchers.
The space of possible configurations is much larger than that of simple connections; it is arguably larger than we can easily imagine. It is already hard to comprehend all possible inputs to a single GPT-4 instance, and the space of LLM multi-agent systems is essentially the space of *graph-theoretic graphs* of GPT-4 instances, practically in four dimensions (at least once one starts thinking strictly about the signals flowing between them over time).
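To make the "graph of GPT-4 instances" picture a bit more concrete, here is a minimal Python sketch: agents are nodes, directed edges are message channels, and discrete rounds stand in for the time dimension. The roles (planner/worker/critic), the particular edge set, and the stubbed call_llm function are purely illustrative placeholders, not any specific framework's API.

```python
# Minimal sketch: a multi-agent LLM system as a directed graph.
# Nodes = agents, edges = message channels, rounds = the time axis.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


def call_llm(role: str, messages: List[str]) -> str:
    # Stand-in for a real GPT-4 call; swap in an actual API client here.
    return f"[{role} response to {len(messages)} message(s)]"


@dataclass
class Agent:
    role: str
    inbox: List[str] = field(default_factory=list)

    def step(self) -> str:
        # Consume the inbox and produce one outgoing message.
        reply = call_llm(self.role, self.inbox)
        self.inbox.clear()
        return reply


def run(agents: Dict[str, Agent], edges: List[Tuple[str, str]], rounds: int) -> None:
    # Each round, every agent emits a message; edges route it to successors.
    for t in range(rounds):
        outputs = {name: agent.step() for name, agent in agents.items()}
        for src, dst in edges:
            agents[dst].inbox.append(outputs[src])
        print(f"round {t}: {outputs}")


if __name__ == "__main__":
    # One configuration out of the combinatorially many alluded to above:
    # a simple planner -> worker -> critic -> planner cycle.
    agents = {name: Agent(role=name) for name in ("planner", "worker", "critic")}
    edges = [("planner", "worker"), ("worker", "critic"), ("critic", "planner")]
    run(agents, edges, rounds=2)
```

Every distinct choice of node count, edge set, and routing schedule is a different point in that configuration space, which is why it blows up so quickly compared to a single prompt-and-response loop.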