I think you have a serious misunderstanding of the capabilities of LLMs: they cannot reason out relationships among documents that easily. They cannot even tell you what they don't know in order to finish a given task (and I'm not talking only about one-shot prompting here; agent frameworks suffer from the same problem).