We have internal search. Finding things isn't the problem. It's contextualizing massive amounts of text and making it queryable with natural language.
The question I was trying to answer was: "What is feature XYZ? How does it work in hardware and software? How is it exposed in our ABC software, and where do the hooks exist to interface with XYZ?"
The answers exist across maybe 30 different Confluence pages, plus source code, plus source code documentation, plus some PDFs. If all of that had been indexed for an LLM, it would have been trivial to get the answer I spent hours assembling manually.
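To make the idea concrete, here is a minimal sketch of what "indexing" that scattered material could look like: chunk each source into a searchable index, rank chunks against a natural-language question, and hand the top hits to an LLM as context. The document names and contents below are hypothetical stand-ins, and the ranking uses a simple bag-of-words cosine score rather than real embeddings, purely for illustration.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split on non-alphanumeric characters."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical corpus standing in for Confluence pages, code docs, and PDFs.
docs = {
    "confluence/xyz-overview": "Feature XYZ batches hardware interrupts for the driver",
    "src/abc/hooks.md": "ABC exposes XYZ through the xyz_register_hook API",
    "pdfs/hw-spec": "The XYZ block in hardware exposes three control registers",
}

# Build the index once; in a real system this would use embeddings.
index = {name: Counter(tokenize(body)) for name, body in docs.items()}

def retrieve(query, k=2):
    """Return the k document names most similar to the query."""
    q = Counter(tokenize(query))
    return sorted(index, key=lambda n: cosine(q, index[n]), reverse=True)[:k]

# The retrieved chunks would then be passed to an LLM as context.
print(retrieve("where are the hooks to interface with XYZ in ABC?"))
```

In a production setup the bag-of-words scoring would be replaced by an embedding model and a vector store, but the shape of the pipeline (ingest, index, retrieve, then answer with an LLM) is the same.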