You can attach ~40k tokens of context directly in the Gemini, ChatGPT and Claude web interfaces, afaik. If someone is using an LLM as a tool to actually be helpful in an area they're already professionals in, pulling in good books, research, etc. as attachments shouldn't be an issue.
But yes, the default mode of LLMs is usually a WikiHow- or content-farm-style answer. This is also a problem with Google: the content you get back from generic searches is often riddled with inaccuracies and sweeping generalizations.
Not being able (or bothered) to come up with relevant context, and instead rolling the dice on the LLM managing it out of the box, is definitely a serious issue. I really think that is where the discussion should be: focused more on how people use these tools. Just like you can tell quite a bit about someone's expertise from the specific way they interface with Google (or any information on the internet) while they work.