Technically correct, but the workarounds AI search engines use for grounding results could be a close enough approximation. It might not be fully accurate, but it could be better than nothing.
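To make the idea concrete, here's a minimal sketch of that grounding-by-retrieval approximation: match a model's claim against a source corpus and cite the best hit, or admit nothing matches. This is a toy, not any search engine's actual method; the bag-of-words similarity stands in for real embeddings, and the corpus, threshold, and URLs are made up for illustration.

```python
# Toy "grounding via retrieval": attribute a claim to the closest source
# document, or return None when nothing scores above a threshold.
# Real systems use learned embeddings and web-scale indexes; this uses
# stdlib-only bag-of-words cosine similarity as a stand-in.
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Crude bag-of-words vector; a real system would embed the text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ground(claim: str, corpus: dict[str, str], threshold: float = 0.2):
    """Return (source_url, score) for the best match, or None if no
    source clears the threshold -- the 'better than nothing' part."""
    v = bow(claim)
    best = max(corpus, key=lambda url: cosine(v, bow(corpus[url])))
    score = cosine(v, bow(corpus[best]))
    return (best, score) if score >= threshold else None

# Illustrative corpus; the URLs are placeholders.
corpus = {
    "https://example.com/a": "the moon orbits the earth every 27 days",
    "https://example.com/b": "python was created by guido van rossum",
}
print(ground("how long does the moon take to orbit the earth", corpus))
```

The interesting knob is the threshold: set it too low and you attribute claims to sources that don't actually support them, which is exactly the accuracy caveat above.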
Also, Anthropic is doing interesting work in interpretability; who knows what could come out of that.
And it could be snake oil, but this startup claims to be able to attribute AI outputs to ingested content: https://prorata.ai/
Not every LLM implementation can use RAG against a Google-sized knowledge base. This proposal essentially says LLMs have to be paired with Google to be legit.