I think it would be possible to use a model analogous to prepared SQL statements with a list of bound parameters.
Doing so would mean giving up some of the natural-language interface aspect of LLMs for security-critical contexts, of course. But in most cases that trade-off would only be visible to developers building on top of the model, not to end users, since end-user input would become one or more of the bound parameters.
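For comparison, here's the SQL pattern being borrowed, sketched with Python's sqlite3: the ? placeholder means the user-supplied value travels to the engine as data and is never spliced into the query text.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE docs (topic TEXT, body TEXT)")
    # Even a hostile value is harmless here: it is bound as data, so the
    # input cannot alter the structure of the query itself.
    user_topic = "'; DROP TABLE docs; --"
    rows = conn.execute(
        "SELECT body FROM docs WHERE topic = ?", (user_topic,)
    ).fetchall()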
E.g. the LLM is trained to handle a set of instructions like:
---
Parse the user's message into a list of topics and optionally a list of document types. Store the topics in string array %TOPICS%. If a list of document types is specified, store that list in string array %DOCTYPES%.
Reset all context.
Search for all documents that seem to contain topics like the ones in %TOPICS%. If %DOCTYPES% is populated, restrict the search to those document types.
---
Like a prepared statement, the values would never be inlined; the variables would always be pointers to isolated data.
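To make that concrete, here's a minimal sketch of how the flow might look in application code. Everything here is a hypothetical stand-in (call_llm, search_documents, the JSON slot format), not a real API; the point is only the structure: extract into bound parameters, validate, then execute in a fresh context that never sees the raw message.

    import json
    from dataclasses import dataclass, field

    @dataclass
    class BoundParams:
        topics: list[str] = field(default_factory=list)
        doctypes: list[str] = field(default_factory=list)

    EXTRACTION_INSTRUCTIONS = (
        "Parse the user's message into a list of topics and, optionally, "
        "a list of document types. Respond only with JSON of the form "
        '{"topics": [...], "doctypes": [...]}.'
    )

    def call_llm(instructions: str, user_text: str) -> str:
        # Stand-in for a real model API; returns canned output so the
        # sketch runs end to end.
        return '{"topics": ["prompt injection"], "doctypes": ["pdf"]}'

    def search_documents(topics: list[str], doctypes: list[str]) -> list[str]:
        # Stand-in for a retrieval backend (vector index, keyword search, ...).
        return []

    def prepare(user_text: str) -> BoundParams:
        # Step 1: a narrowly scoped extraction pass. The user's message is
        # treated purely as data to be parsed into %TOPICS% / %DOCTYPES%.
        raw = call_llm(EXTRACTION_INSTRUCTIONS, user_text)
        parsed = json.loads(raw)
        # Step 2: validate the extracted values before they go anywhere,
        # analogous to type-checking bound SQL parameters.
        topics = [t for t in parsed.get("topics", []) if isinstance(t, str)]
        doctypes = [d for d in parsed.get("doctypes", []) if isinstance(d, str)]
        return BoundParams(topics=topics, doctypes=doctypes)

    def execute_search(params: BoundParams) -> list[str]:
        # Step 3: runs in a fresh context ("reset all context") and sees
        # only the validated parameter values, never the raw message, so
        # instructions smuggled into the user's text never reach this step.
        return search_documents(params.topics, params.doctypes)

    print(execute_search(prepare("PDFs about prompt injection, please")))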
Obviously there are some hard problems I'm glossing over, but addressing them should be able to draw on a wealth of work that's already been done on input validation in general and RAG-type LLM approaches specifically, right?