If you know that your search queries will be actual questions (like in the example you listed), you could use HyDE[0] to generate a hypothetical answer, which will usually have an embedding closer to the RAG chunks you are looking for.
It has the downside that an LLM (rather than just an embedding model) sits in the query path, but it has repeatedly helped me reduce exactly the kind of problem you outlined, where retrieval latches onto individual words.
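To make the idea concrete, here's a toy sketch of the HyDE flow: instead of embedding the raw query, you embed a hypothetical answer and retrieve by similarity to that. The `embed` and `generate_hypothetical_answer` functions are stand-ins (a real system would call an embedding model and an LLM); the bag-of-words "embedding" is just there to make the example self-contained.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: bag-of-words counts (a real system would
    # use a dense embedding model here).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def generate_hypothetical_answer(query: str) -> str:
    # Stand-in for an LLM call, e.g. prompted with
    # "Write a passage that answers this question."
    return "paris is the capital of france and its largest city"

chunks = [
    "paris is the capital and most populous city of france",
    "the capital of germany is berlin",
]

query = "what is the capital of france?"
# HyDE: embed the hypothetical answer, not the query itself.
hyde_vec = embed(generate_hypothetical_answer(query))
best = max(chunks, key=lambda c: cosine(embed(c), hyde_vec))
print(best)  # → "paris is the capital and most populous city of france"
```

The point is that the hypothetical answer is written in "answer language", so it shares more vocabulary (and, with real embeddings, more semantics) with the stored chunks than a terse question does.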
[0]: https://arxiv.org/abs/2212.10496