I think you can do the same with the data you store: summarize it down to the same number of tokens, then get an embedding for that summary and save it alongside the original text.
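A minimal sketch of that idea, with stand-in functions where real model calls would go (`summarize` and `embed` here are toy placeholders, not a real API):

```python
def summarize(text: str, max_tokens: int = 64) -> str:
    # Stand-in: a real implementation would call a summarizing LLM here.
    # This toy version just truncates to a word budget.
    return " ".join(text.split()[:max_tokens])

def embed(text: str) -> list[float]:
    # Stand-in: a real implementation would call an embedding model here.
    # This toy version hashes words into a small fixed-size vector.
    vec = [0.0] * 8
    for word in text.split():
        vec[hash(word) % 8] += 1.0
    return vec

def index_document(doc: str) -> dict:
    # Summarize first, embed the summary, and keep the original text
    # next to the embedding so retrieval can return the full document.
    summary = summarize(doc)
    return {"text": doc, "summary": summary, "embedding": embed(summary)}

record = index_document("A long document about vector search and retrieval.")
print(len(record["embedding"]))
```

The key point is that the embedding is computed from the summary but stored with the untouched original, so queries match on the compressed representation while results surface the full text.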
Test! Different combinations of summarizing LLM and embedding model can give different results. But once you decide, you're locked into the summarizer as much as the embedding model.
Not sure if this is what the parent meant, though.