A few years ago everyone was doing RAG: embeddings and vector databases layered on top of models. Now models with access to local markdown and memory files (like OpenClaw) seem to be readily outperforming those setups using nothing more than grep and simple UNIX tools.
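To be concrete about what I mean by the grep-over-markdown pattern, here's a rough sketch (the `notes/` folder and function name are just illustrative, not any particular tool's API): the agent scans plain markdown files for keyword hits and pastes the surrounding lines into context, with no embedding index to build or keep in sync.

```python
import re
from pathlib import Path

def grep_notes(query: str, notes_dir: str = "notes", context_lines: int = 2) -> list[str]:
    """Naive keyword retrieval over a folder of markdown files,
    roughly what an agent does when it shells out to grep
    instead of querying a vector store."""
    pattern = re.compile(re.escape(query), re.IGNORECASE)
    hits = []
    for path in Path(notes_dir).rglob("*.md"):
        lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
        for i, line in enumerate(lines):
            if pattern.search(line):
                start, end = max(0, i - context_lines), i + context_lines + 1
                hits.append(f"{path}:{i + 1}\n" + "\n".join(lines[start:end]))
    return hits

# Raw snippets go straight into the model's context window.
print("\n\n".join(grep_notes("project deadlines")[:5]))
```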
Is this an inherent issue with scaling LLMs? Does Obsidian really work that much better for most people? Is anyone finding anything that actually outperforms markdown?
At this point the main bottleneck to my adoption seems to be memory and persistent long-term context, not the quality or reliability of the models.
I'm curious if there are any technical or scaling metrics we could use to forecast where this will end up going.