> The improvements in programming are largely due to the adoption of “agentic” architectures.
Yes, I agree. But it's not just the cradles, it's cradles + training on traces produced with those cradles. You can test this very easily by running old models with new cradles. They don't perform well at all. (One of the first things I did when guidance, a guided-generation framework, launched ~2 years ago was to test code → compile → edit loops. There were signs of it working, but nothing compared to what we see today. That had to be trained into the models.)
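For anyone curious what I mean by that loop, here's a minimal sketch. `model_fix` is a hypothetical stand-in for the actual LLM call (guidance's real API is different); the point is just the shape: compile, feed the error back, retry.

```python
import ast

def model_fix(code: str, error: str) -> str:
    # Stand-in for the LLM edit step. A real loop would prompt the model
    # with the code and the compiler error; here we just patch a known
    # typo so the demo is self-contained.
    return code.replace("retrun", "return")

def compile_edit_loop(code: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        try:
            ast.parse(code)   # the "compile" step (syntax check only)
            return code       # parses cleanly, we're done
        except SyntaxError as e:
            code = model_fix(code, str(e))  # the "edit" step
    raise RuntimeError("model failed to produce parseable code")

buggy = "def add(a, b):\n    retrun a + b\n"
fixed = compile_edit_loop(buggy)
```

Old models dropped out of this loop constantly (ignoring the error, rewriting unrelated code); current ones stay on task, which is what makes me think the behavior was trained in.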
> will not come from better LLM models (which have not really improved much), but from better integration of more advanced compilers.
Strong disagree. They have to work together. This is basically why RL is gaining a lot of traction in this space.
Also disagree on LLMs not improving much. Whatever they did with Gemini 2.5 feels like the GPT-3 → GPT-4 jump to me. The context updates are huge: this is the first model that can take in 100k tokens and still work well after that. They're doing something right to support such large contexts with such good performance. I'd be extremely surprised if Gemini 2.5 is just Gemini 1 plus more data. There have to be architecture changes and improvements somewhere in there.