The "Large" part of LLMs is probably done. We've gotten as far as we can with that style of model, and the next innovation will be in smaller, more targeted models.
> As costs have skyrocketed while benefits have leveled off, the economics of scale have turned against ever-larger models. Progress will instead come from improving model architectures, enhancing data efficiency, and advancing algorithmic techniques beyond copy-paste scale. The era of unlimited data, computing and model size that remade AI over the past decade is finally drawing to a close. [0]
> Altman, who was interviewed over Zoom at the Imagination in Action event at MIT yesterday, believes we are approaching the limits of LLM size for size’s sake. “I think we’re at the end of the era where it’s gonna be these giant models, and we’ll make them better in other ways,” Altman said. [1]
[0] https://venturebeat.com/ai/openai-chief-says-age-of-giant-ai...
[1] https://techcrunch.com/2023/04/14/sam-altman-size-of-llms-wo...