All the AI founders (e.g., Dario Amodei) seem to believe that we're nowhere near the end of performance improvements in LLMs as they're trained on more data (i.e., LLM scaling laws) - at least that's what they say publicly, but they obviously have skin in the game. Curious what knowledgeable people who aren't incentivized to make optimistic public statements think?
What I really want to know is: assuming capital / compute is not a constraint, will we continue to see order-of-magnitude improvements in LLMs, or is there some kind of "technological" limit you think exists?