Do people assume that? I mean, I'm sure some do, but I don't think I've encountered many people, at least not in the AI safety movement, who actually think it's just a matter of more hardware power. Some think it's possible that that's all that's necessary, but I don't think most would say it's the most likely path to AGI (rather than, as you say, actual breakthroughs happening).