AI research has mostly progressed when there was enough processing power to stop relying on the old style of hand-crafted hacks, not because those methods generalized.
AlphaZero vs. Stockfish wasn't an outgrowth of existing methods; they basically threw the old approach away and started over.
Object recognition, LLMs, etc. all involved throwing previously unimaginable levels of data and compute at a problem that "suddenly" worked. I'm not saying the people at OpenAI aren't clever, just that it wouldn't have worked in 2000.