I heard this echoed many times before, when pawing through the library stacks in my uni days, looking at the littered corpses of past AI trends. I believe we will eventually get to strong AGI. But after reading about or witnessing in person the hype machine sprout up and wither around symbolic programming, semantic programming, neural nets, fourth-generation languages, expert systems, perceptrons, the Connection Machine, etc., I'm gun-shy around any proclamation that achieving strong AGI is "just a matter of...<insert-single-solution-space>". The results so far indicate that pure cognitive processing is very amenable to the toolbox AI research has built up to date, hence the breakthroughs in game playing.
Manipulating and interacting with the material world and with humans, however, has produced patchier results; I suspect we have far more work and research ahead of us than we currently realize. When we do get initial results, like the laundry-folding machines, they're single-purpose and uneconomic for mainstream middle-class adoption (to say nothing of working-class), and they often come with long lists of caveats, like Tesla Autopilot. Instead of all these debates over whether or not we will get strong AGI, I'd prefer everyone assume it will happen, and when we don't get the incremental result we were anticipating, say: hmm, that's interesting, I wonder why...
I want to see the hype tamped down to the point where we can steadily chip away at the overall problem space, accelerate AI research results, and organically reach strong, economically available AGI sooner, rather than continue the disappointing two-steps-forward-one-step-back pattern our industry has historically taken in this field. The hype says we're a sprint away from unlocking all the benefits promised by strong AGI, when we'd be better served accepting the organic incremental benefits as they occur during our acknowledged marathon, and using those incremental benefits as stepping stones to greater understanding.