I don't believe that you need an advanced degree to become a competent ML engineer, but the math/stats prerequisites are necessary, and they are often poorly defined. At my college, the only prerequisites for the graduate-level ML course were the freshman-level intro stats class and multivariable calculus. About 50% of the class dropped when they realized they didn't know how to construct Gaussian models or perform convex optimization.
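To make that prerequisite concrete: "constructing a Gaussian model" in its simplest form is a maximum-likelihood fit of a mean and variance, which only takes a few lines. A minimal sketch in Python (the function name and synthetic data are just for illustration):

```python
import math
import random

def fit_gaussian(samples):
    """Maximum-likelihood fit of a 1-D Gaussian.

    The MLE for the mean is the sample mean; the MLE for the
    variance is the biased sample variance (denominator n).
    """
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, math.sqrt(var)

# Draw synthetic data from N(5, 2^2) and recover the parameters.
random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]
mu, sigma = fit_gaussian(data)
print(mu, sigma)  # should be close to 5.0 and 2.0
```

The point of the course prerequisite is less about writing this code and more about knowing *why* these are the maximum-likelihood estimates, which is exactly the stats background that an intro class often doesn't cover.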
The underappreciated parts of AI, in my experience, are more philosophical: the nature of reasoning and of approximating or beating human thought; autonomous agents, non-zero-sum games, and ethical, non-maximizing objective functions. There's a huge overlap with logic (philosophical and mathematical) here, and I haven't seen it really broached at any of the big programs.
I am interested in the points you raise, but I have also realized that I would not find a good environment for them at MIT EECS, for reasons that are fairly obvious from the article's subtext. As such, I have spent the last year or so searching for good research alternatives, and I am slowly finding answers. I am happy to discuss more over email.
Long story short: you are certainly not the only one who thinks that way.
EDIT: added a video link to Mikhail Gromov's actual views for better accuracy.
It must be noted, though, that "approximating human thought" is just one direction of investigation, and not the most important one at that; as interesting as it may be, it makes almost as much sense as trying to make computers resemble human brains. In other words, true AI, when it arrives, will not think like us humans (even if at some level it might pretend to).
> ethical
The AI will be just as "ethical" as a computer or an assault rifle.