A lot of hopes seem (to me) to have been pinned on the notion that neural nets (as we currently understand them) are the one true algorithm. This notion seems to have been fueled by the significant success of DNNs for certain (highly specific) problems, and by a (shallow) analogy with the human brain. However, it's becoming increasingly clear that this is not the case -- that an artificial neural net is an artificial neural net, no matter how many GPUs you throw at it.
- The lack of good data. Machine learning, and DNNs specifically, perform best with large, labeled datasets. Google has open sourced some, but they (supposedly) keep the vast majority of their training data private.
- Compute resources. Training on these datasets (which can run to terabytes in size) takes a lot of computational power, and only the largest tech companies (e.g. Google, Facebook, Amazon) have the capital to invest in it. Training a neural net can take a solo developer weeks or months, while Google can afford to do it in a day.
There are actually a lot of advances being made in the algorithms, but iteration cycles are long because of these two bottlenecks, and only large tech companies and research institutions have the resources to overcome them. Web development didn't go through a renaissance until the technology became affordable and accessible to startups and hobbyists through reduced server costs (via EC2 and PaaSes like Heroku).
By that analogy, I think we're still in the early days of machine learning and better developer tools and resources could spur more innovation.