Yes! The million-dollar question is how much of that expressivity is actually required.
In many papers, the "baseline" logistic regression model is very stripped down — a plain y ~ logit(·) on the raw features — while the neural network has had its expressiveness optimized in various ways. People aren't just comparing against a 3-layer feedforward network; there's data augmentation and pre-training, architecture search, and special learning schemes.
My claim is that if you want to argue that a problem needs the expressivity that (only) a neural network provides, you ought to devote a great deal of effort to the logistic regression model too. Make it a steelman rather than a strawman, if you will.
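Concretely, a "steelmanned" baseline means spending effort on features and regularization, not just fitting the raw model. Here's a minimal sketch of the difference, assuming scikit-learn and a generic tabular dataset; the particular transforms (interaction terms, cross-validated penalty strength) are just illustrative choices, not a prescription.

```python
# Strawman vs. steelman logistic regression baselines (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Synthetic stand-in for whatever tabular problem the paper studies.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Strawman: raw features, a single fixed regularization strength.
strawman = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=[1.0], cv=3, max_iter=1000),
)

# Steelman: pairwise interaction features plus a cross-validated
# search over regularization strengths — i.e., real effort spent
# on the baseline's expressivity, not just the network's.
steelman = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    LogisticRegressionCV(Cs=10, cv=5, max_iter=5000),
)

for name, model in [("strawman", strawman), ("steelman", steelman)]:
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))
```

The point isn't that interactions always win — it's that only after this kind of effort can you attribute the remaining performance gap to expressivity the linear model can't provide.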