Shallow feed-forward networks are "universal function approximators" [0] when the number of hidden neurons is finite but unbounded. The catch is that the width of that hidden layer can grow exponentially in the depth of the deep network you might wish to approximate [1].
The statement that "[i]t's just that deep neural networks are a lot more practical to train" (emphasis mine) sounds somewhat reductive: it's not merely that depth is a nice trick or hack for training speed, but that depth is what made the success of deep networks over the past decade possible at all. We live in a world with bounded computing resources and bounded training data. You cannot subsume all deep networks into shallow networks, and shallow networks into SVMs, in the real world. That distinction matters a great deal.
And what's more, depth extracts a hierarchy of interpretable features at multiple scales [2], and a decision surface embedded within that feature space, rather than a brittle decision surface in an extremely high-dimensional space with little semantic meaning. One of these approaches generalizes far better to unseen data than the other.
[0] https://en.wikipedia.org/wiki/Universal_approximation_theore... [1] https://pdfs.semanticscholar.org/f594/f693903e1507c33670b896... [2] https://distill.pub/2017/feature-visualization/
This is important. Flippantly put, discarding learnability and speed of convergence, you can get the power of any neural network with the following algorithm:
1. Randomly generate a sufficiently wide bit pattern.
2. Interpret it as a program and run it on the test set.
3. Discard the result and repeat until the desired accuracy is reached.
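The joke above can be made concrete with a toy sketch. Everything here is assumed for illustration: the "program" is just a random truth table, and the "test set" is the four XOR input/label pairs, so step 2's "interpret it as a program" is trivialized on purpose:

```python
import random

# Hypothetical "test set": the four (input, label) pairs of XOR.
test_set = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def random_program():
    # Step 1: randomly generate a sufficiently wide bit pattern.
    bits = [random.randint(0, 1) for _ in range(4)]
    # Step 2: "interpret" it as a program -- here, a truth table
    # indexed by the two input bits.
    return lambda x: bits[x[0] * 2 + x[1]]

def guess_and_check(target_accuracy=1.0, seed=0):
    random.seed(seed)
    attempts = 0
    while True:
        attempts += 1
        prog = random_program()
        correct = sum(prog(x) == y for x, y in test_set)
        # Step 3: discard and retry until the desired accuracy is reached.
        if correct / len(test_set) >= target_accuracy:
            return prog, attempts

prog, attempts = guess_and_check()
print(attempts)
```

It terminates (in expectation, after about 16 tries here), which is exactly the point: "can represent the function" and "can feasibly find the function" are very different claims.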