Broadly, there are two approaches to sequence prediction.
The traditional style (linear regression, ARIMA, RNNs, etc.), where you directly predict the next element of the sequence. The output is on the same level of abstraction as the values the model works with internally.
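As a minimal sketch of this direct style, here is a toy AR(1) model (an assumption for illustration, not tied to any specific library): fit a single coefficient by least squares on consecutive pairs, then emit the next value on the same numeric level as the inputs.

```python
# Toy direct prediction: fit x[t+1] ~ a * x[t] by least squares, pure stdlib.
seq = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]

pairs = list(zip(seq[:-1], seq[1:]))
# Closed-form least-squares solution for a single coefficient.
a = sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# The prediction is a raw number, on the same level as the model's internals.
next_value = a * seq[-1]
print(next_value)  # 64.0 for this geometric sequence
```

The point is that the model's output is just another value like its inputs; there is no intermediate symbolic description of the sequence.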
There is also the newer style, where you predict symbols instead of predicting the values directly. You can predict symbols representing numbers, or you can predict a symbolic formula that can then be used to extrapolate the values exactly. This is how humans do it.
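A minimal sketch of the symbolic style, under the assumption of a small hand-picked candidate set (real systems would search a much larger formula space): instead of regressing the next value, search for a formula that reproduces the observed values, then use it to extrapolate exactly.

```python
# Toy symbolic prediction: pick the formula that fits, then extrapolate with it.
candidates = {
    "n":   lambda n: n,
    "2n":  lambda n: 2 * n,
    "n^2": lambda n: n * n,
    "2^n": lambda n: 2 ** n,
}

observed = [1, 4, 9, 16]  # values at n = 1..4

def find_formula(values):
    """Return the name of the first candidate matching every observed value."""
    for name, f in candidates.items():
        if all(f(n) == v for n, v in enumerate(values, start=1)):
            return name
    return None

formula = find_formula(observed)      # "n^2"
prediction = candidates[formula](5)   # exact extrapolation: 25
print(formula, prediction)
```

Once the right formula (symbol) is found, extrapolation is perfect at any horizon, unlike the numeric model, which only learns a local one-step map.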
And my point is that when you look at the embeddings of those symbols, they do have interpretable structure that the model can use to generalize. And experiments seem to suggest that DNN models are indeed generalizing this way.