Now, when predicting time series, an issue is that most models (ARIMA, GARCH, etc.) are short-memory processes. When you look at the full-series predictions of LSTMs, you observe the same thing.
So in terms of Time Series, Machine Learning is currently in the mid to late 80's compared to Financial Econometrics.
So if you have a CS background, you should probably take a look at fractionally integrated GARCH models and incorporate that into the LSTM logic. If the statistical issues are the same, then this may give you that hot new paper.
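To make the short- vs long-memory distinction concrete, here's a minimal sketch (my own illustration, not from any of the papers discussed) of the fractional differencing operator (1 - B)^d that fractionally integrated models are built on. Its weights decay hyperbolically, like k^(-1-d), while AR/GARCH-style weights decay geometrically, which is exactly why the latter forget the distant past so fast:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Weights of the fractional differencing operator (1 - B)^d,
    via the recursion w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return w

w = frac_diff_weights(d=0.4, n=200)   # long memory: |w_k| ~ k^(-1.4)
ar = 0.9 ** np.arange(200)            # short memory: weights decay geometrically

# At lag 100 the fractional weights still dwarf the AR(1) weights,
# so distant observations keep influencing the present.
print(abs(w[100]) > ar[100])
```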
The Phineas Gage of applied quantitative econ is demand estimation. You typically want to know the elasticity of quantity sold with respect to price, so as to inform pricing policies. But the problem is that causality is cloudy -- low prices cause a decrease in supply -- so you never know what you're looking at.
People with a decent training in econometrics know how to treat this problem.
I'm pretty sure orgs like Amazon tried to do naive demand estimation, fell flat on their faces, and copped to having to hire people who had thought about the underlying conceptual issues before.
On one hand, it's almost a tautology that specific models should be better than general models. But I worked on some 2D time series classification with a statistician, and afterwards, for kicks, I replaced the entire thing with a CNN+LSTM, and it worked just as well as the whole complicated model he had come up with.
On the other hand, the "more ignorant CS approach" has produced impressive achievements in language tasks (e.g., translation), visual tasks (e.g., image generation), game playing tasks (e.g., Go), agent-in-virtual-world tasks (e.g., DOTA), and robot-in-real-world tasks (e.g., self-driving cars).
Academic statistics departments often seem to be "20 years behind" on all those fronts...
Also, with neural networks it's very easy and natural to build complex models where different "layers" perform different tasks. So an LSTM can very easily be extended to work bidirectionally (processing the sequence from both the beginning and the end), adding things like attention, using word vectors before the recurrent network, or just using a character model.
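The bidirectional idea in particular is simple enough to sketch in a few lines of numpy (a toy tanh RNN of my own, not a real LSTM implementation): run a recurrent cell over the sequence forwards, run a second cell over it backwards, and concatenate the hidden states so each step sees both past and future context. Real libraries (e.g. Keras's Bidirectional wrapper) do essentially this with trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(x, W, U, h0):
    """Toy tanh RNN: h_t = tanh(W @ x_t + U @ h_{t-1})."""
    h, hs = h0, []
    for x_t in x:
        h = np.tanh(W @ x_t + U @ h)
        hs.append(h)
    return np.stack(hs)

T, d_in, d_h = 12, 8, 16
x = rng.normal(size=(T, d_in))
Wf, Uf = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wb, Ub = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
h0 = np.zeros(d_h)

fwd = rnn_pass(x, Wf, Uf, h0)              # reads the sequence left to right
bwd = rnn_pass(x[::-1], Wb, Ub, h0)[::-1]  # reads it right to left, re-aligned
out = np.concatenate([fwd, bwd], axis=-1)  # each step sees past *and* future context
print(out.shape)  # (12, 32)
```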
What are the statistical equivalents for this? Because most of the papers on this topic seem to come from Computer Science. Take a look at the epilogue of [1] for a thorough discussion on where statistical theory needs to catch up.
[1] Computer Age Statistical Inference - Efron, Hastie.
That would be nonparametric statistics.
What you just stated is just a pipeline. You can split the data, train on it, and automate it with tree ensembles that aren't boosted -- that is, if you're talking about doing it in parallel.
If you're just saying split the data and process it in batches over different time intervals, you can do that with nonparametric Bayesian methods.
The CS contribution of creating deep learning, and of it being the most accurate algorithm for certain data domains, is pretty nice. But again, statistics cares about a lot more than prediction.
Basically, forecasting implies you have a good handle on all properties of the relevant distributions, which in my opinion is a lost cause in social sciences (think external validity).
Instead, econometrics nowadays is mainly concerned with the identification of causal effects using non-parametric or semi-parametric approaches. Basically, you can believably estimate the directionality of some mechanism, but you probably never have the data or model to make a good out-of-sample prediction. You can try, but it's basically implied that approaches that consistently estimate some marginal of a conditional expectation will NOT be that useful for predicting a whole stochastic process.
Also, using training and test sets kind of presupposes that your process is very stable. Otherwise the "test" set is not really a good test, is it? Again, in the social sciences these things are hard to argue. You usually wanna generalize some mechanism from this industry to that industry, not find a good predictor in the same industry. Test datasets still run on the same data!
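The instability point is easy to see with a toy regime shift (entirely made-up numbers): fit a line on data from one regime, then "test" on data whose intercept has shifted. The held-out score looks nothing like the in-sample one, so the train/test split only reassured us about a world that no longer exists:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=400)

# Training regime: y = 2x + noise.  "Test" regime: the intercept has shifted.
y_train = 2 * x[:200] + rng.normal(scale=0.5, size=200)
y_test = 2 * x[200:] + 5.0 + rng.normal(scale=0.5, size=200)

slope, intercept = np.polyfit(x[:200], y_train, 1)
mse_train = np.mean((slope * x[:200] + intercept - y_train) ** 2)
mse_test = np.mean((slope * x[200:] + intercept - y_test) ** 2)

# In-sample error looks great; the shifted-regime "test" error blows up.
print(mse_train, mse_test)
```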
ML is successful because in practice we DO care about prediction. This allows us to do all the cool things. Because econometrics/stats is so conservative and comes from a causal standpoint, people are just really shy about developing a model for prediction (not true everywhere, but that's the gist). For ML, the primary question is basically how well the thing predicts. When I first tried scikit-learn way back, I was so confused it didn't offer standard errors or some other statistical measure. But then I saw how ingrained the in-sample/out-of-sample process is, and I thought, well -- that's really useful.
tl;dr: Stats and ML have different objectives, but there is a lot to learn in stats for ML
"GARCH does not work out of sample. It is a good story, but I was unable to use it in predicting squared deviations or mean deviations"
I haven't found it in Rob J Hyndman's forecasting tutorial either.
How does it fare in the Makridakis competitions?
Don't forget that most econometrics models are also concerned with identification and causality, less with prediction.
I.e. places with defined risk, where you will know you're wrong if it goes against you by x% while you expect a y% gain if you're right, AND the y% you make when right outweighs the x% you lose across all the times you're wrong.
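That condition is just positive expected value per trade. A one-liner (my own illustration with made-up numbers, not a trading recommendation) makes it explicit: with win probability p, gain y, and loss x, you need p*y - (1-p)*x > 0:

```python
def trade_ev(p_win, gain, loss):
    """Expected value per trade: win `gain` with prob p_win, lose `loss` otherwise."""
    return p_win * gain - (1 - p_win) * loss

# A setup can lose more often than it wins and still be worth taking:
print(trade_ev(p_win=0.4, gain=3.0, loss=1.0))  # 0.4*3 - 0.6*1 = 0.6 > 0
```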
The types of algos that work well for this are edge-identification ones -- I know this because I am (though not as well as I'd like) successfully doing it.
LSTMs haven’t performed so well for me in this task but non-NN algos have. CNNs however were promising but didn’t match what I’d come up with - still searching for the holy grail that’ll make me rich!
Which means, regardless of your philosophy, you are predicting a price change - a long signal is a prediction for positive price change; a short signal is a prediction for a negative price change. If that wasn’t true, your system would not be able to profit.
Predicting price change and predicting price are semantically equivalent, although a specific algorithm might be better at one than the other.
Source: hedge fund trader
It is semantically different to say: if the price goes to Y, then you have odds that it will then go to Target 1, and slightly lower odds that it goes to Target 2.
People, prediction is a general term. Many predictors come with accuracy estimates (and, outside of finance, often prediction bounds). But even if it were only one number: if you have a good prediction of the expected price change, that could be sufficient to trade, as it encompasses, by definition, the sum of the probabilities of the different outcomes times their magnitudes.
Either E[price] or E[log price] is a single predicted value you can successfully trade with, as long as you are far from your margins, and depending of course on your utility function.
But as I mentioned, in most fields, when you talk of a “predictor”, that’s not a single number but also accuracy estimates or even a full fledged probability distribution of future events.
I agree.
I've built many systems in this area, but it wasn't until I started working in the Indian market (>10 yrs ago) that it became abundantly clear that trying to calculate long/short signals using historical (/time series) data was a waste of time. (And yet my primary role was to provide tools that did exactly that.)
Back then, in the Indian market, you could see that most of the stocks, although skyrocketing upwards, all followed the slow vs. fast moving averages to buy and sell! Back then, traders weren't looking at RSI, stochastics, support lines, etc. It was crazily predictable... but over time it was really interesting to see it become more haphazard and like Western stocks. That is, the fundamentals came into play and, as you say, the traders began to use other metrics to buy and sell.
I'm currently working on building similar tools in my area of work for the Indian market and would really appreciate if you could shed some more light into the things you learned from your experience in working in this domain.
Chaotic systems can be deterministic, just that you will never be able to accurately measure all the variables to make a long term prediction accurately.
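The classic toy example of this is the logistic map: a completely deterministic one-line recurrence whose trajectories from two initial conditions that differ by an unmeasurable 1e-10 still end up far apart. A quick sketch:

```python
def logistic_map(x0, r=4.0, n=50):
    """Iterate the deterministic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.3)
b = logistic_map(0.3 + 1e-10)  # immeasurably small change to the initial condition

# Fully deterministic, yet the trajectories diverge: this is why long-term
# prediction fails even when the governing equation is known exactly.
print(max(abs(u - v) for u, v in zip(a, b)))
```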
In the weather example, people know the equations that approximate how it works. What value does a neural network bring? Knowing the equations is better understanding.
https://www.quantamagazine.org/machine-learnings-amazing-abi...
Talks about the Mean Shift algorithm described here: https://en.wikipedia.org/wiki/Mean_shift
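For anyone who hasn't seen it, mean shift is simple enough to sketch in a few lines (a minimal 1-D version of my own, with made-up sample data; real implementations like scikit-learn's MeanShift handle multiple dimensions and seeds): repeatedly move a point to the kernel-weighted mean of its neighbors, which climbs to a mode of the estimated density:

```python
import numpy as np

def mean_shift_1d(points, x0, bandwidth=1.0, steps=50):
    """Mean shift: iterate x <- Gaussian-weighted mean of nearby points,
    hill-climbing towards a mode of the kernel density estimate."""
    x = x0
    for _ in range(steps):
        w = np.exp(-((points - x) ** 2) / (2 * bandwidth ** 2))
        x = np.sum(w * points) / np.sum(w)
    return x

rng = np.random.default_rng(2)
points = np.concatenate([rng.normal(0, 0.5, 100), rng.normal(8, 0.5, 100)])

# Starting near either cluster converges to that cluster's mode.
left = mean_shift_1d(points, x0=1.0)
right = mean_shift_1d(points, x0=7.0)
print(left, right)
```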
I strongly think the system would do better performing many predictions at once instead, using seq2seq neural networks. The problem is properly explained at the beginning of this other post: https://github.com/LukeTonin/keras-seq-2-seq-signal-predicti... That post is, in turn, derived from my original project doing seq2seq predictions with TensorFlow: https://github.com/guillaume-chevalier/seq2seq-signal-predic...
OP also forgot to cite the image I made: https://en.wikipedia.org/wiki/Long_short-term_memory#/media/...
Well, glad to see that work similar to mine can get this much traction on HN. I would have loved to get this much traction when I did my post, too. Anyway, I would suggest OP take a look at seq2seq, as it objectively performs better (and without the "laggy drift" visual effect observed in OP's figure named "S&P500 multi-sequence prediction").
In other words, using many-to-one neural architectures creates a kind of feedback loop that doesn't happen with seq2seq, which doesn't build on its own accumulated error. A seq2seq model has a decoder with different weights than the encoder, and it can be deep (stacked).
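A deliberately over-simplified toy (my own numbers, standing in for real model errors) shows the feedback effect: a one-step predictor with a small per-step bias, applied recursively, compounds that bias across the horizon, whereas a direct multi-step prediction, here assumed to pay the same bias only once, does not:

```python
true_step = 1.0   # the series actually increases by 1.0 per step
bias = 0.05       # our one-step model is slightly off
horizon = 20
x0 = 0.0

# Recursive (many-to-one) forecasting: each prediction is fed back in,
# so the per-step bias compounds across the horizon.
recursive = x0
for _ in range(horizon):
    recursive += true_step + bias

# Direct (seq2seq-style) forecasting: predict the whole horizon at once;
# assume the same bias applies only once, to the final jump.
direct = x0 + horizon * true_step + bias

truth = x0 + horizon * true_step
print(abs(recursive - truth), abs(direct - truth))  # ~1.0 vs 0.05
```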
The aim of this post is to explain why sequence to sequence models appear to perform better than "many to one" RNNs on signal prediction problems. It also describes an implementation of a sequence 2 sequence model using the Keras API.
I deal with time series data a lot at work. I work in broadcasting/media, and 99% of the time the data is fairly "predictable": it follows a regular daily pattern, peppered with the odd spike during big, unpredictable news events.
You can find the code repo on my Github link [2], but please bear with the code quality. I only have an economics background, so my coding experience is fairly limited :)
[1] http://www.jakob-aungiers.com/articles/a/LSTM-Neural-Network...
Typically these are triggered when e.g. 90% of a threshold has been crossed.
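A trivial sketch of that kind of early-warning trigger (hypothetical viewer counts and threshold of my own, not the actual system): flag any point that crosses some fraction of the alert threshold before the threshold itself is breached:

```python
def alerts(series, threshold, fraction=0.9):
    """Indices where the series crosses `fraction` of the alert threshold."""
    trigger = fraction * threshold
    return [i for i, v in enumerate(series) if v >= trigger]

traffic = [40, 55, 62, 93, 97, 70]      # hypothetical viewer counts, threshold 100
print(alerts(traffic, threshold=100))   # fires at 93 and 97, before the peak
```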
It connected with what I've heard Chomsky say about trying to develop the laws of physics by filming what's happening outside the window. We need to do experiments and interventions to learn the dynamics of a system.
"What do you think the role is, if any, of other uses of so-called big data? [...]
NOAM CHOMSKY: It’s more complicated than that. Let’s go back to the early days of modern physics: Galileo, Newton, and so on. They did not organize data. If they had, they could never have reached the laws of nature. You couldn’t establish the law of falling bodies, what we all learn in high school, by simply accumulating data from videotapes of what’s happening outside the window. What they did was study highly idealized situations, such as balls rolling down frictionless planes. Much of what they did were actually thought experiments.
Now let’s go to linguistics. Among the interesting questions that we ask are, for example, what’s the nature of ECP violations? You can look at 10 billion articles from the Wall Street Journal, and you won’t find any examples of ECP violations. It’s an interesting theory-determined question that tells you something about the nature of language, just as rolling a ball down an inclined plane is something that tells you about the laws of nature. Scientists use data, of course. But theory-driven experimental investigation has been the nature of the sciences for the last 500 years.
In linguistics we all know that the kind of phenomena that we inquire about are often exotic. They are phenomena that almost never occur. In fact, those are the most interesting phenomena, because they lead you directly to fundamental principles. You could look at data forever, and you’d never figure out the laws, the rules, that are structure dependent. Let alone figure out why. And somehow that’s missed by the Silicon Valley approach of just studying masses of data and hoping something will come out. It doesn’t work in the sciences, and it doesn’t work here."
- https://www.rochester.edu/newscenter/conversations-on-lingui...
It is actually a really interesting subject, marketing people doing a/b tests for ads/features seem at least a little closer to the experimental ideal, not just fitting curves to data
For further reading, I'd recommend the epilogue of Causality (Pearl, 2000); it's from a 1996 lecture at UCLA:
Nobody is interested in having a machine discover the theory behind parabolic trajectories. That was solved science 400 years ago.
What is interesting, is having a machine that can estimate a parabolic trajectory, not deductively, but inductively, based only on visual observation, for a variety of different shaped and sized objects. The way a human does.
Galileo was a great scientist, and discovered many natural laws relating to motion, but that wouldn’t have made him a great dodgeball player.
https://news.ycombinator.com/item?id=17808349
First edition: http://www.uokufa.edu.iq/staff/ehsanali/Tan.pdf
Also see "mining of massive datasets" usually available at this link, but it seems to be down: http://infolab.stanford.edu/~ullman/mmds/book.pdf
Which leads me to another point: Many of these books cost $100+. If you don't have those kind of resources, try Library Genesis. It's been very helpful for getting started.