Come on, there's nothing to be seen here. Hyperparameters were handpicked to result in 100% profit on a certain timeframe.
And the title is misleading.
In that sense, the math and "indicators" are just an obfuscated, custom-fit version of another algorithm that also backtests at 100%: if price[now] > price[now+1] -> sell, else -> buy.
Rather than looking the prices up in the data array, which would clearly be "cheating", a set of parameters is hunted down that encodes the same relationship in a number space. It doesn't mean anything more than the meme: 79 beers - your age + 40 dollars = the year you were born (which backtests perfectly as long as you were born after 1900).
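To make the point concrete, here's a minimal sketch (the function name and the random-walk data are mine, not anything from the strategy under discussion) of the lookahead "algorithm" above. It is 100% profitable on every trade, on any data, precisely because it peeks one step ahead; a perfectly curve-fit parameter set just hides the same peek behind math:

```python
import random

# Hypothetical illustration: a "strategy" that backtests 100% profitable
# because it looks one step into the future before deciding.
def backtest_with_lookahead(prices):
    profit = 0.0
    for now in range(len(prices) - 1):
        if prices[now] > prices[now + 1]:
            profit += prices[now] - prices[now + 1]   # "sell" before the drop
        else:
            profit += prices[now + 1] - prices[now]   # "buy" before the rise
    return profit

# Feed it pure noise: a Gaussian random walk with no structure at all.
random.seed(0)
prices = [100.0]
for _ in range(999):
    prices.append(prices[-1] + random.gauss(0, 1))

# Every single trade is a winner, on any data whatsoever.
print(backtest_with_lookahead(prices) > 0)
```

The profit is just the sum of the absolute values of all price moves, which is why no choice of input can make it lose.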
That said, by all means, keep running the algorithm on new data every day and see how long it stays perfect. I'd be interested in the results.
I mean, the strat was published on the 8th of November, and you could say it was overfitted. But it continues to run. Since the 8th of November it has closed only one trade, but that trade made $500+ profit (a second is still open, and currently profitable).
> Backtesting results [of the backtesting strategy referenced] look absurd: 100% profitable. But if you change any of the many parameters in the Settings popup, they will turn into disaster. It means, the rules of this strategy are very fragile. Don't trade this! Remember about backtesting rule #1: past results do not guarantee success in the future.
All sliding the window does is discover parameters that work for the whole data set in chunks - the distinction is artificial. It still reduces to: you've found some number space, generated by some function, that matches some percentage of the numerical relationships (correlations) present in the data.
It's circular reasoning because, while creating the parameters, you're testing them on the "future" data. It only "guarantees" success in the "future" because you discarded all the parameters that didn't work in the "future". It's no different from writing a model that uses S&P 500 price "parameters" between 250 and 1000 and back-testing it on data from 1950-1996.
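Here's a toy sketch of that circularity (the threshold rule and all numbers are invented for illustration): we "tune" a parameter across the full series, past and "future" alike, keep only the values that happened to be profitable on the "future" half, and then announce that the survivors "backtest perfectly out of sample":

```python
import random

# Generate a structureless random walk to "trade" on.
random.seed(1)
prices = [100.0]
for _ in range(499):
    prices.append(prices[-1] + random.gauss(0, 1))

def pnl(prices, threshold):
    # Toy rule: hold a long position whenever price is below `threshold`.
    total = 0.0
    for t in range(len(prices) - 1):
        if prices[t] < threshold:
            total += prices[t + 1] - prices[t]
    return total

past, future = prices[:250], prices[249:]

# The "optimization": keep only thresholds profitable on BOTH halves --
# i.e. we are openly peeking at the "future" while picking parameters.
survivors = [th for th in range(80, 120)
             if pnl(past, th) > 0 and pnl(future, th) > 0]

# Every survivor is "profitable in the future" by construction, because we
# threw away every threshold that wasn't. That's selection, not prediction.
```

Any walk-forward scheme that discards parameter sets which fail on the later chunks is doing a politer version of exactly this.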
The only way to prove your algorithm's robustness is to generate random data and test it on that. Once you've tested against every one of the infinite possible realities of a single time window, then you can rightly assert that past results have guaranteed success in the future. Hint: it's impossible, but random-data testing is still the correct technique for testing algorithms at scale.
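A minimal sketch of what that test looks like in practice (the "buy the dip" rule here is a hypothetical stand-in for whatever strategy you're evaluating): run the rule across many independently generated random walks and look at the distribution of outcomes instead of a single historical path:

```python
import random

def random_walk(n, seed):
    # One synthetic price path: a Gaussian random walk starting at 100.
    rng = random.Random(seed)
    prices = [100.0]
    for _ in range(n - 1):
        prices.append(prices[-1] + rng.gauss(0, 1))
    return prices

def toy_strategy_pnl(prices):
    # Hypothetical rule under test: buy after any down move, hold one step.
    total = 0.0
    for t in range(1, len(prices) - 1):
        if prices[t] < prices[t - 1]:
            total += prices[t + 1] - prices[t]
    return total

# Test the rule against 200 independent "realities" of the same window.
results = [toy_strategy_pnl(random_walk(1000, seed)) for seed in range(200)]
wins = sum(r > 0 for r in results)

# On data with no exploitable structure, an honest test hovers around a coin
# flip. Any rule that stays "100% profitable" here is cheating somewhere.
print(f"{wins}/200 runs profitable")
```

The point isn't the toy rule; it's that 200 synthetic paths expose the spread of outcomes that one cherry-picked historical window hides.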
Back-testing on historical data is a footnote compared to the evidence simulation can generate - the only value it contains is correlating market data with external variables not present in the numbers themselves. Back-testing to tune an algorithm purely on the numbers in the data is just an exercise in quantified hindsight bias.