tl;dr: A flawed technique can be published, and it can be hard or impossible to detect that the technique is flawed just by using it.
For example, the paper at http://www.jstor.org/stable/222500 describes a method of using the stationary bootstrap to eliminate data-snooping bias in studies of "technical analysis" in finance.
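For readers unfamiliar with it, the stationary bootstrap itself is easy to sketch. This is my own illustrative implementation, not the paper's code; the restart probability `p` (mean block length `1/p`) is an arbitrary choice here:

```python
import numpy as np

def stationary_bootstrap(series, p=0.1, rng=None):
    """One stationary-bootstrap resample (Politis & Romano, 1994).

    Blocks begin at uniformly random positions and have geometrically
    distributed lengths with mean 1/p; indices wrap around circularly.
    """
    rng = np.random.default_rng(rng)
    n = len(series)
    idx = np.empty(n, dtype=int)
    idx[0] = rng.integers(n)
    for t in range(1, n):
        if rng.random() < p:              # start a new block
            idx[t] = rng.integers(n)
        else:                             # continue the current block
            idx[t] = (idx[t - 1] + 1) % n
    return series[idx]

returns = np.random.default_rng(0).normal(size=500)
resample = stationary_bootstrap(returns, p=0.1, rng=1)
```

The geometric block lengths are what make the resampled series stationary conditional on the data; the scheme is only justified when the original series is itself stationary, which is exactly the assumption at issue below.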
It was published in the Journal of Finance in 1999; by the time I worked with it in 2010 it had been cited over 500 times.
The proofs in the paper are inscrutable. I could find no one at my institution who could explain or verify them.
I attempted to reproduce the results in the paper, which is where the problems started. The authors did not give enough information to do this in all cases, but in some cases I was able to reconstruct the algorithms.
They did not perform as described. Of the five algorithms I could reproduce (from memory), one worked roughly as described and two others were not completely hopeless.
Looking more closely, I realised that the original authors completely disregarded an important factor in the implementation of the techniques described by the algorithms: transaction costs. Allowing for transaction costs (a difficult but not impossible task), the effects the authors reported disappeared completely.
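A minimal sketch of what "allowing for transaction costs" means in practice. This is my own illustration, not the paper's method; the function name and the 10 bp one-way cost are assumptions:

```python
import numpy as np

def net_strategy_returns(returns, positions, cost_per_trade=0.001):
    """Strategy returns net of proportional transaction costs.

    positions[t] in {-1, 0, +1} is the position held over period t.
    A proportional cost is charged on every change in position
    (hypothetical one-way cost of 10 bp; the figure is an assumption).
    """
    positions = np.asarray(positions, dtype=float)
    gross = positions * np.asarray(returns, dtype=float)
    turnover = np.abs(np.diff(positions, prepend=0.0))
    return gross - cost_per_trade * turnover

rets = np.array([0.01, 0.01, -0.01])
pos = np.array([1, 1, 0])          # long, hold, then exit
net = net_strategy_returns(rets, pos, cost_per_trade=0.001)
```

The point is that technical rules tend to trade often, so turnover is high and even a small per-trade cost can erase an apparent edge that looks significant on gross returns.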
Looking even closer, I examined the assumptions behind "White's Reality Check", which the paper relied on. It is built on the stationary bootstrap of Politis and Romano (1994), which assumes the underlying series is stationary. But financial returns are not stationary. Not at all.
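A toy illustration of the point, on entirely synthetic data (the regime scales are made up): a return series whose volatility shifts partway through has a variance that depends on time, so it is not covariance-stationary, and a resampling scheme that assumes stationarity will blindly mix the two regimes.

```python
import numpy as np

rng = np.random.default_rng(0)
calm = rng.normal(scale=0.01, size=500)      # low-volatility regime
stressed = rng.normal(scale=0.02, size=500)  # high-volatility regime
returns = np.concatenate([calm, stressed])

# Variance in the second half is roughly 4x the first half, so the
# unconditional variance is not constant over time.
var_first = returns[:500].var()
var_second = returns[500:].var()
```

Real returns are messier than this two-regime toy (volatility clustering, fat tails), but the same diagnostic, comparing variance across subsamples, already shows the stationarity assumption failing.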
So dodgy logic, misuse of statistics, irreproducible experiments, ignoring important aspects of the data and, I suspect, wishful thinking add up to a paper that is comprehensively false. It has been cited hundreds of times and used many times to verify other results.
Taking a huge risk, here is where my research is: https://ourarchive.otago.ac.nz/bitstream/handle/10523/4346/T... Section 5.12.1, "Bootstrapping and Other False Discoveries".
Non-stationarity is a problem though. Still, you need a bit more to call complete bullshit on this imo -- e.g. say something like "after switching to a different bootstrap method that works in presence of stochastic volatility the result suddenly disappears". Perhaps that is what you do in your paper :) I should take a look.
All that being said, not many people believe in technical indicators working in equities these days, or anywhere really (FX was a bit of a holdout -- not sure if it still is?), so perhaps science has kinda sorted itself out in this case :) I've seen far worse cases of snooping and non-replicability though, and some are still going strong.
Whether this "works" or not depends on how the previous projects enter into it. If the results of previous projects are used as assumptions to justify the methods, data, etc. of the subsequent project, there is no check, and we risk the research becoming a house of cards that could collapse due to the faulty, untested assumptions it was built on.
If the subsequent projects are performed in such a way that they also test the previous results/assumptions, this can be avoided. I can't tell from your wording which you are suggesting, though it seems to lean towards the former.