Isn't it crazy that the branch predictor is something like 99% correct? It means a computer is almost deterministic; it almost knows the future. A tiny bit better and we wouldn't need to show up at the office.
Of course, multiply this by the sheer number of calculations and even that tiny misprediction rate adds up to huge differences. The reality is actually quite sobering: a computer mostly calculates the same thing over and over.
That’s a realization that made me a better programmer.
I think when I was younger, I thought of programming as very open-ended. That is, I wanted to build abstract, general solutions which would be able to handle any future case.
Over time I realized the problem space is mostly quite well defined, and when I started thinking about programming as defining an assembly line for computations, my results and my time to solution improved.
When you execute a loop 1000 times, the branch only goes the other way once, when the loop finally exits. If you always predict that the loop continues, you will be right 999 times out of 1000, and your branch predictor correctly predicts 99.9% of branches.