partialBlock = (unsigned int)(dataByteLen - i);
Where both `dataByteLen` and `i` were actually `size_t`. Assuming this is close enough to C, what happens is that we're converting a difference between two `size_t` values into a mere `unsigned int`, and since those types don't have the same width on 64-bit platforms, the conversion can give `partialBlock` the wrong value. The whole thing then snowballs into a catastrophic error that is not trivial to test, because it only happens with huge buffer sizes.
The biggest mistake here is having written `(unsigned int)` instead of `(size_t)`. But the reason it happened in the first place is that they tried to do the right thing: adding the cast as a precaution, even though the following would have worked:
partialBlock = dataByteLen - i;
I really can't fault them: because it was a difference, it could theoretically yield a "negative" result, and intuitively the type of a difference should therefore be signed, so casting it back to unsigned makes the intent crystal clear. I knew C was dangerous, but to be honest I didn't expect such a wicked mind game. Now I'm going to have to take a look at my own code.