But I'm talking specifically about cryptographic threat models. No reasonable threat model says "conducting this attack takes $100,000, and since most people don't have $100,000 in savings, it's safe," because defending against "most people" isn't meaningful. A reasonable threat model says either "conducting this attack takes $100,000, so we're going to add an additional layer of security, because it's a realistic attack," or "conducting this attack takes $100,000,000,000,000, so no realistic adversary can mount it." In such a threat model, if the numbers change by a factor of two in either direction (whether through a one-bit error like this, through macroeconomic trends, or whatever), the analysis doesn't change.
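As a toy sketch of that point (the adversary budget below is made up for the example, not taken from any real threat model), a sound classification of an attack is insensitive to a factor-of-two error in the cost estimate, because the interesting cases sit many orders of magnitude away from the threshold:

    # Toy illustration: classify an attack by estimated cost against a
    # hypothetical budget we credit a serious adversary with.
    ADVERSARY_BUDGET_USD = 10_000_000  # assumption for this example only

    def attack_is_realistic(estimated_cost_usd: float) -> bool:
        return estimated_cost_usd <= ADVERSARY_BUDGET_USD

    for cost in (100_000, 100_000_000_000_000):
        # A one-bit error moves the estimate by a factor of two either way;
        # the verdict is the same for cost/2, cost, and cost*2.
        print(cost, attack_is_realistic(cost / 2),
              attack_is_realistic(cost), attack_is_realistic(cost * 2))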
And in particular, the claims here are in fact about exact amounts: a factor of two, or one bit. Cryptographers tend to measure things very precisely, in bits. There's usually no good reason for a particular choice (64 is not a magic number here; it's just a convenient number for computers), but the analysis is still done with that particular choice. You can measure the difficulty of attacking a problem with N bits of entropy, then add a heavy margin on top, and be very clear about what that margin is. Once you've done that, N-1 is probably still reasonable, and you can argue precisely about why it's reasonable; you can argue equally precisely that N-5 is questionable and N-10 is not reasonable, and that the argument doesn't recurse ("N-1 is fine" doesn't let you treat N-1 as the new N and shave off another bit).
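To make the bit arithmetic concrete (the 128-bit design target and the 2**100-operation attacker capability below are assumptions for the sketch, not figures from the text), here is what losing 1, 5, or 10 bits does to the work factor and to the remaining margin over the attacker:

    # Toy bit arithmetic: attacking N bits of entropy costs about 2**N trials.
    N = 128                  # assumed design target for this sketch
    ATTACKER_BITS = 100      # assumed attacker capability, also made up

    for lost_bits in (0, 1, 5, 10):
        n = N - lost_bits
        # Each lost bit halves the attacker's work; the margin that remains
        # is whatever sits between n and the assumed attacker capability.
        print(f"N-{lost_bits}: work = 2**{n}, "
              f"cheaper by a factor of 2**{lost_bits}, "
              f"margin over attacker = {n - ATTACKER_BITS} bits")

Run it and the one-bit case barely dents the margin, while the ten-bit case eats a noticeable chunk of it; that's the sense in which N-1 is defensible and N-10 is not, without the argument being repeatable.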