COBOL has a built-in fixed-point decimal type, which makes defining a 4-decimal-place number and doing math on it easy. (IBM designed it from the ground up to cater to people with a lot of money, who spend a lot of money, to work with lots of money, i.e. banks.) Java has BigDecimal, which is a class in the class library, which means you need to import it. And because Java lacks operator overloading, doing calculations is tedious.
In the 90s, there was a huge push to replace COBOL with <something else>, and Java was the Rust of its day, so that's what everyone got behind. However, 4-decimal-place COBOL fields apparently round differently than their Java BigDecimal equivalents, so all the tests failed. And everything like a*x+b had to be written like a.multiply(x).add(b), so development was taking forever.
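A minimal sketch of what that looks like in Java (the values and variable names are illustrative, not from the original project): the arithmetic is method calls instead of operators, and the rounding mode has to be chosen explicitly, which is exactly the kind of detail that can diverge from COBOL's behavior.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Example {
    public static void main(String[] args) {
        // Four decimal places, matching the COBOL-style field width.
        BigDecimal a = new BigDecimal("1.0375");
        BigDecimal x = new BigDecimal("1234.5678");
        BigDecimal b = new BigDecimal("0.0500");

        // What would be "a*x+b" with operator overloading:
        BigDecimal result = a.multiply(x).add(b);
        System.out.println(result.setScale(4, RoundingMode.HALF_UP));

        // The two common "round halves" modes disagree on exact ties,
        // so a port that picks the wrong one fails its regression tests.
        BigDecimal tie = new BigDecimal("2.00005");
        System.out.println(tie.setScale(4, RoundingMode.HALF_UP));   // 2.0001
        System.out.println(tie.setScale(4, RoundingMode.HALF_EVEN)); // 2.0000
    }
}
```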
Eventually they said "fuck it" and 20 years later we're still stuck with COBOL and everyone who remembers the original death march says "never again".
I have a feeling a lot of the problems came down to computer science people thinking money has two decimal digits but domain knowledge people knowing it has four. We programmers, as a group, make a lot of assumptions about other people's domains and we're wrong a lot*.
What do you mean this person has no surname? That's unpossible, surname is never null, error error.
http://blogs.reuters.com/ben-walsh/2013/11/18/do-stocks-real...
I guess it's time for someone to write an "Assumptions Programmers make about money" post.
Additionally, consumers are often shown fractional cents when purchasing gas/fuel.
I.e., they track your account balance to more than 2 decimal places; they just show you 2.
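A tiny sketch of how that plays out at the pump (made-up prices): the unit price carries a fractional cent, the line total is carried at higher precision, and only the final charge is rounded to 2 places.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class GasPump {
    public static void main(String[] args) {
        BigDecimal pricePerGallon = new BigDecimal("3.299"); // the classic 9/10 of a cent
        BigDecimal gallons = new BigDecimal("11.473");

        BigDecimal exact = pricePerGallon.multiply(gallons);          // 37.849427, kept internally
        BigDecimal charged = exact.setScale(2, RoundingMode.HALF_UP); // 37.85, what hits your card

        System.out.println("exact:   " + exact);
        System.out.println("charged: " + charged);
    }
}
```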
That is not correct. Stock settlement transactions often list four decimal places.
That's not a significant difference compared to two decimal places, so brundolf's point still stands. There's no need for arbitrary precision.
Just store all dollar amounts in pips, so $5 will be stored as 50000.
EDIT: Just a note, there's nothing special about the number 10000; pick the largest power of 10 that still gives you reasonable assurance that no overflow is possible. For the vast majority of money applications, I seriously doubt you're going to hit the limits of int64, so you could probably even get away with something like 1000000000.
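A minimal sketch of that approach in Java (the scale factor and names are my own, not from the comment): store amounts as a long count of pips, do all the math on integers, and only convert at the display edge.

```java
public class PipMoney {
    // 1 dollar = 10_000 pips, i.e. four decimal places of precision.
    private static final long PIPS_PER_DOLLAR = 10_000L;

    public static void main(String[] args) {
        long price = 5 * PIPS_PER_DOLLAR;    // $5.00 stored as 50000
        long discounted = price * 97 / 100;  // 3% off, still exact integer math: 48500

        // Headroom check: even at 10^4 pips per dollar, int64 holds
        // roughly 922 trillion dollars before overflowing.
        System.out.println(Long.MAX_VALUE / PIPS_PER_DOLLAR);

        // Convert to a 2-decimal display value only at the edge.
        System.out.printf("%.2f%n", discounted / (double) PIPS_PER_DOLLAR);
    }
}
```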
Edit: And they forbid equality comparisons for rationals. For some reason even >= is not allowed.
For instance, if one needs to apply discounts, add taxes, split into equal parts, or all of the above one after the other, there will be a more precise intermediate representation before everything is rounded in a way that keeps the total amount consistent with the original amount.
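For the "split into equal parts but keep the total consistent" case, one common trick is to give every part the rounded-down share and then hand out the leftover smallest units one by one. A hedged sketch (helper name mine, working in integer cents):

```java
import java.util.Arrays;

public class Split {
    // Splits an amount given in cents into n parts that differ by at most
    // one cent and always sum back to the original amount.
    static long[] splitEvenly(long cents, int n) {
        long[] parts = new long[n];
        long base = cents / n;
        long remainder = cents % n;
        Arrays.fill(parts, base);
        for (int i = 0; i < remainder; i++) {
            parts[i] += 1; // distribute the leftover cents
        }
        return parts;
    }

    public static void main(String[] args) {
        // $100.00 split three ways: 33.34 + 33.33 + 33.33, total stays 100.00
        System.out.println(Arrays.toString(splitEvenly(10_000, 3)));
    }
}
```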