Once we finally move away from binary floats, these nasty binary-exponent-to-decimal-exponent lossy conversions will end (as will the horribly complicated scanf and printf algorithms).
This is a solved problem. Our actual problem now is momentum of adoption.
I've anticipated that this will eventually happen (hopefully sooner rather than later), and proactively marked binary floats as "legacy formats" [2].
[1] https://en.wikipedia.org/wiki/Decimal64_floating-point_forma...
[2] https://github.com/kstenerud/concise-encoding/blob/master/ce...
From what I've seen, there's simply no demand for decimal floating point; basically one or two people ask for it, and that carries all the way up through C committee standardization. Far more people care about things like non-default rounding modes or exception support than about decimal floating point, and even that is an area of floating point that's pretty damn low on people's priority lists.
The advantage of decimal floating point is that it fixes the rounding error when converting to/from decimal strings... and that's it. It doesn't fix non-associativity, it doesn't fix gradual precision loss, and it doesn't fix portability issues, because different libraries have different precision targets. And you get this for the cost of making every single floating point operation slower.
And here's the other truism about floating-point: most people who care about it value speed over correctness. There's strong demand for operations like approximate division that don't do the refinement steps, and consequently throw away half the bits. Fast-math flags are similarly pretty common, because users already aren't expecting bit-precise results, so the resulting changes in precision are quite often acceptable for them.
But I don't think it would have mattered here anyway. With finite precision, floating point addition is not going to be associative, decimal or not.
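This is easy to demonstrate with Python's `decimal` module (a sketch; the precision of 3 digits is just an artificially small choice to make the rounding visible):

```python
from decimal import Decimal, getcontext

# Use a tiny precision so the rounding is easy to see.
getcontext().prec = 3

a, b, c = Decimal("100"), Decimal("0.5"), Decimal("0.5")

left = (a + b) + c   # 100.5 rounds to 100 (3 digits), then 100.5 rounds to 100 again
right = a + (b + c)  # 0.5 + 0.5 = 1.0 exactly, then 100 + 1.0 = 101

print(left, right)   # decimal arithmetic, still not associative
```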
As someone who's worked in science and finance (modeling, not accounting), floats work just fine, thank you very much. The modeling/accounting split in finance is a legit point of confusion, though.
This is a good link to send to people: https://floating-point-gui.de/
Decimal floats would operate just as we're used to, and would solve 95% of our floating point problems.
It does not in fact solve the problem identified in this blog post (non-associativity). It does not solve any of the problems I generally see mentioned in topics like this, e.g. multiplayer video game desyncs (math libraries on different platforms don't return the same results). It's a pretty bold assertion that "95% of our floating point problems" would be solved.
Actually I wonder if anybody did a performance/power comparison of using floating point vs fixed point math in some common tasks using modern CPUs with extensive FP support.
Using equality to compare 2 floating point numbers doesn't get you anywhere, especially if you are using mathematical functions (the implementation of `sin` or `exp` can vary across operating systems or software versions).
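In Python, for instance, the usual workaround is a tolerance-based comparison such as `math.isclose` (the tolerance here is just an illustrative choice):

```python
import math

a = 0.1 + 0.2
b = 0.3

print(a == b)                             # False: the binary results differ in the last bit
print(math.isclose(a, b, rel_tol=1e-9))   # True: equal within tolerance
```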
Do we ever need to store floats? using them in flight is one thing, but stored data so often needs to be some fixed precision.
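One common answer is to store fixed-precision quantities as scaled integers and only construct floats or decimals in flight; a minimal sketch for money, assuming a two-decimal currency:

```python
from decimal import Decimal

# Store money as an integer number of cents, not a binary float.
price = Decimal("19.99")
cents = int(price * 100)          # 1999: exact, safe to store
restored = Decimal(cents) / 100   # back to 19.99 with no loss

print(cents, restored)
```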
You'd think everyone would know the basics, but even in stuff like finance, people routinely fuck up decimals.
There is your problem
Calculating an average is fairly painless: you can put an upper bound on the possible error, and that bound usually isn't dangerous. That isn't the case for all computations.
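A quick way to see how small the averaging error actually is, using `math.fsum` (which computes a correctly rounded sum) as the reference:

```python
import math

xs = [0.1] * 10

naive = sum(xs) / len(xs)        # accumulates a tiny rounding error per addition
exact = math.fsum(xs) / len(xs)  # correctly rounded sum, then one division

# The naive average is off by at most on the order of n * machine epsilon.
print(abs(naive - exact))
```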
Consider the number 1/3. In ternary (base 3), you write that as 0.1, whereas in decimal (base 10) you write it as 0.3333... recurring. If you try to represent that number with a fixed number of decimal places, you have precision issues. E.g. decimal 0.333 converts to 0.02222220210 in ternary.
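The conversion above can be reproduced in a few lines of exact-arithmetic Python (a sketch using `fractions.Fraction` so float rounding doesn't contaminate the digits; the function name is mine):

```python
from fractions import Fraction

def frac_digits(x, base, n):
    """First n digits of the fraction 0 <= x < 1 written in `base`."""
    digits = []
    for _ in range(n):
        x *= base
        d = int(x)          # next digit is the integer part
        digits.append(str(d))
        x -= d              # keep only the fractional part
    return "0." + "".join(digits)

print(frac_digits(Fraction(1, 3), 3, 1))        # 0.1
print(frac_digits(Fraction(333, 1000), 3, 11))  # 0.02222220210
```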
Now, the thing with that example is that we treat decimal as a special, privileged representation, so we accept that 1/3 doesn't have a finite representation in decimal as an entirely natural fact of life, while we treat that same problem converting between decimal and binary as a fundamental deficiency of binary.
Let's talk about why some numbers have finite representations while others don't. 53/100 is written as 0.53 in decimal. The general rule is that if the denominator is a power of ten, you just write the numerator, then place the decimal mark that many digits from the right. If the denominator is not a power of ten, you make it one. 1/2 turns into 5/10, which is 5 with one decimal place, or 0.5. Obviously, you can't actually do this for 1/3. There's no integer `a` where 3a is a power of ten. The general rule here is that, if the denominator has any prime factors not present in the base, you don't have a finite representation. 4/25 = 16/100 = 0.16 has a finite representation, but 5/7 and 13/3 don't.
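That rule (the expansion terminates exactly when every prime factor of the reduced denominator divides the base) is easy to check mechanically; a sketch, with a function name of my own choosing:

```python
from math import gcd

def terminates(num, den, base):
    """True iff num/den has a finite expansion when written in `base`."""
    den //= gcd(num, den)      # reduce the fraction first
    g = gcd(den, base)
    while g > 1:               # strip prime factors shared with the base
        while den % g == 0:
            den //= g
        g = gcd(den, base)
    return den == 1            # nothing left over => finite expansion

print(terminates(4, 25, 10))   # True:  4/25 = 0.16
print(terminates(5, 7, 10))    # False
print(terminates(13, 3, 10))   # False
print(terminates(1, 3, 3))     # True:  1/3 = 0.1 in ternary
```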
Now, because 3 is coprime with 10, no number with a finite ternary representation can have a finite decimal representation (and vice versa), and we're used to it. Where binary vs decimal becomes tricky and confuses people is that 10 = 2 * 5, so numbers that can be expressed with a power-of-two denominator have finite representations in both binary and decimal, so you can convert some numbers back and forth with no loss of precision. Numbers with a factor of 5 somewhere in their denominator can have finite decimal representations (1/5 = 2/10 = 0.2), but can't have finite representations in binary. 0.1 = 1/10 = 1/(2 * 5) and you can't get rid of that five. And that's why everybody gets bitten by 0.1 seeming to be broken.
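You can see the binary value Python actually stores for 0.1 by asking for it as an exact fraction:

```python
from fractions import Fraction

# The double nearest to 0.1: a power-of-two denominator, and the
# factor of 5 that 1/10 needs just isn't there.
print(Fraction(0.1))   # denominator is 2**55, not a multiple of 5
```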
By the way, integral BCD (what COBOL did most of the time) is useless.
$ python3 -c 'print(.1 + .2)'
0.30000000000000004
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...