Also, there's a group of people who have been running tests on common libms, reporting their current accuracy states here: https://members.loria.fr/PZimmermann/papers/accuracy.pdf (that paper is updated ~monthly).
The 2012 discovery of the Higgs boson by the ATLAS experiment at the LHC relied crucially on the ability to track charged particles with exquisite precision (10 microns over a 10 m length) and high reliability (over 99% of roughly 1000 charged particles per collision correctly identified).
In an attempt to speed up the calculation, researchers found that merely changing the underlying math library (which should only affect at most the last bit) resulted in some collisions being missed or misidentified.
I was a user contributing to the LHC@Home BOINC project[2], where they ran into similar problems. They simulated beam stability, iterating on the positions of the simulated particles for millions of steps. As is normal in BOINC, each work unit is computed at least three times, and if the results don't match, the work unit is queued for additional runs.
They noticed that a lot of their work units failed the initial check compared to other BOINC projects. Digging into it, they found that if a work unit was computed entirely by CPUs from the same manufacturer, e.g. all Intel, the results agreed as expected. But if the work unit had been processed by mixed CPUs, i.e. at least one run on Intel and one on AMD, the results very often disagreed.
That's when they discovered[3] this very issue about how the rounding of various floating point functions differed between vendors.
After they switched to crlibm[4] for the elementary functions they used, the mixed-vendor problem went away.
[1]: https://www.davidhbailey.com/dhbtalks/dhb-icerm-2020.pdf
[2]: https://en.wikipedia.org/wiki/LHC@home
[3]: https://accelconf.web.cern.ch/icap06/papers/MOM1MP01.pdf
[4]: https://ens-lyon.hal.science/ensl-01529804/file/crlibm.pdf
A notable omission is the crlibm/rlibm/core-math family of libraries, which claim to be correctly rounded, but I suppose we can already be pretty confident about them.
https://github.com/J-Montgomery/rfloat/blob/8a58367db32807c8...
It is possible to read the standard in a way that keeps them compliant. As of IEEE 754-2019, the standard does not, in my understanding, require recommended operations to follow the stated semantics; implementations are merely recommended to ("should") define recommended operations with the required ("shall") semantics. So if an implementation doesn't claim that a given recommended operation is compliant, the implementation as a whole remains compliant in my understanding.
One reason I think this might be the case is that not all recommended operations have a known correctly rounded algorithm, in particular bivariate functions like pow (in fact, pow is the only remaining one at the moment, IIRC). Otherwise no implementation would ever be compliant as long as those operations were defined!
Or when you're bottlenecked on memory and want to store each number in four bytes instead of eight.
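To make the memory argument concrete, here is a toy Python illustration using the stdlib `array` module, where `'d'` is a C double (8 bytes) and `'f'` a C float (4 bytes):

```python
from array import array

n = 1_000_000
doubles = array('d', (0.0 for _ in range(n)))  # float64 storage
singles = array('f', (0.0 for _ in range(n)))  # float32 storage

print(doubles.itemsize * n)  # 8000000 bytes
print(singles.itemsize * n)  # 4000000 bytes
```

Halving the footprint doubles how much data fits in cache and per memory transfer, which is exactly the bottleneck being described.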
Though, a potentially useful note: for two-argument functions, a correctly-rounded implementation means it's possible to specialize certain constant operands to much better implementations while preserving the same results (log(2,x), log(10,x), pow(x, 0.5), pow(x, 2), pow(x, 3), etc.; floor(log(int,x)) being potentially especially useful if an integer log isn't available).
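The floor(log(int,x)) case is the classic trap: if the libm log10 comes back a hair below the true value at an exact power of the base, floor drops a whole unit. An exact integer version sidesteps the float path entirely; a sketch (`ilog10` is my own name for it):

```python
def ilog10(n: int) -> int:
    # Exact integer floor(log10(n)) via repeated division; no float
    # rounding involved, so it cannot be off by one at powers of 10.
    assert n > 0
    k = 0
    while n >= 10:
        n //= 10
        k += 1
    return k

# A float-based floor(log10(n)) risks returning k - 1 at n = 10**k
# when log10 is not correctly rounded; this version never does.
print(ilog10(10**15))  # 15
```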
1. It correctly quotes the IEEE754-2008 standard:
> A conforming function shall return results correctly rounded for the applicable rounding direction for all operands in its domain
and even points out that the citation is from "Section 9. *Recommended* operations" (emphasis mine). But then it goes on to describe this as a "*requirement*" of the standard (it is not). This is not just a typo: the post actually implies that implementations not following this recommendation are wrong:
> [...] none of the major mathematical libraries that are used throughout computing are actually rounding correctly as demanded in any version of IEEE 754 after the original 1985 release.
or:
> [...] ranging from benign disregard for the standard to placing the burden of correctness on the user who should know that the functions are wrong: “It is following the specification people believe it’s following.”
As far as I know, IEEE 754 mandates correct rounding only for the basic arithmetic operations (+, -, *, /, fma) and sqrt().
2. All the mentions of 1 ULP in the beginning are a red herring. As the article itself mentions later, the standard never says anything about 1 ULP. Some people do care about 1 ULP, just because it is something that can be achieved at a reasonable cost for transcendentals, so why not do it. But the standard does not.
3. The author seems to believe that 0.5 ULP would be better than 1 ULP for numerical accuracy reasons:
> I was resounding told that the absolute error in the numbers are too small to be a problem. Frankly, I did not believe this.
I would personally also tell that to the author. But there is a much more important reason why correct rounding would be a tremendous advantage: reproducibility. There is always only one correct rounding. As a consequence, with correct rounding, different implementations return bit-for-bit identical results. The author even mentions falling victim to FP non-reproducibility in another part of the article.
4. This last point is excusable because the article is from 2020, but "solving" the fp32 incorrect-rounding problem by using fp64 is naive (not guaranteed to always work, although it will with high probability) and inefficient. It also does not say what to do for fp64. We can do correct rounding much faster now [1, 2]. So much faster that it is getting really close to non-correctly-rounded, so some libm may one day decide to switch to that.
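For reference, the "evaluate in fp64, then round to fp32" trick the article proposes looks something like this in Python (a sketch; `exp` is just a convenient example function):

```python
import math
import struct

def to_f32(x: float) -> float:
    # Round a Python double to the nearest float32 and back.
    return struct.unpack('<f', struct.pack('<f', x))[0]

def exp_f32(x: float) -> float:
    # Evaluate in fp64, then round once more to fp32. This is double
    # rounding: it is NOT guaranteed to be correctly rounded for fp32,
    # although the failure cases (fp64 result landing extremely close
    # to a float32 rounding boundary) are rare in practice.
    return to_f32(math.exp(x))

print(exp_f32(1.0))
```

This illustrates both the appeal (simple, usually right) and the flaw (no guarantee) of that approach.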
> I would personally also tell that to the author. But there is a much more important reason why correct rounding would be a tremendous advantage: reproducibility.
This is also what the author wants, judging from his own experiences, but he failed to realize/state it explicitly: "People on different machines were seeing different patterns being generated which meant that it broke an aspect of our multiplayer game."
So yes, the reasons mentioned as a rationale for more accurate functions are in fact a rationale for reproducibility across hardware and platforms. For example, going from 1 ulp errors to 0.6 ulp errors would not help the author at all, but having reproducible behavior would (even with an increased worst-case error).
Correctly rounded functions mean the rounding error is the smallest possible, and as a consequence every implementation will always return exactly the same results: this is the main reason why people (and the author) advocate for correctly rounded implementations.
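"Bit-for-bit identical" is checkable directly from the IEEE-754 bit patterns. A minimal sketch (using non-associative addition as a stand-in for two libm implementations that each return a result within 1 ULP of the true value, yet disagree):

```python
import struct

def bits(x: float) -> str:
    # Raw IEEE-754 double bit pattern; equal strings <=> identical results.
    return struct.pack('<d', x).hex()

# Two algebraically equivalent expressions need not be bit-identical:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(bits(a) == bits(b))  # False
```

With correct rounding there is exactly one admissible bit pattern per input, so this kind of divergence cannot happen between conforming implementations.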
The key thing is there are only 2^32 float32s, so you can check all of them exhaustively. It sounds to me like the author did this, and realized they needed some tweaks to get correct answers for log.
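A sketch of that exhaustive strategy (the `oracle` here is hypothetical; in practice you'd compare against MPFR or another correctly rounded reference, and the full sweep is `range(2**32)`):

```python
import math
import struct

def f32(u: int) -> float:
    # Reinterpret a 32-bit pattern as a float32 (returned as a Python double).
    return struct.unpack('<f', struct.pack('<I', u))[0]

def count_mismatches(fn, oracle, patterns=range(2**32)):
    # Compare fn against a reference over every float32 bit pattern,
    # bit-for-bit after rounding both back to float32.
    bad = 0
    for u in patterns:
        x = f32(u)
        if math.isnan(x):
            continue  # NaN payloads need separate handling
        if struct.pack('<f', fn(x)) != struct.pack('<f', oracle(x)):
            bad += 1
    return bad
```

Iterating bit patterns rather than values guarantees every float32 (including denormals, infinities, and both zeros) is visited exactly once; the same trick does not scale to float64, which is why fp64 testing relies on worst-case searches instead.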