This means that in Haskell, for example, you cannot really rely on the Eq class representing an equivalence relation: code relying on the fact that x == x holds for all x may not work as expected for floating point numbers.
I don't know whether this has practical ramifications in real code, but it certainly makes things less elegant and more complex than they need to be.
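The reflexivity failure is easy to demonstrate; a minimal sketch in Python (any IEEE 754 language behaves the same way):

```python
# Reflexivity fails for IEEE 754 floats: x == x is not always true.
nan = float("nan")
print(nan == nan)   # False
print(nan != nan)   # True: NaN compares unequal even to itself
```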
However, you should be able to examine two NaNs and declare them "equivalent" (for certain definitions of equivalence) by intelligently examining the bits, based on the floating-point format your program uses. In the case of a binary NaN [1] that would entail checking that the exponent fields are both all ones (eg 0xFF == (a.exponent & b.exponent), assuming a standard 8-bit exponent) and that the mantissas are nonzero (eg a.mantissa && b.mantissa).
[1]: "Binary format NaNs are represented with the exponential field filled with ones (like infinity values), and some non-zero number in the significand (to make them distinct from infinity values)." --http://en.wikipedia.org/wiki/NaN
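A sketch of that bit-level check in Python, for binary64 (double precision) rather than the 8-bit-exponent single-precision layout described above; `is_nan_bits` is a hypothetical helper name:

```python
import struct

def is_nan_bits(x: float) -> bool:
    """Inspect the raw IEEE 754 binary64 bits: NaN iff the exponent
    field is all ones and the significand is nonzero."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    exponent = (bits >> 52) & 0x7FF     # 11-bit exponent field
    mantissa = bits & ((1 << 52) - 1)   # 52-bit significand field
    return exponent == 0x7FF and mantissa != 0

print(is_nan_bits(float("nan")))   # True
print(is_nan_bits(float("inf")))   # False: all-ones exponent but zero significand
print(is_nan_bits(1.0))            # False
```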
A null (and NaN) is like an unknown. One can't compare unknowns because they are exactly that, unknown.
Let's construct a language in which division by zero returns an unknown.
a = 5 / 0;
b = 10 / 0;
Now, both a and b are in an unknown state. If one were to compare a to b, should the expectation be that they hold the same value?

I wish all languages had nullability like SQL does, where great care has to be taken when dealing with nullable data, lest nulls nullify everything.
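IEEE 754 already behaves much like this hypothetical language: indeterminate results become NaN, and two such "unknowns" never compare equal. A small Python sketch:

```python
import math

a = math.inf - math.inf   # nan: an indeterminate form
b = math.inf * 0.0        # nan, via a different indeterminate form
print(a == b)             # False: two unknowns don't compare equal
print(math.isnan(a) and math.isnan(b))  # True: both are "unknown"
```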
I guess one option would be to just declare that all NaNs are equal. I'm pretty sure that's how bottoms work in Haskell, and NaN is essentially a floating-point version of bottom.
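A sketch of that option in Python: a wrapper equality under which all NaNs compare equal, restoring reflexivity (similar in spirit to total-ordering predicates like Rust's `f64::total_cmp`; `total_eq` is a hypothetical name):

```python
import math

def total_eq(a: float, b: float) -> bool:
    """Equality that is reflexive: all NaNs are declared equal to each other."""
    if math.isnan(a) and math.isnan(b):
        return True
    return a == b

nan = float("nan")
print(total_eq(nan, nan))   # True: reflexivity restored
print(total_eq(nan, 1.0))   # False
print(total_eq(0.0, -0.0))  # True: ordinary IEEE equality is kept elsewhere
```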
They are called denormals. These appear when operations mix lots of big numbers (very far away from 0) with lots of small numbers (close to 0).
In such cases the FPU (or whatever deals with fp numbers) switches to a representation that can be very inefficient, producing operations an order of magnitude slower.
For example, when dealing with IIR filters in audio, your audio buffer might contain them. One solution is to keep a white-noise buffer somewhere (or a couple of numbers) that are not denormalized and add it to the signal - it magically normalizes again.
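A sketch of both the problem and the trick in Python (binary64 floats; the offset value 1e-20 is illustrative - far above the denormal range, far below audible signal levels):

```python
import sys

smallest_normal = sys.float_info.min   # 2**-1022 for binary64
subnormal = smallest_normal / 4.0      # underflows into the denormal range
print(subnormal != 0.0)                # True: gradual underflow keeps it nonzero
print(subnormal < smallest_normal)     # True: it's a denormal

# The "add a small offset" trick: push the value back into the normal range.
offset = 1e-20
flushed = subnormal + offset
print(flushed >= smallest_normal)      # True: no longer denormal
```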
I'm not a guy dealing with "numerical stability" (usually these are physics, audio, or simulation engine programmers), but I know this from simple experience.
They're also a sign you're skirting the limits of FP precision (or worse), so a bit of numerical analysis might still be a good idea...
EDIT: I do not know how D implements NaNs; they may have magic to make them more sane to work with.
What D does do is expose NaNs so the programmer can rely on their existence and use them in a straightforward manner.
    float f;
    bool thingIsFoo = condition1; // store the result…
    if (thingIsFoo)
        f = 7;
    // ... code ...
    if (thingIsFoo && condition2) // and explicitly depend on it later
        ++f;
But this causes an extra `&&` to be computed at runtime, so it seems NaNs are still better for this case.

Gee, thanks MSC. I didn't expect "x = INFINITY;" to overflow.
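Going back to the flag-vs-NaN comparison: a sketch of the NaN-sentinel alternative in Python, where `condition1`/`condition2` stand in for the conditions in the snippet above. Because NaN propagates through arithmetic, no separate boolean flag or extra `&&` is needed:

```python
import math

def compute(condition1: bool, condition2: bool) -> float:
    f = float("nan")   # "not yet assigned", like D's default float init
    if condition1:
        f = 7.0
    # ... code ...
    if condition2:
        f += 1         # if f was never set, it simply stays NaN
    return f

print(compute(True, True))                # 8.0
print(math.isnan(compute(False, True)))   # True: the "unset" state propagated
```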
Also check the compiler flags, like /fp:precise for MSVC.