I am also interested in your statement that certain algorithms depend on subnormals as defined in the floating-point spec. Can you provide an example of such an algorithm? I can intuit why a single "too small"/epsilon value might be desirable, but I do not see offhand how you can leverage the full range of subnormals in any reasonably generic way that is not extremely dependent on the specific problem and scale (e.g. if you multiply all inputs by 2^30 and increase the maximum exponent by 30, do you get the same output multiplied by 2^30?), so I would like to see how it is done.
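For reference, here is the sort of scale test I mean, sketched in C. f() is just a stand-in for whatever algorithm supposedly needs subnormals, and I scale by 2^60 rather than 2^30 because a real double's exponent range cannot be widened, so the scaled input has to clear the subnormal range on its own:

    #include <math.h>
    #include <stdio.h>

    /* Stand-in for the algorithm under test. */
    static double f(double x) { return x / 3.0; }

    int main(void) {
        double a = ldexp(1.0, -1073); /* 2^-1073, a subnormal double */
        double s = ldexp(1.0, 60);    /* exact power-of-two scale factor */
        double y  = f(a);             /* computed at subnormal precision */
        double ys = f(a * s);         /* computed at full 53-bit precision */
        /* If f were scale-free, ys would equal y * s exactly. Here they
           disagree badly: the quotient 2^-1073/3 rounds to the nearest
           multiple of the smallest subnormal, giving y = 2^-1074, so
           y * s = 0x1p-1014, while ys = 0x1.5555555555555p-1015 -- off
           by about 50%. */
        printf("y * s = %a\nys    = %a\n", y * s, ys);
        return 0;
    }

As far as I can tell, any algorithm whose output changes like this under pure power-of-two scaling is depending on the specific placement of the subnormal range, not on subnormals generically.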
In terms of my two possible interpretations, the first is that subtracting two unequal floating-point numbers yields 0. I am pretty sure this is not the case: the subtraction may yield no change, but it cannot yield 0 (as far as I understand, that guarantee is exactly what gradual underflow provides).

The other is that subtracting two unequal arbitrary-precision numbers represented as floating-point numbers yields 0. This is true, but it is a known limitation of emulating arbitrary-precision arithmetic with limited precision and must always be accounted for. If this is what you meant, then we can simply choose numbers too small to be expressed even by subnormals to make the problem occur again, so all subnormals do is handle a few more cases at the cost of complexity and non-uniformity. If you did not mean either of these two interpretations, can you explain what you meant, preferably with a concrete example?
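To pin both interpretations down, here is a small C illustration; the first pair of hex literals are the two closest unequal doubles at the bottom of the normal range, and the second pair are values deliberately below even the subnormal range:

    #include <stdio.h>

    int main(void) {
        /* Interpretation 1: two adjacent, unequal doubles whose difference
           would underflow without subnormals. */
        double x = 0x1.0000000000001p-1022; /* smallest normal + 1 ulp */
        double y = 0x1.0p-1022;             /* smallest normal double */
        printf("x - y = %a (nonzero: %d)\n", x - y, (x - y) != 0.0);
        /* Gradual underflow makes the difference the smallest subnormal,
           0x1p-1074; a flush-to-zero mode would print 0 here and break
           the guarantee that x != y implies x - y != 0. */

        /* Interpretation 2: values too small even for subnormals collapse
           to 0 on conversion, so the problem reappears below 2^-1074. */
        double u = 0x1p-1080; /* rounds to 0 */
        double v = 0x1p-1081; /* rounds to 0 */
        printf("u - v = %a (both already 0: %d)\n",
               u - v, u == 0.0 && v == 0.0);
        return 0;
    }

So subnormals push the failure threshold down from roughly 2^-1022 to 2^-1074, but the same failure mode still exists below that.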