If you're handling money, or numbers representing some other real, important concern where accuracy matters (most likely, any number you intend to show to the user as a number), floats are not what you need.
Back when I started using Groovy, I was very pleased to discover that Groovy's default decimal number literal was translated to a BigDecimal rather than a float. For any sort of website, 9 times out of 10, that's what you need.
I'd really appreciate it if JavaScript had a native decimal number type like that.
When handling money, we care about faithfully reproducing the human-centric quirks of decimal numbers, not "being more accurate". There's no reason in principle to regard a system that can't represent 1/3 as being fundamentally more accurate because it happens to be able to represent 1/5.
DaysInYear = 366
InterestRate = 215                      # 2.15%, scaled by 10^4
DayBalanceSum = 0
for each Day in Year
    DayBalanceSum += Day.Balance
InterestRaw = DayBalanceSum * InterestRate
InterestRaw += DaysInYear * 5000        # half the divisor, so the division below rounds
Interest = InterestRaw / (DaysInYear * 10000)
Balance += Interest
Balance should always be expressed in the smallest fraction of currency that we conventionally round to, like 1 yen or 1/100 dollar. Adding in half of the divisor before dividing effectively turns floor division into correctly rounded division.

The value of floating point is that it can represent extremely huge or extremely infinitesimal values.
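For concreteness, here is a runnable sketch of the interest pseudocode above. The names and shape are mine, not the original poster's; balances are assumed to be in integer minor units (cents, yen), and a rate of 215 is assumed to mean 2.15%, i.e. scaled by 10^4. BigInt keeps the intermediate products exact.

```javascript
// Sketch of the pseudocode above: integer money, round-half-up division.
function yearlyInterest(dailyBalances, rateScaled) {
  const scale = 10000n;                  // rate denominator: 215 -> 2.15%
  const days = BigInt(dailyBalances.length);
  let dayBalanceSum = 0n;
  for (const b of dailyBalances) dayBalanceSum += b;
  // Adding half of the divisor before dividing turns BigInt floor
  // division into round-half-up division.
  const divisor = days * scale;
  return (dayBalanceSum * rateScaled + divisor / 2n) / divisor;
}

// 2.15% on a constant balance of 1,000,000 cents over 366 days:
console.log(yearlyInterest(new Array(366).fill(1000000n), 215n)); // 21500n
```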
If you're working with currency / money, floating point is the wrong thing to use. For as long as commerce has existed, long before computers, currency has been an integer type: a count of some smallest unit, possibly with a fixed decimal point.
If you're building games, or AI, or navigating to Pluto, then floating point is the tool to use.
True but irrelevant. The problem isn't with the math fundamentals, it's the programmers.
The issue is that if you get your integer handling wrong, it usually stands out. Maybe that's because integers truncate rather than round; maybe it's because the program has to handle all those fractions of cents manually rather than letting the hardware do it, so the programmer has to think about it.
In any case, integer code that works in unit tests usually continues to work, but floating point code passing all unit tests will be broken on some floating point implementations and not others. The reason is pretty obvious: floating point is inexact, but the implementations contain a ton of optimisations to hide that inexactness, so it rarely raises its ugly head.
When it does, it's in the worst possible way. In a past day job I built cash registers and accounting systems. If you use floating point where exact results are required, I can guarantee your future self will be haunted by a never-ending stream of phone calls from auditors telling you code that has worked solidly in thousands of installations over a decade cannot add up. And god help you if you ever made the mistake of writing "if a == b" because you forgot a and b are floating point. Compiler writers should do us all a favour and not define == and != for floating point.
Back when I was doing this, no compiler implemented anything beyond 32-bit integer arithmetic; in fact, there was no open source either. So you had to write a multi-precision library, and all expression evaluation had to be done using function calls. Despite floating point giving you hardware 56-bit arithmetic (which was enough), you were still better off using those clunky integers.
As others have said here: if you need exact results (and, yes currency is the most common use case), for the love of god do it using integers.
Um... that really depends. If you have an algorithm that is numerically unstable, these errors will quickly lead to a completely wrong result. Using a different type is not going to fix that, of course, and you need to fix the algorithm.
Now, when you’re talking about central bank accruals (or similar sized deposits) that’s a bit different. In these cases, you have a very specific accrual multiple, multiplied by a balance in the multiple hundreds of billions or trillions. In these cases, precision with regards to the interest accrual calculation is quite significant, as rounding can short the payor/payee by several millions of dollars.
Hence the reason bond traders have historically traded in 32nds.
A sample bond trade:
‘Twenty sticks at a buck two and five eighths bid’ ‘Offer At 103 full’ ‘Don’t break my balls with this, I got last round at delmonicos last night’ ‘Offer 103 firm, what are we doing’ ‘102-7 for 50 sticks’ ‘Should have called me earlier and pulled the trigger, 50 sticks offer 103-2’ ‘Fuck you, I’m your daughter’s godfather’ ‘In that case, 40 sticks, 103-7 offer’ ‘Fuck you, 10 sticks, 102-7, and you buy me a steak, and my daughter a new dress’ ‘5 sticks at 104, 45 at 102-3 off tape, and you pick up bar tab and green fees’ ‘Done’ ‘You own it’
That’s kinda how bonds are traded.
Ref:
Stick: million
Bond pricing: dollar price + a number divided by 32
Delmonicos: money bonfire with meals served
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Floats are a digital approximation of real numbers, because computers were originally designed for solving math problems - trigonometry and calculus, that is.
For money you want rational numbers, not reals. Unfortunately, computers never got a native rational number type, so you'll have to roll your own.
Prior to the IBM/360 (1964), mainframes sold for business purposes generally had no support for floating point arithmetic. They used fixed-point arithmetic. At the hardware level I think this is just integer math (I think?), but at a compiler level you can have different data types which are seen to be fractions with fixed accuracy. I believe I've read that COBOL had this feature since I-don't-know-how-far-back.
This sort of software fixed-point is still standard in SQL and many other places. Some languages, and many application-specific frameworks, have pre-existing fixed-point support. So it's also not accurate to say that you necessarily need to roll your own, though certainly in some contexts you'll need to.
And for money, you very much do not want arbitrary rational numbers. The important thing with money is that results are predictable and not fudgable. The problem with .1 + .2 != .3 is not that anyone cares about 4E-17 dollars, it's that they freak out when the math isn't predictable. Using rationals might be more predictable than using floats, but fixed-point is better still. And that's fixed-point base-10, because it's what your customers use when they check your work.
The type of any numeric literal is polymorphic over the `Num` class. That means literals can be floating point, fractional, or integers "for free", depending on where you use them in your program.
`0.75 + pi` is of type `Floating a => a`, but `0.75 + 1%4` is of type `Rational`.
Is there a way to exploit the difference between numeric precision underlying the neural network and the precision used to represent the financial transactions?
Was proposed in the late 90's by Mike Cowlishaw, but the rest of the standards committee would have none of it.
Proposal: https://github.com/littledan/proposal-bigdecimal
Slides: https://docs.google.com/presentation/d/1qceGOynkiypIgvv0Ju8u...
Give it =0.1+0.2-0.3 and it will see what you are trying to do and return 0.
Give it anything even slightly more complicated, such as =(0.1+0.2-0.3), and the special-casing won't trip, in this example displaying 5.55112E-17 or similar.
https://people.eecs.berkeley.edu/~wkahan/Mind1ess.pdf
(and plenty of other rants...)
There's also BCD (binary coded decimal) that can solve some problems by avoiding the decimal-to-binary conversions if you're mainly dealing with decimal values. For instance 0.2 can't usually be represented in binary but of course it poses no problem in BCD.
It is more common these days to use base-1000, instead, when you need exact decimal representations. You can fit three base-1000 "digits" in a 32-bit word, with two bits left over for sign plus any other flag you find useful. (One such use could be to make a zero in the second place indicate that the rest of the word is actually binary; then regular arithmetic works on such words.) Calculations in base-1000 are quite a lot faster than BCD.
Almost always when people think they need decimal, binary -- even binary floating-point, if the numbers are small enough -- is much, much better. Just be sure to represent everything as an integer number of the smallest unit, say pennies; and scale (*100, /100) on I/O.
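A minimal sketch of that convention: integers of pennies internally, scaling only at the I/O boundary. The helper names are mine, and this toy version ignores negative amounts and input validation.

```javascript
// Keep money as integer pennies; scale (*100, /100) only on I/O.
function parseCents(s) {
  const [dollars, cents = "0"] = s.split(".");
  return Number(dollars) * 100 + Number(cents.padEnd(2, "0").slice(0, 2));
}
function formatDollars(cents) {
  return `${Math.trunc(cents / 100)}.${String(cents % 100).padStart(2, "0")}`;
}

const total = parseCents("0.10") + parseCents("0.20");
console.log(total);                // 30, exact (unlike 0.1 + 0.2)
console.log(formatDollars(total)); // "0.30"
```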
BCD is/was super common in measurement equipment for internal calculations for this reason, and also because it is trivial to format for display (LED/LCD/VFDs) or text output (bus system, printer/plotter).
Whenever I see someone handling currency in floats, something inside me withers and dies a small death.
Meh. When used correctly in the right circumstances it is acceptable to use floats.
Here's an example. Suppose you are pricing bonds, annuities, or derivatives. All the intermediate calculations make essential use of floating point operations. The Black–Scholes model, for example, requires the logarithm, the exponential, the square root, and the CDF of the normal distribution. None of that is doable without floats.
Even for simpler examples it is sometimes okay to use floats. If you only ever need to store an exact number of cents, you can totally store the number of cents in a double. Integer operations are exact in IEEE-754 double arithmetic as long as the values involved stay below 2^53 or so. There's usually no benefit in doing so, but hey, it's possible.
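Since JS numbers are IEEE-754 doubles, this is easy to check from a console:

```javascript
// Integer arithmetic on doubles is exact below Number.MAX_SAFE_INTEGER;
// past that point, distinct integers collide onto the same double.
const cents = 10 + 20;                 // integer cents stored in doubles
console.log(cents === 30);             // true, exactly

console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991, i.e. 2^53 - 1
console.log(2 ** 53 === 2 ** 53 + 1);  // true: integer resolution is lost
```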
That said, yeah, when working with money in situations where money matters, some sort of decimal or rational datatype should be the rule, not the exception.
In trading, it's super common to use floating point arithmetic for decision logic since it's very fast and straightforward to write. The actual trade execution, however, almost always relies on integer arithmetic because then money is actually being used (and hence must be tracked properly).
It's not therefore inherently incorrect to do currency conversions with floats in some situations provided that the actual transaction execution relies on fixed precision or decimal arithmetic.
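A hypothetical sketch of that split; every name and threshold here is invented for illustration, not taken from any real trading system:

```javascript
// Decision logic in floats: fast, and a tiny error only nudges a threshold.
function shouldBuy(bidUsd, fairValueUsd) {
  return bidUsd < fairValueUsd * 0.995;
}
// Execution in integer cents: exact, because real money is tracked here.
function makeOrder(priceCents, quantity) {
  return { priceCents, quantity, totalCents: priceCents * quantity };
}

if (shouldBuy(101.37, 102.5)) {
  console.log(makeOrder(10137, 50)); // totalCents: 506850, exact
}
```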
He had decades of experience in the software development industry and I got the feeling that he'd seen the effect of this issue personally.
I still remember that warning well.
Hm, are you sure? I don't believe "rational" types which encode numbers as a numerator and denominator are typically used for currency/money.
If they were, would the denominator always be 100 or 1000? I guess you could use a rational type that way, although it'd be a small subset of what rational data types are intended for. But I guess it'd be "safe"? Not totally sure actually, one question would be if rounding works how you want when you do things like apply an interest percentage to a monetary amount. (I am not very familiar with rational data types, and am not sure how rounding works -- or even if they do rounding at all, or just keep increasing the magnitude of the denominator for exact precision, which is probably _not_ what you'd want with currency, for reasons not only of performance but of desired semantics).
You are correct that an IEEE-754 floating point type is inappropriate for currency. I believe for currency you would generally use a fixed-point type (rather than a floating point type), or a non-IEEE "arbitrary precision floating point" type like Ruby's BigDecimal. (Ruby also offers a Rational type: https://ruby-doc.org/core-2.5.0/Rational.html . This is a different thing than the arbitrary-precision BigDecimal; I have never used Rational or seen it used, and it is not generally used for money/currency.) Or maybe even a binary-coded decimal value? (Not sure if that's the same thing as the "arbitrary-precision floating point" of Ruby's BigDecimal or not.)
There are several possible correct and appropriate data encodings/types for currency that will have the desired precision and calculation semantics... I am not sure if a rational data type is one of them, and I don't believe it is common (and it would probably be much less performant than the options that are common). Postgres, for instance, does not have a "rational" type built in, although there appears to be a third-party extension for it. Yet postgres is obviously frequently used to store currency values! I believe many other popular RDBMSes have no rational data type support at all.
I'm not actually sure what domains rational data types are good for. Probably not anything based on scientific measurement either (the IEEE-754 floating point types ARE usually good for that; that is their use case!). The Wikipedia page sort of hand-wavily says "algebraic computation", and I don't know enough math to really know what that means. I have never myself used rational data types, I don't think! Although I was aware of them; they are neat.
Which specific places have you seen it used in?
And then of course there have been several other "x.abs() < 0.01" cases for various purposes. So I could definitely see that being an interesting experiment.
https://en.m.wikipedia.org/wiki/Arbitrary-precision_arithmet...
and gets harder when you want exact irrationals too https://www.google.com/search?q=exact+real+arithmetic
But maybe that just results in all the floating point weirdness again, just not for small rationals.
Let's say we need to do a comparison. Set
a = 34241432415/344425151233
and b = 45034983295/453218433828
Which is greater?

Or, even more fiendish, set
a = 14488683657616/14488641242046
and b = 10733594563328/10733563140768
which is greater?

By what algorithm would you do the computation, and could you guarantee me the same compute time as comparing 2/3 and 4/5?
(Shameless Common Lisp plug: http://clhs.lisp.se/Body/t_ration.htm)
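For what it's worth, the usual algorithm is cross-multiplication, which can be sketched with BigInt (assuming positive denominators). The catch, as the parent suggests, is that the multiplications grow with the size of the operands, so there is no constant-time guarantee as with 2/3 vs 4/5:

```javascript
// Compare a/b with c/d without dividing: a/b ? c/d  <=>  a*d ? c*b.
function compareRationals(a, b, c, d) {
  const lhs = a * d, rhs = c * b;
  return lhs < rhs ? -1 : lhs > rhs ? 1 : 0;
}

console.log(compareRationals(1n, 3n, 2n, 5n)); // -1, since 1/3 < 2/5
console.log(compareRationals(34241432415n, 344425151233n,
                             45034983295n, 453218433828n));
```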
2015.000000000000: https://news.ycombinator.com/item?id=10558871
What I’m trying to show here is that neither integers nor floating point are suitable for doing ‘simple’ financial math. But we get used to this Bresenhamming in integers and do not perceive it as solving an error-correction problem.
I have written a test framework and I am quite familiar with these problems, and comparing floating point numbers is a PITA. I had users complaining that 0.3 is not 0.3.
The code managing these comparisons turned out to be more complex than expected. The idea is that values are represented as ranges, so, for example, the IEEE-754 "0.3" is represented as ]0.299~, 0.300~[ which makes it equal to a true 0.3, because 0.3 is within that range.
Maybe the creator's theory is that people will search for 0.30000000000000004 when they run into it after running their code.
FWIW - the only way I can ever find my own website is by searching for it in my github repositories. So I definitely agree, it's not a terribly memorable domain.
That's why we need regular expressions support in every search box, browser history, bookmarks and Google included.
"simple" discussion: https://floating-point-gui.de/errors/comparison/
more advanced: https://randomascii.wordpress.com/2012/02/25/comparing-float...
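A sketch of the kind of comparison those links describe: a relative tolerance, with an absolute fallback near zero. The constants here are illustrative; the right tolerances depend on your computation.

```javascript
// Tolerant float comparison: absolute epsilon near zero, relative otherwise.
function nearlyEqual(a, b, relEps = 1e-9, absEps = 1e-12) {
  const diff = Math.abs(a - b);
  if (diff <= absEps) return true; // handles results at or near zero
  return diff <= relEps * Math.max(Math.abs(a), Math.abs(b));
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```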
Also the "field" of floating point numbers is not commutative†, (can run on JS console:)
x=0;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; }; x+=1
--> 1.000000000000001
x=1;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; };
--> 1
Although most of the time a+b===b+a can be relied on. And for most of the stuff we do on the web it's fine!††
† edit: Please s/commutative/associative/, thanks for the comments below.
†† edit: that's wrong! Replace with (a+b)+c === a+(b+c)
What is failing is associativity, i.e. (a+b)+c==a+(b+c)
For example
(.0000000000000001 + .0000000000000001 ) + 1.0
--> 1.0000000000000002
.0000000000000001 + (.0000000000000001 + 1.0)
--> 1.0
In your example, you are mixing both properties,
(.0000000000000001 + .0000000000000001) + 1.0
--> 1.0000000000000002
(1.0 + .0000000000000001) + .0000000000000001
--> 1.0
but the difference is caused by the lack of associativity, not by the lack of commutativity.
[1] Perhaps you must exclude -0.0. I think it is commutative even with -0.0, but I'm never 100% sure.
(Well, it's a big document. I searched for the string "addition", which occurs just 41 times.)
I failed, but I believe I can show that the standard requires addition to be commutative in all cases:
1. "Clause 5 of this standard specifies the result of a single arithmetic operation." (§10.1)
2. "All conforming implementations of this standard shall provide the operations listed in this clause for all supported arithmetic formats, except as stated below. Unless otherwise specified, each of the computational operations specified by this standard that returns a numeric result shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then rounded that intermediate result, if necessary, to fit in the destination’s format" (§5.1)
Obviously, addition of real numbers is commutative, so the intermediate result produced for addition(a,b) must be equal to that produced for addition(b,a). I hope, but cannot guarantee, that the rounding applied to that intermediate result would not then depend on the order of operands provided to the addition operator.
3. "The operation addition(x, y) computes x+y. The preferred exponent is min(Q(x), Q(y))." (§5.4.1). This is the entire definition of addition, as far as I could find. (It's also defined, just above this statement, as being a general-computational operation. According to §5.1, a general-computational operation is one which produces floating-point or integer results, rounds all results according to §4, and might signal floating-point exceptions according to §7.)
4. The standard encourages programming language implementations to treat IEEE 754 addition as commutative (§10.4):
> A language implementation preserves the literal meaning of the source code by, for example:
> - Applying the properties of real numbers to floating-point expressions only when they preserve numerical results and flags raised:
> -- Applying the commutative law only to operations, such as addition and multiplication, for which neither the numerical values of the results, nor the representations of the results, depend on the order of the operands.
> -- Applying the associative or distributive laws only when they preserve numerical results and flags raised.
> -- Applying the identity laws (0 + x and 1 × x) only when they preserve numerical results and flags raised.
This looks like a guarantee that, in IEEE 754 addition, "the representation of the result" (i.e. the sign/exponent/significand triple, or a special infinite or NaN value - §3.2) does not "depend on the order of the operands". §3.2 specifically allows an implementation to map multiple bitstrings ("encodings") to a single "representation", so it's possible that the bit pattern of the result of an addition may differ depending on the order of the addends.
5. "Except for the quantize operation, the value of a floating-point result (and hence its cohort) is determined by the operation and the operands’ values; it is never dependent on the representation or encoding of an operand."
"The selection of a particular representation for a floating-point result is dependent on the operands’ representations, as described below, but is not affected by their encoding." (both from §5.2)
HOWEVER...
6. §6, dealing with infinite and NaN values, implicitly contemplates that there might be a distinction between addition(a,b) and addition(b,a):
> Operations on infinite operands are usually exact and therefore signal no exceptions, including, among others,
> - addition(∞, x), addition(x, ∞), subtraction(∞, x), or subtraction(x, ∞), for finite x (§6.1)
OK.
>> x = 0;
0
>> for (let i=0; i<10000; i++) { x+=0.0000000000000000001; };
1.0000000000000924e-15
>> x + 1
1.000000000000001
>> 1 + x
1.000000000000001
You've identified a problem, but it isn't that addition is noncommutative.

1.0 + 1e-16 == 1e-16 + 1.0 == 1.0, as well as 1.0 + 1e-15 == 1e-15 + 1.0 == 1.000000000000001;

however, (1.0 + (1e-16 + 1e-16)) == 1.0 + 2e-16 == 1.0000000000000002, whereas ((1.0 + 1e-16) + 1e-16) == 1.0 + 1e-16 == 1.0
0.1 = 1 × 10^-1, but there is no integer significand s and integer exponent e such that 0.1 = s × 2^e.
When this issue comes up, people seem to often talk about fixing it by using decimal floats or fixed-point numbers (using some 10^x divisor). If you change the base, you solve the problem of representing 0.1, but whatever base you choose, you're going to have unrepresentable rationals. Base 2 fails to represent 1/10 just as base 10 fails to represent 1/3. All you're doing by using something based around the number 10 is supporting numbers that we expect to be able to write on paper, not solving some fundamental issue of number representation.
Also, binary-coded decimal is irrelevant. The thing you're wanting to change is which base is used, not how any integers are represented in memory.
Yes, there are some exceptions where you can reliably compare equality or get exact decimal values or whatever, but those are kind of hacks that you can only take advantage of by breaking the abstraction.
printf("%.4g", 1.125e10) --> 1.125e+10
printf("%.4f", 1.125e10) --> 11250000000.0000

Edit: Oh wait, it's listed in the main article under Raku. Forgot about the name change.
The other (and more important) matter, which is not even mentioned, is comparison. E.g. in “rational by default in this specific case” languages (Perl 6):
> 0.1+0.2==0.3
True
Or APL (numbers are floats there now! But comparison is special):

      0.1+0.2
0.3
      ⎕PP←20 ⋄ 0.1+0.2
0.30000000000000004
      (0.1+0.2) ≡ 0.3
1

In Raku, the comparison operator is basically a subroutine that uses multiple dispatch to select the correct candidate for handling comparisons between Rat's and other numeric objects.
And the length (but not value) winner is Go with: 0.299999999999999988897769753748434595763683319091796875
The explanation then goes on to be very complex. e.g. "it can only express fractions that use a prime factor of the base".
Please don't say things like this when explaining things to people, it makes them feel stupid if it doesn't click with the first explanation.
I suggest instead "It's actually rather interesting".
But this isn't a sales pitch. Some people are just bad at things. The explanation on that page require grade school levels of math. I think math that's taught in grade school can be objectively called simple. Some people suck at math. That's ok.
I'm very geeky. I get geeky things. Many times geeky things can be very simple to me.
I went to a dance lesson. I'm terribly uncoordinated physically. They taught me a very 'simple' dance step. The class got it right away. The more physically able got it in 3 minutes. It took me a long time to get, having to repeat the beginner class many times.
Instead of being self absorbed and expect the rest of the world to anticipate every one of my possible ego-dystonic sensibilities, I simply accepted I'm not good at that. It makes it easier for me and for the rest of the world.
The reality is, just like the explanation and the dance step, they are simple because they are relatively simple for the field.
I think such over-sensitivity is based on a combination of expecting never to encounter ego-dystonic events/words, which is unrealistic and removes many/most growth opportunities in life, and the idea that things we don't know can be simple (basically, reality is complicated). I think we've gotten so used to catering to the lowest common denominator, we've forgotten that it's ok for people to feel stupid/ugly/silly/embarrassed/etc. Those bad feelings are normal, feeling them is ok, and they should help guide us in life, not be something to run from or get upset if someone didn't anticipate your ego-dystonic reaction to objectively correct usage of words.
The idea that you care about the growth of people you are actively excluding doesn't make a whole lot of sense. In this example we're talking about word choice. The over-sensitivity from my point of view is in the person who takes offense that someone criticized their language and refuses to adapt out of some feigned interest for the disadvantaged party. The parent succinctly critiqued the word choice of the author and offered an alternative that doesn't detract from the message in the slightest.
The lowest common denominator is the person who throws their arms up when offered valid criticism.
On the other hand, people say "it's actually pretty simple" to encourage someone to listen to the explanation rather than to give up before they even heard anything, as we often do.
Yep, I've thrown 10,000 round house kicks and can teach you to do one. It's so easy.
In reality, it will be super awkward, possibly hurt, and you'll fall on your ass one or more times trying to do it.
I read the rest of your reply, but I also haven’t let go of the possibility that we’re both (or precisely 100.000000001% of us collectively) as thick as a stump.
My take is that this sentence is badly worded. How do these fractions specifically use those prime factors?
Apparently the idea is that a fraction 1/N, where N is a prime factor of the base, has a terminating expansion in that base.

So for base 10, at least 1/2 and 1/5 have to terminate.

And given that a product of terminating fractions also terminates, no matter what combination of those two you multiply, you'll get a number that terminates in base 10: 1/2 * 1/2 = 1/4 terminates, (1/2)^3 = 1/8 terminates, etc.

Same thing goes for the sum, of course.

So apparently those fractions use those prime factors by being products of their reciprocals, which isn't mentioned here but should have been.
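The rule is easy to check mechanically: 1/n terminates in a given base exactly when n (in lowest terms) has no prime factors beyond those of the base, i.e. when stripping every factor shared with the base leaves 1. A small sketch:

```javascript
// Does 1/n have a terminating expansion in the given base?
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }
function terminates(n, base) {
  let d = n; // assumes the fraction 1/n is already in lowest terms
  for (let g = gcd(d, base); g > 1; g = gcd(d, base)) d /= g;
  return d === 1;
}

console.log(terminates(8, 10));  // true:  1/8 = 0.125
console.log(terminates(3, 10));  // false: 1/3 = 0.333...
console.log(terminates(10, 2));  // false: 1/10 is inexact in binary
```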
Did the text change in the last 15 minutes?
Most languages have classes for that, some had them for decades in fact. Hardware floating point numbers target performance and most likely beat any of those classes by orders of magnitude.
And in case anyone's wondering about handling it by representing the repeating digits instead, here's the decimal representation of 1/12345 using repeating digits:
0.0[0008100445524503847711624139327663021466180639935196435803969218307006885378
69582827055488051842851356824625354394491697043337383556095585257189145402997164
84406642365330093155123531794248683677602268124746861077359254759011745646010530
57918185500202511138112596192790603483191575536654515998379910895099230457675172
13446739570676387201296071283920615633859862292426083434588902389631429728635074
92912110166059133252328878088294856217091940056703118671526933981368975293641150
26326447954637505062778452814904819765087079789388416362899959497772377480761441
87930336168489266909680032401782098015390846496557310652085864722559740785743215
87687322802754151478331308221952207371405427298501417577966788173349534224382341
02875658161198865937626569461320372620494127176994734710409072498987444309437019
03604698258404212231672742]

See also binary coded decimals.
That is true, but most humans in this world expect 0.1 to be represented exactly but would not require 1/3 to be represented exactly. Because they are used to the quirks of the decimal point (and not of the binary point).
This is a social problem, not a technical one.
https://en.wikipedia.org/wiki/Decimal_floating_point#IEEE_75...
But it's still not much used. E.g. for C++ it was proposed in 2012 for the first time
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n340...
then revised in 2014:
http://open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3871.ht...
...and... silence?
https://www.reddit.com/r/cpp/comments/8d59mr/any_update_on_t...
Seeing the occasional 0.300000000000004 is a good reminder that your 0.3858372895939229 isn't accurate either.
Not really. It would be really cool if fixed point number storage were an option... but I'm not aware of any popular language that provides it as a built-in primitive along with int and float, just as easy to use and choose as floats themselves.
Yes probably every language has libraries somewhere that let you do it where you have to learn a lot of function call names.
But it would be pretty cool to have a language with it built-in, e.g. for base-10 seven digits followed by two decimals:
fixed(7,2) i;
i = 395.25;
i += 0.01;
And obviously supporting any desired base between 2 and 16. Someone please let me know if there is such primitive-level support in any mainstream language!

With many operations this trade-off makes sense; however, it's critical to understand the limitations of the model.
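For illustration only, here is roughly what such a primitive could feel like as a tiny class. A real built-in would need rounding modes, overflow checks, mixed-scale arithmetic, and a proper decimal string parser (this sketch cheats by rounding through a double).

```javascript
// Toy fixed-point: `new Fixed(2)` keeps two base-10 places as an integer
// count of hundredths, so 395.25 + 0.01 is exact.
class Fixed {
  constructor(places, units = 0) {
    this.places = places;
    this.scale = 10 ** places;
    this.units = units; // integer count of the smallest step
  }
  static fromString(places, s) {
    return new Fixed(places, Math.round(Number(s) * 10 ** places));
  }
  add(other) { return new Fixed(this.places, this.units + other.units); }
  toString() { return (this.units / this.scale).toFixed(this.places); }
}

let i = Fixed.fromString(2, "395.25");
i = i.add(Fixed.fromString(2, "0.01"));
console.log(i.toString()); // "395.26"
```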
Pretty much all languages have some sort of decimal number. Few or none have made it the default because they're ignominiously slower than binary floating-point. To the extent that even languages which have made arbitrary precision integers their default firmly keep to binary floating-point.
You can strike the "none". Perl 6 uses rationals (Rat) by default, someone once told me Haskell does the same, and Groovy uses BigDecimal.
The opposite.
Decimal floating points have been available in COBOL from the 1960s, but seem to have fallen out of favor in recent days. This might be a reason why bankers / financial data remains on ancient COBOL systems.
Fun fact: PowerPC systems still support decimal-floats natively (even the most recent POWER9). I presume IBM is selling many systems that natively need that decimal-float functionality.
> 0.1 + 0.2;
< 0.30000000000000004
> (0.1 + 0.2).toPrecision(15);
< "0.300000000000000"
From Wikipedia: "If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string." --- https://en.wikipedia.org/wiki/Double-precision_floating-poin...

I never heard anyone claim that it would be simple to fix. But complaining? Yes, and rightfully so. Not every web programmer needs to know the hardware details, and doesn't want to, so it is understandable that this causes irritation.
?- A is rationalize(0.1 + 0.2), format('~50f~n', [A]).
0.30000000000000000000000000000000000000000000000000
A = 3 rdiv 10.

Fun times.
https://tools.ietf.org/html/rfc1123#page-13
One aspect of host name syntax is hereby changed: the restriction on the first character is relaxed to allow either a letter or a digit. Host software MUST support this more liberal syntax.
The mandate of a starting letter was for backwards compatibility, and mentions it in light of keeping names compatible with email servers and HOSTS files it was replacing.
Taking a numeric label risks incompatibility with antiquated systems, but I doubt it will affect any modern browser.
That said, it's one of my favorite trivia gotchas.
It used to be widespread because floating point processors were rare and any floating point computation was costly.
That's no longer the case, and everyone seems to immediately use floating point arithmetic without being fully aware of the limitations and/or without considering the precision needed.
I just use Zarith (bignum library) in OCaml for decimal calculation, and pretty content with performance.
I don't think many domains need decimal floating point that much, honestly, at least in finance and scientific calculations.
But I could be wrong, and would be interested in cases where decimal floating-point calculations are preferable over these done in decimal fixed-point or IEEE floating-point ones.
1. Constants have arbitrary precision
2. When you assign them, they lose precision (example 2)
3. You can format them at arbitrary precision in a string (example 3)
In that last example, they are getting 54 significant digits in base 10.
The status quo is that even Excel defaults to floats and wrong calculations with dollars and cents are widespread.
This is why you should never write "X == 0.1": it may not evaluate the way you expect.
Take that Rust and C ; )
My mental model of floating-point types is that they are useful for scientific/numeric computations where values are sampled from a probability distribution and there is inherently noise, and not really useful for discrete/exact logic.
https://discourse.julialang.org/t/posits-a-new-approach-coul...
I use integer or fixed-point decimal if at all possible. If the algorithm needs floats, I convert it to work with integer or fixed-point decimal instead. (Or if possible, I see the decimal point as a "rendering concern" and just do the math in integers and leave the view to put the decimal by whatever my selected precision is.)
But again, there are clearly plenty of use cases where it's insufficient, as you can vouch. I still don't think you can call it "disgusting", though.
It’s kind of a hallmark of bad design when you have to go into a long-winded explanation of why even trivial use-case examples have “surprising” results.
Fixed point is perfectly OK, if all your numbers are within a few orders of magnitude (e.g. money)
The way people rely on assumptions.