Incidentally: the fact that 0.1 + 0.2 does not equal 0.3 is not something you can quibble with; that behavior is absolutely reasonable for a floating point standard. It would be insanity to base it on base 10 instead of base 2.
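To see it concretely (standard Python, any recent version): the doubles nearest to 0.1 and 0.2 sum to something slightly above the double nearest to 0.3, and `Decimal` can show the exact binary value a double actually stores.

```python
from decimal import Decimal

# 0.1, 0.2 and 0.3 have no exact binary representation, so the
# nearest-double sum doesn't land on the nearest double to 0.3:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Decimal(float) shows the exact value the double 0.1 really holds:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```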
- Keep rational numbers; you already have unbounded integers, so that's to be expected. (Well, unbounded integers are visible, while unbounded denominators are mostly invisible: for example, it is very rare to explicitly test denominators.)
- Use decimal types with precision controlled at run time, like Python's `decimal` module. (That's still inexact, and it also needs heavy language and/or runtime support for varying precision.)
- Use interval arithmetic with ordinary floating point numbers. (This would be okay only if every user knows the pitfalls of IA: for example, comparison no longer returns just true or false, but also "unsure". There may be some clever language constructs that could make this doable, though.)
- Don't have real numbers at all, just integers. (I think this is actually okay for a surprisingly large set of use cases, but not all.)
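On the "invisible denominators" point: here's a small sketch with Python's `fractions.Fraction` running Newton's method for sqrt(2). Every step is exact, but the reduced denominator roughly squares each iteration, and nothing in ordinary code would ever surface that.

```python
from fractions import Fraction

# Newton's method for sqrt(2), in exact rational arithmetic.
# No rounding ever happens, but the denominator blows up quietly:
x = Fraction(1)
for _ in range(4):
    x = (x + 2 / x) / 2
    print(x.denominator)   # 2, 12, 408, 470832
```

These are the classic convergents 3/2, 17/12, 577/408, 665857/470832; a few more iterations and the denominator has thousands of digits.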
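For the decimal option, Python's `decimal` module already shows what "precision controlled at run time" looks like: precision is a property of a context object you can set or temporarily override, not of the type. It also shows why this is still inexact.

```python
from decimal import Decimal, getcontext, localcontext

# Precision is a runtime knob on the arithmetic context:
getcontext().prec = 10
print(Decimal(1) / Decimal(3))     # 0.3333333333

with localcontext() as ctx:        # temporarily use more digits
    ctx.prec = 40
    print(Decimal(1) / Decimal(3))

# Still inexact: 1/3 has no finite decimal expansion at any precision.
print(Decimal(1) / Decimal(3) * 3 == Decimal(1))   # False
```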
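To make the interval-arithmetic pitfall concrete, here is a deliberately minimal, hypothetical `Interval` class (not a real library; real IA would also round the endpoints outward): once two intervals overlap, "less than" genuinely has no boolean answer.

```python
# Hypothetical minimal interval type; endpoints are not outward-rounded,
# which a real interval-arithmetic library would do.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def less_than(self, other):
        if self.hi < other.lo:
            return True
        if self.lo > other.hi:
            return False
        return "unsure"        # overlapping intervals: undecidable

a = Interval(0.1, 0.1000001)   # values known only up to some error
b = Interval(0.2, 0.2000001)
print((a + b).less_than(Interval(0.5, 0.6)))   # True
print((a + b).less_than(Interval(0.3, 0.4)))   # "unsure"
```

An `if x.less_than(y):` written naively would treat `"unsure"` as truthy, which is exactly the kind of trap the comment above is worried about.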
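The integers-only option is basically fixed point, and money is the standard example: store cents as integers and the 0.1 + 0.2 problem disappears entirely. The price is that division forces you to pick a rounding rule explicitly.

```python
# Fixed point via plain integers: amounts in cents, not dollars.
# 0.1 + 0.2 becomes 10 + 20, which is exactly 30 -- no rounding at all.
total_cents = 10 + 20
print(total_cents == 30)            # True

# Division is where a rounding policy must be chosen by hand:
share, leftover = divmod(100, 3)    # split $1.00 three ways
print(share, leftover)              # 33 1  (someone gets the extra cent)
```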
[1] https://python-history.blogspot.com/2009/02/early-language-d... (search for "anecdote")