I would argue that what "number" implies depends on who you are. To a mathematician it might imply "real" (but then why not complex, etc.?), but to most of us a number is the thing you write down with digits - and for the vast majority of practical use cases in modern programming that's a perfectly reasonable definition. So, basically, rational numbers.
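As a quick illustration (not something the language above has to do, just a sketch using Python's existing fractions module): anything you can literally write down with digits is already an exact ratio of two integers.

```python
from fractions import Fraction

# Every digit-string denotes an exact rational, no rounding involved.
print(Fraction("3.14"))     # 157/50
print(Fraction("0.1"))      # 1/10  -- exact, unlike the float 0.1
print(Fraction(1, 3) * 3)   # 1     -- no accumulated error
```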
The bigger problem is precision. The right thing there, IMO, is to default to arbitrary precision (like Python does for ints but not floats), with the ability to constrain it as needed. It is also obviously useful to be able to constrain the denominator, e.g. to something like a power of 10 when you want plain decimals.
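A rough sketch of what "arbitrary precision by default, constrain on demand" could look like, again leaning on Python's fractions module; the round_to_decimal helper is purely illustrative, not any real API:

```python
from fractions import Fraction

# Default: exact arithmetic, nothing is ever silently rounded.
x = Fraction(1, 3) + Fraction(1, 6)   # exactly 1/2

# Constrain only when you actually need to, e.g. bound the denominator:
approx = Fraction("3.141592653589793").limit_denominator(1000)
print(approx)                          # 355/113

# Constraining the denominator to a power of 10 gives ordinary decimals
# (hypothetical helper for illustration only):
def round_to_decimal(q: Fraction, places: int) -> Fraction:
    scale = 10 ** places
    return Fraction(round(q * scale), scale)

print(round_to_decimal(Fraction(2, 3), 4))   # 6667/10000 == 0.6667
```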
The internal representation really shouldn't matter that much in most actual applications. Let game devs and people who write ML code worry about 32-bit ints and 64-bit floats.