> If you have a lot of control of the scale of the numbers you’re computing with, you could use fixed point instead of floating point.
You could, but why take on the extra complication (and the ensuing bugs) of tracking an implied fixed point in your integers by hand when a decimal float type gives you that for free?
As I said, it's not a silver bullet; it's just for the majority case (much like 32 and 64-bit integers are for the majority case when dealing with whole numbers).
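To make the contrast concrete, here is a minimal Python sketch (the prices and tax rate are purely illustrative). It shows the binary-float representation problem, the manual scale-tracking that fixed point requires, and how a decimal type carries the scale for you:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so decimal fractions drift:
print(0.1 + 0.2)  # 0.30000000000000004

# Fixed point: store cents as integers, with an implied decimal point
# that you must track by hand at every multiply and divide.
price_cents = 1999                       # $19.99
tax_cents = price_cents * 825 // 10000   # 8.25% tax; manual rescaling
print(tax_cents)                         # 164, i.e. $1.64 (truncated!)

# Decimal float: the scale travels with the value, and rounding is explicit.
price = Decimal("19.99")
tax = (price * Decimal("0.0825")).quantize(Decimal("0.01"))
print(tax)                               # 1.65
```

Note that the two approaches even disagree here (164 cents vs 1.65): the integer version silently truncates where the `Decimal` version rounds, which is exactly the kind of bug the manual implied-point bookkeeping invites.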