Tloewald's point was that microseconds and nanoseconds are not SI base units, so standardizing on base units eliminates the ambiguity (the second alone is the SI base unit of time). Of course, additional double- and triple-checking mechanisms are probably still warranted to account for human fallibility.
That's fine until you start using floating-point values just to satisfy this SI fetish. (I guess you like farads and henries too.) It's much better to use fixed point and keep track of your multiplier. A sufficiently advanced type system could do the tracking for you, but at some point it will require careful thinking about precision.
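A minimal sketch of what that could look like, assuming a Rust-style type system where the scale is carried as a phantom type parameter (all names here are illustrative, not from any particular library):

```rust
use std::marker::PhantomData;
use std::ops::Add;

// Scale markers: the multiplier relative to one second lives in the type.
struct Micro; // 10^-6 s
struct Nano;  // 10^-9 s

struct Duration<Scale> {
    ticks: i64,
    _scale: PhantomData<Scale>,
}

impl<Scale> Duration<Scale> {
    fn new(ticks: i64) -> Self {
        Duration { ticks, _scale: PhantomData }
    }
}

// Addition is only defined for matching scales, so a Duration<Micro>
// can never be silently added to a Duration<Nano>.
impl<Scale> Add for Duration<Scale> {
    type Output = Duration<Scale>;
    fn add(self, rhs: Self) -> Self::Output {
        Duration::new(self.ticks + rhs.ticks)
    }
}

// Conversions are explicit and exact: integer math, no rounding.
impl Duration<Micro> {
    fn to_nanos(self) -> Duration<Nano> {
        Duration::new(self.ticks * 1_000)
    }
}

fn main() {
    let a = Duration::<Micro>::new(1_500); // 1500 microseconds
    let b = Duration::<Nano>::new(250);    // 250 nanoseconds
    // let wrong = a + b;               // does not compile: mismatched scales
    let total = a.to_nanos() + b;       // explicit conversion first
    println!("{} ns", total.ticks);     // 1500250 ns
}
```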
Floating point is treated differently by compilers, uses different parts of the processor, produces different kinds of errors, and has different performance characteristics. Perhaps there is some perspective from which that proposition is true?
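For what it's worth, the "different kinds of errors" part is easy to demonstrate: accumulating a duration as a binary float drifts, while the same tally in integer milliseconds stays exact. A rough, self-contained sketch (not a benchmark, just illustrating the rounding behavior):

```rust
fn main() {
    let mut float_seconds: f64 = 0.0;
    let mut int_millis: i64 = 0;

    // Accumulate "0.1 s" a million times with both representations.
    for _ in 0..1_000_000 {
        float_seconds += 0.1; // 0.1 has no exact binary representation
        int_millis += 100;    // exact integer arithmetic in a fixed unit
    }

    // The float accumulator is close to, but not exactly, 100000 s;
    // the fixed-point accumulator is exactly 100_000_000 ms = 100000 s.
    println!("float accumulator: {:.9} s", float_seconds);
    println!("fixed accumulator: {} ms ({} s)", int_millis, int_millis / 1000);
}
```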
Yes, but you eliminate those errors when you work within a single unit. NASA's problem came from converting between two different units (metric and imperial), not from arithmetic within one. You are clutching at straws here.