If your calculation turns out to be incorrect, it doesn't matter how efficient it is. Correct FP calculation requires error analysis, which is a concrete definition of "how to use it". If you mostly use packaged routines like LAPACK, then you don't exactly need FP knowledge; you need routines that internally use FP correctly.
No, it does not. Please don't make it seem harder than it needs to be.
In 99% of applications, if you don't do anything stupid, you are completely fine.
If you care about precision so much that the last digit makes a difference to you, you are probably one of very few cases. I remember somebody giving an example with the circumference of the solar system, showing that the uncertainty of the value of Pi available as FP causes a couple of centimeters of error at the orbit of Pluto, or something like that.
(Edit: found it: https://kottke.org/16/03/how-many-digits-of-pi-does-nasa-use)
Most of the time, floating point input already comes with its own error; you are just adding calculation error on top of the input uncertainty. But the calculation error is so much smaller than the input error that in most cases it is safe to ignore.
For example, if you program a drone, you have readings from an IMU which have nowhere near the precision of the double, or even float, you will be working with.
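As a rough illustration (the noise figure below is an assumed, generic MEMS-gyro spec for the sake of the example, not from any particular datasheet), the relative rounding error of even single-precision floats sits orders of magnitude below the sensor's own noise:

```python
# Assumed, illustrative MEMS gyro noise level (not a real datasheet value).
gyro_noise_std = 1e-2          # rad/s of noise on each reading

# Relative rounding error of IEEE 754 single precision (machine epsilon).
fp32_eps = 2.0 ** -23          # ~1.19e-7

# For a reading of, say, 0.5 rad/s, the worst-case representation error:
reading = 0.5
fp_error = reading * fp32_eps  # ~6e-8 rad/s

# Rounding error is roughly five orders of magnitude below the sensor noise.
print(fp_error / gyro_noise_std)
```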
There are also various techniques of ordering the operations to minimize the resulting error. If you are aware of which kinds of operations in which situations can cause a huge resulting error, it is usually very easy to avoid them.
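A minimal sketch of what ordering buys you (the values are contrived to make the effect obvious): adding many small terms to a large accumulator one at a time loses them entirely, while summing the small terms first preserves their contribution:

```python
big = 1.0e16
small = 1.0   # below big's representable step (ulp at 1e16 is 2.0)

# Naive order: each `small` vanishes against `big` due to rounding.
naive = big
for _ in range(1000):
    naive += small

# Reordered: accumulate the small terms among themselves first, add once.
reordered = big + sum(small for _ in range(1000))

print(naive - big)      # 0.0 -- all 1000 additions were lost
print(reordered - big)  # 1000.0
```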
The only really special case is subtracting two values that were calculated separately and match almost exactly. This should be avoided.
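For example (a standard textbook case, not tied to any application above): computing 1 - cos(x) for small x subtracts two nearly equal values and loses essentially all significant digits, while the algebraically equivalent 2*sin(x/2)^2 avoids the subtraction entirely:

```python
import math

x = 1e-8
# Naive: cos(x) rounds to exactly 1.0 here, so the subtraction gives 0.0
# even though the true answer is about x**2 / 2 = 5e-17.
naive = 1.0 - math.cos(x)

# Rewritten to avoid subtracting nearly equal values:
stable = 2.0 * math.sin(x / 2.0) ** 2

print(naive)   # 0.0 -- every significant digit cancelled
print(stable)  # ~5e-17, close to the true value
```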
This is a common misconception. Or rather, "anything stupid" is much broader than you think.
The main issue with FP arithmetic is that effectively every calculation incurs an error. You seem to be aware of catastrophic cancellation, but it is problematic not because of the cancellation itself but because the error is proportional to the input, which can vary wildly after the cancellation. Non-catastrophic cancellation can still cause big errors in the long run across a large range of magnitudes. A long-running process thus requires some guard against FP errors, even if it's just a reality check (like: every object in the game should stay inside the skybox).
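A small sketch of that accumulation (the timestep and iteration count are made up for illustration): repeatedly adding a timestep drifts away from the value you get by recomputing from an integer counter, and recomputing is one cheap guard against the drift:

```python
dt = 0.1            # assumed timestep; 0.1 is not exactly representable
steps = 1_000_000

# Accumulating: one rounding error per addition, and they pile up.
t_accum = 0.0
for _ in range(steps):
    t_accum += dt

# Guard: recompute time from the integer step counter instead,
# paying only a single rounding error.
t_recomputed = steps * dt

print(t_accum)        # drifted away from 100000.0
print(t_recomputed)   # 100000.0
```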
Alternatively, you make sure not to run a long-running process. I don't think you fly a drone for days? You probably wouldn't fly a drone that long even without FP, but using FP does prohibit many things and thus affects your decisions. For example, game simulations tend to avoid FP because it can accumulate errors, and it is also hard to do FP calculations in a repeatable way (in typical environments). If I chose to use FP for simulations, I would effectively give up running the same simulation on both servers and clients---that kind of thing.
My controller, running a moving horizon estimator to simulate the thermal system of an espresso machine and Kalman filters to correct model parameters against measurements, runs 50 times a second and works fine for days, thank you, using floats on an STM32.
I have written a huge amount of embedded software over the past 20 years and I have yet to do any error analysis focused on actual floating point inaccuracy. I have never seen reduced performance of any of my devices due to FP inaccuracy. That's because I know what kinds of things can cause those artifacts and I just don't do that stupid shit.
What you describe are just no-nos you should not be doing in software.
There exists a huge amount of software that runs for years at a time, heavy in FP math, and it works completely fine.
If drones were not limited in flight time, they would also work fine, because what you describe are just stupid errors, not an inherent consequence of using FP math.
Game simulations avoid FP mostly because it is expensive. Fixed point is still faster, and you can approximate the results of most calculations or use lookup tables to get a result that is just fine.
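A minimal sketch of the fixed-point idea (the 16.16 format here is an arbitrary choice for illustration): positions become integers, so the arithmetic is exact, cheap, and bit-identical on every machine:

```python
# 16.16 fixed point: values stored as integers, 1 unit = 1/65536.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Convert a float to 16.16 fixed point (rounding to nearest unit)."""
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values, rescaling the result."""
    return (a * b) >> FRAC_BITS

def to_float(a: int) -> float:
    """Convert back to float for display only."""
    return a / ONE

# 1.5 * 2.0 == 3.0, exactly, as pure integer math.
print(to_float(fixed_mul(to_fixed(1.5), to_fixed(2.0))))
```

Because everything is integer arithmetic, two machines running the same simulation stay in lockstep by construction.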
Some games (notably Quake 1, whose source code I worked on after it became available) do require that the simulation results between client and server stay in lockstep, but this is in no way a necessity. Requiring lockstep between client and server causes lag.
What newer games do is let the client and server run their simulations independently, even when they don't agree exactly, and then reconcile the results from time to time.
If you have ever seen another player "stutter" mid-flight, that was exactly this happening -- the client and the server coming to an agreement on what the actual reality is and moving stuff into its right place.
In that light, there is absolutely no problem if, after a couple of frames, an object differs from its official position by a tenth of a pixel; it is going to get corrected soon.
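The reconciliation described above can be sketched roughly like this (a hypothetical, simplified scheme: the function name, blend factor, and positions are all made up for illustration). Instead of snapping to the server's authoritative state, the client blends toward it over a few frames:

```python
def reconcile(client_pos: float, server_pos: float, blend: float = 0.2) -> float:
    """Move a fraction of the way toward the server's authoritative position.

    Hypothetical sketch: real games blend per-axis, per-entity, and often
    re-apply unacknowledged inputs on top of the server state.
    """
    return client_pos + blend * (server_pos - client_pos)

pos = 10.0           # client's locally predicted position
server = 10.3        # authoritative position received from the server

for _ in range(20):  # error shrinks geometrically toward the server value
    pos = reconcile(pos, server)

print(abs(pos - server) < 0.01)  # True -- converged without a visible snap
```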
Go do your "error analysis" if you feel so compelled, but don't tell people it is necessary or the software isn't going to work. Because that's just stupid. Maybe NASA does this. You don't need to.
In this case, "doing anything stupid" includes equality checks. This is the sort of footgun that a huge number of people are going to run into.
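The classic instance of that footgun, with the usual fix of comparing against a tolerance instead of testing exact equality:

```python
import math

# Both sides carry different rounding errors, so exact equality fails.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Compare with a tolerance instead (math.isclose is stdlib since Python 3.5).
print(math.isclose(0.1 + 0.2, 0.3))  # True
```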