That's exactly my point: when you internalize the diagram, you'll be able to reason confidently about what happens:
• In the case of "2.3 + 2.3", each "2.3" is “snapped” to the nearest representable value (green line in the diagram), then their sum is snapped to the nearest green line. In this case, because the two summands are equal and we're using binary floating-point, the result lands exactly on a green line. If you knew more about the details of binary32 aka float32, you could confidently say that "2.3" means 2.29999995231628417969 (https://float.exposed/0x40133333), be sure of what "2.3 + 2.3" would give (4.59999990463256835938 = https://float.exposed/0x40933333), and check that this is indeed the closest representable value to 4.6 (so yes, we can rely on 2.3 + 2.3 giving the same value as what “4.6” would be stored as, i.e. "2.3 + 2.3 == 4.6" evaluating to True). But even without learning the details you can go pretty far: for instance, you know you can rely on "x + x" and "2 * x" giving the same value for any (non-NaN) value x.
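If you want to watch the snapping happen (assuming Python here, which isn't stated in the thread), the standard `struct` module can round a binary64 float to the nearest binary32 value and back, showing exactly what float32 stores for "2.3":

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value.

    Packing as a 4-byte float rounds to the nearest float32;
    unpacking yields the binary64 number exactly equal to it.
    """
    return struct.unpack("f", struct.pack("f", x))[0]

a = to_f32(2.3)
print(f"{a:.20f}")       # 2.29999995231628417969
s = to_f32(a + a)        # doubling is exact in binary, so no new rounding
print(f"{s:.20f}")       # 4.59999990463256835938
print(s == to_f32(4.6))  # True: the sum is also the nearest float32 to 4.6
```

The same experiment works with any decimal literal; most of them, like 2.3, fall between green lines and get snapped.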
• I already gave the example of 100000000000000000000000.0 + 200000000000000000000000.0 ≠ 300000000000000000000000.0 above, but for the specific case of "x + x" and "2 * x", yes, we can rely on them evaluating to the same value (the only exception is NaN, since NaN compares unequal even to itself; if 2*x overflows to Infinity, x + x overflows to the same Infinity). Of course, the large integer x may itself not be representable exactly. Again, with the mental model, you'll be in a better position to state what you mean by "exactly".
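Both halves of that claim can be checked directly with plain Python floats (binary64): the decimal literals are snapped to nearby representable values, so the sum of the first two misses the snapped value of the third, yet doubling remains exact:

```python
x = 100000000000000000000000.0  # 1e23, snapped to the nearest binary64 value
y = 200000000000000000000000.0  # exactly 2*x: the same snapping, scaled up

print(int(x))  # 99999999999999991611392: what x actually stores
print(x + y == 300000000000000000000000.0)  # False: the sum isn't 3e23's snap point
print(x + x == 2 * x)  # True: both are exact doublings of x
```

So "x + x == 2 * x" survives even at magnitudes where none of the decimal literals involved is stored exactly.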