Considered by whom?
The example gjulianm gave, where == is safe if nulls are flying around but calling .equals() on a null expression results in an exception, seems a clear counterexample.
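A minimal Java sketch of that behaviour (variable names are placeholders; the semantics are standard Java):

```java
public class NullCompare {
    public static void main(String[] args) {
        String a = null;
        String b = "x";

        // Reference comparison with == is always safe, even when a is null.
        System.out.println(a == b);  // prints "false"

        // Calling a method on a null expression throws NullPointerException.
        try {
            System.out.println(a.equals(b));
        } catch (NullPointerException e) {
            System.out.println("NPE: a was null");
        }
    }
}
```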
I’d also argue that if your code is mathematical in nature, it is both quicker and more reliable to write, review, and maintain concise, mathematically natural syntax like a+b*c than verbose alternatives like a.add(b.multiply(c)).
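To make the verbosity concrete, here is the same expression written with java.math.BigDecimal (the variables are placeholders): without operators, the precedence of * over + has to be encoded by hand as nesting.

```java
import java.math.BigDecimal;

public class Arithmetic {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1");
        BigDecimal b = new BigDecimal("2");
        BigDecimal c = new BigDecimal("3");

        // The method-call form of a + b * c: nesting replaces precedence.
        BigDecimal r = a.add(b.multiply(c));
        System.out.println(r);  // prints "7"
    }
}
```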
Your last point actually highlights the problem - it is not more reliable because you are assuming what "+" and "*" do.
EDIT: An old blog post about the issue: http://cafe.elharo.com/programming/operator-overloading-cons...
How is that any different to assuming what add() and multiply() do?
In many ways, I’d say the latter is worse, because it’s not immediately obvious from just the function names whether the operation mutates the object it’s called on or returns a new object with the result while the originals remain unchanged. Both behaviours can be useful. Both are widely used in libraries, with those exact function names. You just have to read the documentation and learn how each library you use works, which is particularly... entertaining... if your project uses multiple libraries and they don’t all follow the same convention.
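Both conventions sit side by side in Java's own standard library, under the very same method name: BigInteger.add returns a new object and leaves the receiver untouched, while List.add mutates the receiver in place.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class AddConventions {
    public static void main(String[] args) {
        // BigInteger is immutable: add() returns a new value.
        BigInteger x = BigInteger.ONE;
        BigInteger y = x.add(BigInteger.TEN);
        System.out.println(x);  // prints "1" -- the receiver is unchanged
        System.out.println(y);  // prints "11"

        // List.add() mutates the receiver and returns a boolean instead.
        List<Integer> xs = new ArrayList<>();
        xs.add(42);
        System.out.println(xs); // prints "[42]"
    }
}
```

Same name, opposite semantics: nothing in the call site tells you which one you're getting.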
For + and * operators, someone already decided what the convention should be. If you just make them work like the built-in versions, the semantics are already intuitive to anyone who studied basic arithmetic at school.
Edit: I did take a look at the article you cited. It appears to be an elaborate troll, actually consisting of little more than a set of unlikely examples (I wouldn’t write a function called divide() to “divide” two matrices, so why would I suddenly feel the urge to write a / operator?), a set of assertions without proof (some of which are just begging the question), and a final ad hominem that conveniently discards one of the most obvious demonstrations that operator overloading can be useful on the basis that anyone making that argument just doesn’t understand (and ignores counterexamples like OCaml, where basic arithmetic operators really are different for basic numerical types, which is a significant pain point in the language).
I think it's exactly this that is misleading - your intuition may not be the same as other developers'. I think people assume more from "+" than they would from "plus()", and if you don't think about it you may falsely assume you are using a language built-in.
I'm not saying that operator overloading can't be useful, but in C++ developers did create horrible code by using /, *, + in their classes to 1) show off that they could and 2) save typing. If they had been forced to name their methods, I hope there is a good chance they would have done better than divide(), multiply() etc.
Regarding the article, I agree it is somewhat guilty of hyperbole, but I don't think calling it a troll is remotely fair.
BTW I don't understand what you mean by "can be useful on the basis that anyone making that argument just doesn’t understand", which makes me feel like I've wandered into a double joke ;)
> it is not more reliable because you are assuming what "+" and "*" do.
You also assume what "add" and "multiply" do.
To be fair, I think a lot of my prejudice against overloading operators is summed up by Bruce Eckel (who explains why it is arguably misguided in the case of Java):
"[Java designers] thought operator overloading was too hard for programmers to use properly. Which is basically true in C++, because C++ has both stack allocation and heap allocation and you must overload your operators to handle all situations and not cause memory leaks. Difficult indeed. Java, however, has a single storage allocation mechanism and a garbage collector, which makes operator overloading trivial -- as was shown in C# (but had already been shown in Python, which predated Java). But for many years, the party line from the Java team was 'Operator overloading is too complicated.'"