Though, it looks like Zig doesn't do function overloading either [1]. That's a disappointment. So you end up with `array_count`, `map_count`, etc. instead of just `count`. To my way of thinking, that's more work and reduces readability. It's one of the pain points of C vs. C++: needing `array_list_count` and `hash_map_add` instead of just saying `vec.insert(...)`.
The biggest ones for me in Rust are that it disallows extending traits for types you don't own, and the lack of function overloading. Neither of those is required for the borrow checker or safety; they're philosophical design decisions.
Another issue with function overloading is that it can make code more difficult to debug. If a bug is found in one of the overloaded functions, it can be difficult to determine which function is causing the issue. This can make it more time-consuming to fix the bug and can lead to frustration for the developer. I remember debugging an issue at OkCupid and we lost many hours due to debug information being collapsed for overloads, making it look like the wrong function was being called in the debugger.
Finally, function overloading can lead to code that is more prone to errors. When the same function name is used for multiple different purposes, it can be easy to accidentally call the wrong function with the wrong arguments, which can lead to unintended consequences or runtime errors.
In conclusion, good riddance. This is what makes Zig a great language, that it doesn't have garbage like function overloading.
This is a similar argument to Hungarian notation, IMHO. A decent LSP makes it trivial to see which function is being called. The other issue you mention is a problem with the debugger/compiler, not function overloading.
> Even finding what file the function is in can be a non-trivial task.
Not really, control-click and you’re at the function def.
> it can be difficult to determine which function is causing the issue. This can make it more time-consuming to fix the bug and can lead to frustration for the developer.
Not any harder than “method” overloads, which it sounds like Zig does have.
Personally I find the opposite. Having 20 names for functions that all equate to `len` requires more mental overhead.
We're talking about replacing C here. Requiring everyone depend on a hefty LSP to make sense of source code is a big ask.
Source code is text. I firmly believe that all semantic information of a piece of source code should be expressed as text in said piece of code. I already see code where the language allows the programmer to omit types from variables for 'brevity', and the assumption then is that everyone working on that code is using a fancy enough text editor that can stick in extra labels to show the missing type information. Absolutely baffling to me how people find that acceptable in any way.
The orphan rules are definitely necessary for coherence: otherwise, you could end up with a situation where two different crates try to implement the same trait for the same type, and there would have to be some (likely unwieldy) mechanism to resolve that.
Also, it's not that you can't implement any traits for types you don't own, it's that you can't implement traits you don't own for types you don't own. So you can still, e.g., create your own extension trait and implement it for whatever type you want. (But you can't do that while also creating a blanket impl for types implementing the original trait, which is a bit of a pain.) And, of course, if you need an object to implement a trait you don't own, you can define a newtype wrapper over it, but that can also be difficult to work with sometimes.
Perhaps this situation could be improved by one of the "crate-local impl" proposals that have been floating around. I'm not entirely sure how those would interact with existing implementations from the defining crates.
I think Julia, D, Nim, and others show it's possible, and generally easy, to work with open-ended type systems. I think Haskell does as well?
Though yes, those come at the cost of possible conflicts or user confusion, which is why I consider it a philosophical decision. It matches the decision to not allow user code to use trait specialization on stable, despite the stdlib having it.
Imports generally seem fine for controlling what gets used. Want a trait impl, import it into a module. Cargo crate features might also be a route to enforce package level decisions.
> And, of course, if you need an object to implement a trait you don't own, you can define a newtype wrapper over it, but that can also be difficult to work with sometimes.
Unfortunately that means you can't implement `Default` or `Clone` for a type you don't own. That prevents you from using derives on your newtypes as well, which means manually implementing `Clone`, or serde's traits, which is a PITA.
> Perhaps this situation could be improved by one of the "crate-local impl" proposals that have been floating around.
At least that'd make it somewhat easier to work with. It still wouldn't let end users/programmers mix and match types and traits from different libraries without a lot of unnecessary work.
Sorry, could you give an example of this? I can't find any way to extend existing types in those languages with some brief Googling.
> It matches with the decision to not allow user code to use trait specializations in stable despite the stdlib having it.
Keeping specialization unstable is much more a practical decision than a philosophical position: if they could, they would've stabilized it years ago. The problem is that specialization very quickly becomes unsound in combination with lifetimes. The compiler erases all lifetimes on types after checking them, since monomorphizing a new type for each lifetime would lead to an exponential explosion (this can't be changed at this point without redesigning the language). Therefore, users must not be able to specialize a trait impl on certain lifetime combinations (or certain lifetimes like 'static), since the compiler would have absolutely no way to tell which impl to use. And in turn, completely barring lifetime specialization becomes a daunting challenge with the existence of associated type projections and blanket impls. Specialization definitely isn't kept unstable just because they think users can't be trusted with it.
> Imports generally seem fine for controlling what gets used. Want a trait impl, import it into a module.
I don't think this would be compatible with blanket impls, since you couldn't just import every single potential impl in existence (and if you could, you'd run into conflicts). I suppose you could have a system of exporting impls, where to use a blanket impl you have to pass it another impl as input, but at that point you have a new idiosyncratic system that would scare away users and would likely be far more noisy than a good newtype system.
> Unfortunately that means you can't define 'default' or 'clone' traits for a type. That prevents you from using derives on your newtypes as well. That means manually implementing clone, or serde which is a PITA.
If a foreign crate doesn't implement Default or Clone for its types, then how is the compiler supposed to derive it for your local newtypes? It can't just look into the foreign type's fields, if they aren't all public. Are you often having to work with fully public foreign types?
Overall, I get that the type system can be pretty frustrating as it stands today, but I don't see any better alternative than building better tools for defining newtypes.
Zig doesn't have function overloading but it does have namespaced functions, so you can define your types and your "methods" on them.