fn foo<T: Trait>(_: T) {}
fn foo(_: impl Trait) {}
fn foo(_: &dyn Trait) {}
These three definitions have two different behaviors between them, and the choice affects both the speed of the compiled code and the speed of compilation, depending on how they are called. The first one is what the language calls generics: they are always monomorphized, which means that if you have three calls to `foo` with three different types (each implementing `Trait`), the compiler will expand them into three different functions, one per concrete type (code expansion).
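To make monomorphization concrete, here is a minimal sketch (the trait and the impls are made up for illustration) where one generic function is called with three types, so the compiler emits three specialized copies:

```rust
// A made-up trait; each impl just reports its own type name.
trait Trait {
    fn name(&self) -> &'static str;
}

impl Trait for i32 {
    fn name(&self) -> &'static str { "i32" }
}
impl Trait for f64 {
    fn name(&self) -> &'static str { "f64" }
}
impl Trait for &str {
    fn name(&self) -> &'static str { "&str" }
}

fn foo<T: Trait>(x: T) -> &'static str {
    x.name()
}

fn main() {
    // Three calls with three distinct types: the compiler generates
    // three separate versions (foo::<i32>, foo::<f64>, foo::<&str>).
    println!("{}", foo(1));    // i32
    println!("{}", foo(1.0));  // f64
    println!("{}", foo("hi")); // &str
}
```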
The second one uses a separate syntax-level feature (`impl Trait`), which was mainly added to introduce static opaque return types: the function decides what the underlying return type will be, but the caller can only interact with it through the trait's API.
[Aside] This is useful for cases like the following:
fn it() -> impl Iterator<Item = i32> {
vec![1, 2, 3].into_iter()
}
where you would otherwise have to spell out the concrete type:
fn it() -> std::vec::IntoIter<i32> {
vec![1, 2, 3].into_iter()
}
This example doesn't seem like much, but if you want to add a `map()` call you start to see the benefit:
fn it() -> impl Iterator<Item = i32> {
vec![1, 2, 3].into_iter().map(|x| x * x)
}
fn it() -> std::iter::Map<std::vec::IntoIter<i32>, fn(i32) -> i32> {
vec![1, 2, 3].into_iter().map((|x| x * x) as fn(i32) -> i32)
}
(In fact, the closure's real type can't be named at all, so to even write this signature you have to coerce the closure to a function pointer.)
The more types you nest, the more the benefit comes into play. [end of aside]
Now, with that out of the way: the type of an `impl Trait` argument is decided by the caller (not the function), so argument-position `impl Trait` is implemented internally exactly the same way as type generics. The only differences are arguably nicer syntax in the definition and not being able to specify the type using the turbofish. For all intents and purposes, those two are the same feature.
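A quick sketch of that equivalence (trait and type names are invented here): both versions monomorphize the same way and behave identically, but only the generic one accepts a turbofish:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

// Generic version: callable with an explicit turbofish, e.g. area_of::<Square>(...).
fn area_of<T: Shape>(s: T) -> f64 {
    s.area()
}

// impl Trait version: compiles to the same monomorphized code,
// but `area_of_impl::<Square>(...)` would be a compile error.
fn area_of_impl(s: impl Shape) -> f64 {
    s.area()
}

fn main() {
    println!("{}", area_of(Square(3.0)));           // 9
    println!("{}", area_of_impl(Square(3.0)));      // 9
    println!("{}", area_of::<Square>(Square(2.0))); // turbofish works here: 4
}
```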
The third function is different: it uses a virtual table, with everything that implies. There's type erasure; there's only a single copy of the function in the compiled code (which makes compilation faster because the compiler has less work to do); and calling it can be slower because the final executable has to do some pointer chasing through the vtable to find each method, instead of knowing the call target directly.
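A minimal sketch of dynamic dispatch (the trait and structs are made up): one compiled function handles every implementer, and type erasure also lets you mix concrete types in a single collection:

```rust
trait Speak {
    fn speak(&self) -> String;
}

struct Dog;
struct Cat;

impl Speak for Dog {
    fn speak(&self) -> String { "woof".into() }
}
impl Speak for Cat {
    fn speak(&self) -> String { "meow".into() }
}

// Only one copy of this function exists in the binary; the concrete
// type is erased, and `speak` is resolved at runtime through a vtable.
fn announce(animal: &dyn Speak) -> String {
    animal.speak()
}

fn main() {
    // Type erasure also allows heterogeneous collections.
    let animals: Vec<Box<dyn Speak>> = vec![Box::new(Dog), Box::new(Cat)];
    for a in &animals {
        println!("{}", announce(a.as_ref()));
    }
}
```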
All of this to say: whether you use `fn foo<T: Trait>(_: T)` or `fn foo(_: &dyn Trait)` affects both compilation and execution time, so you have to be aware of the distinction. This means that if you're not aware, you might end up with slower code than you would get from a compiler (like Swift's, for example) that relies on heuristics to decide between static and dynamic dispatch. But it also means that your code's performance characteristics won't change all of a sudden because you modified a tangentially related part of the code and crossed some threshold.
Another example is `.clone()`: is it slow? The answer is always "it depends". You might be cloning an `Arc`, which is cheap, or you could be cloning a 10MB string, which is slow. But because we train ourselves to see clone as slow, we might be worried or annoyed by cloning an `Arc`. We could make `Arc` `Copy`, but then you would have less control over where it gets copied, which would make it harder to keep track of where the reference count gets incremented. The language also doesn't automatically implement `Copy` for small structs, even though it could. Doing so would make that part of the language easier to learn (you wouldn't need to learn about derives early on), at the cost of baffling behavior (you might add a field and suddenly your struct isn't considered "small" anymore).
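The `Arc` case can be sketched like this: cloning the `Arc` only bumps an atomic reference count, while cloning the data it points at copies every byte (the buffer size here is just for illustration):

```rust
use std::sync::Arc;

fn main() {
    // A large shared buffer behind an Arc.
    let big = Arc::new(vec![0u8; 10 * 1024 * 1024]);

    // Cheap: copies a pointer and increments the reference count;
    // the 10 MB of data is NOT duplicated.
    let cheap = Arc::clone(&big);
    assert_eq!(Arc::strong_count(&big), 2);

    // Expensive: this clones the underlying Vec, copying all 10 MB.
    let expensive: Vec<u8> = (*big).clone();
    assert_eq!(expensive.len(), big.len());

    drop(cheap);
    assert_eq!(Arc::strong_count(&big), 1);
    println!("refcount back to {}", Arc::strong_count(&big));
}
```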
Yet another example: you also have access to `Cow<'_, str>`, which lets your code handle both borrowed and heap-allocated strings in the same way, at the cost of noisier signatures, where the naïve thing to do would be to use `String` everywhere.
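A small sketch of the `Cow` pattern (the function `ensure_trailing_slash` is a hypothetical example, not a standard API): the input is returned as-is when no change is needed, and an allocation happens only in the branch that modifies it:

```rust
use std::borrow::Cow;

// Hypothetical helper: one signature covers both the borrowed and
// the owned case, so callers with a `String` or a `&str` both work.
fn ensure_trailing_slash(path: &str) -> Cow<'_, str> {
    if path.ends_with('/') {
        // No allocation: just borrow the input.
        Cow::Borrowed(path)
    } else {
        // Allocation happens only when we actually modify the string.
        Cow::Owned(format!("{path}/"))
    }
}

fn main() {
    assert!(matches!(ensure_trailing_slash("dir/"), Cow::Borrowed(_)));
    assert!(matches!(ensure_trailing_slash("dir"), Cow::Owned(_)));
    println!("{}", ensure_trailing_slash("dir"));
}
```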
My personal wish is for Rust to remain as explicit as possible, but to use lints to emit suggestions in the cases where a more "magic" language would change the emitted code. That way the code documents its own behavior, with fewer surprises.