I think that’s true. Expensive copies should never have been implicit. There was a story some time ago about a single keypress in the address bar of Chrome causing thousands of memory allocations. The culprit: lots of std::string arguments up and down the call stack.
Rust gets this right, with the hindsight of C++’s example: “a = b” is a move operation by default and clone() is always explicit, except for plain data types where copying is literally memcpy — and those are clearly marked as such by the type system.
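A minimal Rust sketch of that distinction (the type and function names here are just illustrative):

```rust
// Plain data marked `Copy`: assignment duplicates the bits, nothing more.
#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

fn demo() -> (String, i32, i32) {
    let s = String::from("hello");
    let t = s;            // move: `s` may no longer be used afterwards
    // let x = s;         // would be a compile error: value moved out of `s`
    let u = t.clone();    // the heap copy happens only when explicitly requested

    let a = Point { x: 1, y: 2 };
    let b = a;            // bitwise (memcpy-like) copy; `a` remains usable
    (u, a.x, b.y)
}

fn main() {
    let (u, ax, by) = demo();
    println!("{u} {ax} {by}");
}
```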
But allocations aren't the same as copies, and the argument for reference semantics has always been that implicit copies are problematic. In your std::string example, having that many String copies in a Java program would be similarly terrible (and this sometimes happens by accident because of abstraction layers that hide all the copying going on under the covers).
I do think Rust gets a lot of stuff right, but Rust's cognitive load is broadly recognized. I tend to see it as C++ with a lot fewer foot guns. ;-)
Java programs make "ridiculous amounts of implicit allocations" because allocations are cheap in Java. And they need to be cheap because Java doesn't have value semantics so it leans hard on escape analysis + cheap allocations.
(If anything, Java's problem is the opposite: without value types, even a pair of ints costs an allocation unless escape analysis saves you.)
I agree with the rest of your comment, although I think most of Rust's "cognitive load" amounts to borrow-checker-vs-garbage-collection. You could envision a Rust with explicit allocations and a GC, and that language would have a "cognitive load" approaching that of Go while also being a fair bit more performant insofar as people can much more easily reason about allocations and thus performance.
Yes, but that's kind of the point, right? Implicit allocation isn't really a problem because a runtime that optimizes the allocations magically for you is a lot easier to build than a runtime that optimizes whether you really need to be copying objects as much as you do.
Java has a tremendously good GC, so can cope with lots of allocations. Go has an OK one, so needs some help (but mollifying it often pays dividends elsewhere in locality and memory usage too). C++ has your default system heap, good luck.
Note that a move can still do a copy; in fact, Rust is kinda notorious for generating more on-stack memory copy operations than C++. It’s slowly improving, but it can still be surprisingly bad in some cases.
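A contrived sketch of why (names are mine): a move only invalidates the source, but the bits still have to travel, so moving a large by-value type around can show up as real memcpys in the generated code unless the optimizer elides them.

```rust
// A 4 KiB struct that lives entirely on the stack: no heap allocation,
// but every move of it is semantically a 4096-byte bit copy.
struct Big {
    data: [u8; 4096],
}

fn make() -> Big {
    let mut b = Big { data: [0; 4096] };
    b.data[0] = 42;
    b // "moved" out: a whole-struct copy unless the compiler optimizes it away
}

fn consume(b: Big) -> u8 {
    b.data[0] // takes `Big` by value, i.e. another potential 4 KiB copy
}

fn main() {
    let b = make();     // move, not an allocation
    let c = consume(b); // move again; `b` is gone after this line
    println!("{c}");
}
```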
what do you mean by this? If I say `let x = 5; let y = x;` in rust, that's a "plain data type copy" of a stack value, but memcpy is usually used to copy heap memory. What connection between copying of primitive simple stack values and memcpy are you suggesting here?
Try playing with memcpy on Godbolt and you'll find that the compiler lowers it to a single mov instruction when the size is small, to some movdqu/movups when it's slightly larger, and to an actual function call only when the size is huge.
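A quick way to try this in Rust terms (a sketch; `*src` on a fixed-size array is the same blit as memcpy, and the exact instructions depend on target and compiler version): paste something like this into Godbolt with optimizations on.

```rust
// Per the comment above: small copies typically lower to a plain mov,
// medium ones to movups/movdqu, and only huge ones to a real memcpy call.
pub fn copy4(src: &[u8; 4]) -> [u8; 4] { *src }          // small
pub fn copy16(src: &[u8; 16]) -> [u8; 16] { *src }       // medium
pub fn copy4096(src: &[u8; 4096]) -> [u8; 4096] { *src } // huge

fn main() {
    let a = [1u8, 2, 3, 4];
    println!("{:?}", copy4(&a));
}
```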
> memcpy is usually used to copy heap memory
memcpy is often used in low-level serialization / deserialization code since you can't just cast a buffer pointer to a uint32_t pointer and dereference that; the solution is memcpy between variables that are often both on the stack.
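The Rust analogue of that pattern, as a sketch (the helper name is mine): instead of casting a byte pointer to `*const u32` and risking alignment/aliasing trouble, you copy the bytes into a local and convert.

```rust
// Read a little-endian u32 out of a raw byte buffer without pointer casts.
fn read_u32_le(buf: &[u8], offset: usize) -> u32 {
    let mut bytes = [0u8; 4];
    bytes.copy_from_slice(&buf[offset..offset + 4]); // the small "memcpy"
    u32::from_le_bytes(bytes)
}

fn main() {
    let wire = [0x78, 0x56, 0x34, 0x12, 0xff];
    println!("{:#x}", read_u32_le(&wire, 0)); // 0x12345678
}
```

Both the buffer slice and `bytes` are typically on the stack here, which is exactly the situation the comment describes.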
They're just using 'memcpy' as shorthand for saying the bit pattern is blitted. Semantically, that's like a memcpy. The point is that there are no expensive allocations, nor do any embedded pointer fields need to be adjusted, etc.
I think the confusion here is that the emitted code doesn't always contain a literal call to memcpy for copying small types like ints; the compiler emits something with the same effect, and may fall back to an actual memcpy for larger copies (probably when copying arrays?).
Also interesting: in C, memcpy is sometimes used to copy data between stack variables when you need to reinterpret one type's bits as another, since a plain pointer cast would be undefined behavior.
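Rust packages that same cast-free idiom into safe functions (a sketch; the wrapper name is mine, and the printed bit pattern assumes an IEEE-754 target, which all mainstream ones are):

```rust
// Reinterpret a float's bits as an integer by copying bytes, not casting
// pointers. This is the same trick as the C memcpy-between-locals idiom.
fn f32_bits(x: f32) -> u32 {
    u32::from_ne_bytes(x.to_ne_bytes()) // bit-for-bit copy, no pointer cast
}

fn main() {
    println!("{:#010x}", f32_bits(1.0)); // 0x3f800000
}
```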