The reason Rust doesn't have a defined ABI is basically that it wouldn't buy the same benefits it does in C. Specifying an ABI requires a lot of per-platform work (which the C community has already done), and, because of the importance of cross-crate inlining (all generic functions get inlined into call-sites by default), would not be sufficient to provide the benefit of in-place library updates. If you rewrite generic code in libfoo, and libbar depends on it, you can't get around recompiling libbar.
This is basically because Rust is a higher-level language where you use iterators, iterator adaptors, and higher-order functions in the course of writing libraries and applications. In C, you would manually inline things like iteration, writing for loops and populating intermediate data structures yourself. In Rust, this is something that can be factored out into libraries, but that means your code's meaning depends more deeply on the meaning of library code. To optimize away these abstractions and provide good performance, the compiler needs to inspect and make decisions based on library source code when compiling code that calls it. To permit efficiency, Rust basically has to be compiled from leaf dependencies upward.
In general, this is probably worth it, but it means we do need to rethink the C/UNIX style of packaging, which doesn't work very well when a libstd update implies every other package must also update. Some form of compiler middle/backend in the package manager (think Android's ART compiler), or a specialized form of binary or IR diffs (like Chrome uses) would probably go a long way. If we want to solve UNIX's problems, there will be a need for some cascading changes across the OS ecosystem.
Say you have a function like

int foo_do(struct foo *, int);

but to fix a bug and/or tweak the API you change it to

long foo_do(struct foo *, int, int);
then ELF symbol versioning allows you to do

__asm__(".symver foo_do_v1_1,foo_do@@v1.1");
long foo_do_v1_1(struct foo *F, int arg1, int arg2) {
    ...
}

__asm__(".symver foo_do_v1_0,foo_do@v1.0");
int foo_do_v1_0(struct foo *F, int arg1) {
    long rv = foo_do_v1_1(F, arg1, 0);
    assert(rv >= INT_MIN && rv <= INT_MAX); /* needs <assert.h> and <limits.h> */
    return (int)rv;
}
where the runtime linker will bind foo_do to foo_do_v1_0 for programs originally compiled against the v1.0 release, while programs built against v1.1 will be bound to foo_do_v1_1. You can do this as often as you want, though you generally can't go back further than the point at which you first began using ELF symbol versioning to compile and release libfoo. You only need an explicit .symver alias for functions that exist in multiple versions, but you do need to at least enable versioning (usually by specifying a version script with a catchall "*" entry that tags the functions not explicitly aliased) at the point you begin maintaining a stable ABI.

glibc is pretty much the only major library that makes use of this capability, despite the fact that it's been around for well over a decade. Most developers simply don't have the foresight or interest to provide rigorous forward and backward ABI and API compatibility. Partly that's because in the open source world, recompiling packages is much easier than in the proprietary world. And especially in the Windows world (where the CRT was never forward or backward compatible), you often shipped dependencies with your software, even when dynamically linked. So newer languages like Go and Rust are being built on the presumption that both recompiling and bundled dependencies are the norm--it's what people are doing anyhow, and it simplifies the compiler and its runtime. That it's sad this is the norm is beside the point.
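For reference, a minimal GNU ld version script of the kind described above might look like this (the version node names and file name are illustrative, matching the hypothetical libfoo example; it would be passed at link time with something like -Wl,--version-script=libfoo.map):

```
# libfoo.map -- enable symbol versioning when first stabilizing the ABI
v1.0 {
    global:
        *;          # catchall: tag every exported symbol as v1.0
};

# added in the next release; "v1.0" names the predecessor version node
v1.1 {
} v1.0;
```

The .symver directives in the C source then attach individual definitions to these version nodes.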