[dependencies]
foo_v1 = { package = "foo", version = "1" }
foo_v2 = { package = "foo", version = "2" }

For example, the compiler error in this example:
note: perhaps two different versions of crate `smithay_client_toolkit` are being used?
(I've seen cases where that happens with C and C++ software, and things seem to compile and run... until everything explodes. Fun times.)
The author correctly contrasts Rust (and NPM's) behavior with that of Python/pip, where only one version per package name is allowed. The Python packaging ecosystem could in theory standardize a form of package name mangling wherein multiple versions could be imported simultaneously (akin to what's currently possible with multiple vendored versions), but that would likely be a significant undertaking given that a lot of applications probably - accidentally - break the indirect relationship and directly import their transitive dependencies.
(The more I work in Python, the more I think that Python's approach is actually a good one: preventing multiple versions of the same package prevents dependency graph spaghetti when every subdependency depends on a slightly different version, and provides a strong incentive to keep public API surfaces small and flexible. But I don't think that was the intention, more of an accidental perk of an otherwise informal approach to packaging.)
I've come to the opposite conclusion. I've "git cloned" several programs in both Python and Ruby (which has the same behaviour), only to discover that I can't actually install the project's dependencies. The larger your Gemfile / requirements.txt is, the more likely this is to happen. All it takes is a couple of packages in your tree updating their own dependencies out of sync with one another and you can run into this problem. A build that worked yesterday doesn't work today. Not because anyone made a mistake - but just because you got unlucky. Ugh.
It's a completely unnecessary landmine. Worse yet, new developers (or new team members) are very likely to run into this problem, since it shows up while you're getting your dev environment set up.
This problem is entirely unnecessary. In (almost) every way, software should treat foo-1.x.x as a totally distinct package from foo-2.x.x. They're mutually incompatible anyway, and semantically the only thing they share is their name. There's no reason both packages can't be loaded into the package namespace at the same time. No reason but the mistakes of shortsighted package management systems.
RAM is cheap. My attention is expensive. Print a warning if you must, and I'll fix it when I feel like it.
(One of the ways I have seen this happen in this past is people attempting to use multiple requirements sources without synchronizing them or resolving them simultaneously. That's indeed a highway to pain city, and it's why modern Python packaging emphasizes either using a single standard metadata file like pyproject.toml or a fully locked environment specification like a frozen requirements file.)
For web dev and something like requests, it's just not as big of a deal to have a bunch of versions installed. You don't typically use/debug that kind of functionality in a way that would cause confusion. That said, it would definitely be great sometimes to just be like "pip, I don't care, just make it work".
This! I have transitive conflicts almost every time I clone an existing Python repo. It's one of the main reasons why Python == PITA^3 in my head.
Maybe it's the kind of Python repos I clone. Mostly ML/diffusion & computer graphics stuff.
As a Rustacean I had hoped that the Rye + uv combo would 'fix' this but I now understand they won't.
The end result of this is that you end up with some random library in your stack (4 transitive layers deep because of course it is) holding back stuff like chokadir in a huge chunk of your dep tree for... no real good reason. So you now have several copies of a huge library.
Of course new major versions might break your usage! Minor versions might as well! Patch versions too sometimes! Upper bounds pre-emptively set help mainly in one thing, and that's reducing the number of people who would help "beta-test" new major versions because they don't care enough to pin their own dependencies.
The worst spaghetti comes from hard dependencies on minor versions and revisions.
I will die on the hill that you should only ever specify dependencies on “at least this major-minor (and optionally and rarely revision for a bugfix)” in whatever the syntax is for your preferred language. Excepting of course a known incompatibility with a specific version or range of versions, and/or developers who refuse to get on the semver bandwagon who should collectively be rounded up and yelled at.
In Rust, Cargo makes this super easy: “x.y.z” means “>= x.y.z, < (x+1).0.0”.
It’s fine to ship a generated lock file that locks everything to a fixed, known-good version of all your dependencies. But you should be able to trivially run an update that will bring everything to the latest minor and revision (and alert on newer major versions).
With the added special case of `0.x.y` meaning `>= 0.x.y, < 0.(x+1).0`, going beyond what semver specifies.
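Those two rules can be sketched in a few lines of Python. This is only an illustration of the version-range semantics described above, not Cargo's actual implementation, and it omits Cargo's further special case for `0.0.z` requirements (which only allow up to the next patch release):

```python
# Sketch of Cargo-style caret ranges: "x.y.z" allows anything up to the
# next major version, while "0.x.y" only allows up to the next minor.
def caret_range(version):
    """Return the (inclusive lower, exclusive upper) bound for a version spec."""
    major, minor, patch = (int(part) for part in version.split("."))
    lower = (major, minor, patch)
    if major > 0:
        upper = (major + 1, 0, 0)   # "1.2.3" -> < 2.0.0
    else:
        upper = (0, minor + 1, 0)   # "0.3.1" -> < 0.4.0
    return lower, upper

def satisfies(candidate, spec):
    """Check whether a concrete version falls inside the caret range of a spec."""
    lower, upper = caret_range(spec)
    version = tuple(int(part) for part in candidate.split("."))
    return lower <= version < upper
```

So `satisfies("1.9.9", "1.2.3")` holds, but `satisfies("0.4.0", "0.3.1")` does not, because a `0.x` minor bump is treated as a breaking change.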
You would need:
A function v_tree_install(spec) which installs a versioned PyPI package like "foo==3.2" and all its dependencies into its own tree, rather than into site-packages.
Another pair of functions v_import and v_from_import to wrap importlib with a name, version, and symbols. These functions know how to find the versioned package in its special tree and push that tree to sys.path before starting the import.
To cover the case for when the imported code has dynamic imports you could also wrap any callable code (functions, classes) with a wrapper that also does the sys.push/pop before/after each call.
You then replace third party imports in your code with calls assigning to symbols in your module:
# import foo
foo = v_import("foo==3.2")

# from foo import bar, baz as q
bar, q = v_from_import(
    "foo>=3.3",
    "bar",
    "baz",
)
Finally, provide a function (or CLI tool) to statically scan your code looking for v_import and calling v_tree_install ahead of time. Or just let v_import do it.

Edit: …and you’d need to edit the sys.modules cache too, or purge it after each “clever” import?
You depend on two packages, each with a function that returns a “requests.Request” object. These packages depend on different versions of “requests”.
How would you implement “isinstance(return_value, requests.Request)” on each of these calls?
Or, the indirect case of this: catching a “requests.HttpException” from each of these calls?
Importing the right thing isn’t hard, but doing things with it is the hard bit.
from m1 import T1
from m2 import T2
from m3 import f
x = f()
assert isinstance(x, (T1, T2))
Perhaps you could have a v_import that imported all versions of a symbol used throughout your project?

Ts = v_from_import_all(
    "foo",
    "T",
)
assert isinstance(x, Ts)
For static analysis, your type checker could understand what v_import does, how it works, and which symbols will actually be there at runtime, but yes, it’s starting to seem extremely complicated!

What you do with the return_value defines the behaviour you expect from it, so to that extent you can rely on that instead of using isinstance:
x: Union[T1, T2] = f()
print(x.foo() ** 12.3)
Perhaps some function could build that Union type for you? It would be a pain to make it by hand if you had 50x different third-party dependencies each pulling in a slightly different requests (but which, as far as you are concerned, all return some small, mutually compatible part of that package.)

If you’re importing a module to use it in some way, you’re also declaring some kind of version dependency / compatibility on it too, so that’s another thing your static analysis could check for you. That would actually be incredibly useful:
1/ Do your dependencies import an older version of requests than you do?
2/ Does it matter, and why? (eg x.foo() only exists in version 4.5 onwards, but m1 imports 4.4.)
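A tiny sketch of that Union-building helper. RequestV1 and RequestV2 here are stand-ins for the same class loaded from two vendored versions of requests; the step that collects them is assumed to have happened already:

```python
from typing import Union

# Stand-ins for requests.Request as loaded from two different vendored trees.
class RequestV1: pass
class RequestV2: pass

def build_union(types):
    """Build a Union covering every collected version of a type."""
    # Union accepts a tuple of parameters: Union[(A, B)] == Union[A, B]
    return Union[tuple(types)]

Ts = (RequestV1, RequestV2)
AnyRequest = build_union(Ts)

x = RequestV2()
assert isinstance(x, Ts)  # at runtime, isinstance takes the tuple directly
```

The static annotation (`x: AnyRequest`) and the runtime check (`isinstance(x, Ts)`) would then stay in sync automatically as versions come and go.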
But the main issue here is that this is somewhat designed around a "scripts and folders of scripts" principle, while such a loading system would fundamentally need to always work in terms of packages. E.g. you wouldn't execute `main.py` but `package:main`. (Though this is already the direction a bunch of tooling has moved in, e.g. poetry scripts, some of the WSGI and especially the more modern ASGI implementations, etc.)
Another issue is that Rust can reliably detect collisions between the same type from two different versions and force you to fix them.
With a lot of strict type annotations in Python and tooling like mypy this might be possible (with many limitations), but as of today it will in practice likely not be caught. Sometimes that is what you want (duck typing happens to work). But for any of the reflection/inspection-heavy Python libraries this is a recipe for quite obscure errors somewhere in not-so-obvious inspection / auto-generation / metaclass-related magic code. So Python can't, except it can.
Anyway, technically it's possible: you can put a version into __qualname__ and mess with the import system enough to allow imports to be contextual based on the manifest of the module they come from. (Though you probably would not be fully standard-conformant Python, but we are speaking about dynamically patching Python's import system; there is nothing standard about that.)
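A rough proof of concept of that kind of import-system patching: a meta-path finder that resolves a mangled name like `foo__v2` to a specific version's file, so both versions can coexist in sys.modules under different keys. The mangling scheme and file layout here are invented, and sub-package imports inside the loaded module are not handled:

```python
import importlib.util
import sys

class VersionedFinder:
    """Meta-path finder mapping mangled module names to version-specific files."""

    def __init__(self, mapping):
        # mapping: mangled name -> path to that version's module file,
        # e.g. {"foo__v2": "/vendored/foo-2.0/foo/__init__.py"}
        self.mapping = mapping

    def find_spec(self, fullname, path=None, target=None):
        location = self.mapping.get(fullname)
        if location is None:
            return None  # not ours; let the normal finders handle it
        return importlib.util.spec_from_file_location(fullname, location)

def install(mapping):
    """Register the finder ahead of the standard import machinery."""
    sys.meta_path.insert(0, VersionedFinder(mapping))
```

After `install(...)`, a plain `import foo__v2` goes through the finder while `import foo` still resolves normally, which is roughly the name-mangling trick NPM-style systems do behind the scenes.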
It sucks when there is a vulnerability in a particular library, and you're trying to track all of the ways in which that vulnerable code is being pulled into your project.
My preference is to force the conflict up front by saying that you can't import conflicting versions. This creates a constant stream of small problems, but avoids really big ones later. However I absolutely understand why a lot of people prefer it the other way around.
cargo tree -i log@0.3.9
will show which dependencies require this particular version of log, and how they are transitively related to the main package. In this case, you would clearly see that the out-of-date dependency comes from package "b".

There are equivalents for most other package managers that take this approach, and I've never found this a problem in practice.
Of course, you still need to know that there's a vulnerability there in the first place, but that's why tools like NPM often integrate with vulnerability scanners so that they can check your dependencies as you install them.
Also forces people to actually take backwards compatibility seriously.
I import a@1 and b@1.
a@1 transitively depends on c@1.
b@1 transitively depends on c@2.
Even with different import paths, I still have two different versions of c in my codebase. It'll just be that one of them is imported as "c" and the other will be imported as "c/v2" - but you don't need to worry about that, because that's happening in transitive dependencies that you're not writing.
You still have the same issue of needing to keep track of all the different versions that can exist in your codebase.
Or if I depend transitively on two versions of a library (e.g. a matrix math lib) through A and B and try to read a value from A and send it into B. Then presumably due to type namespacing that will fail at compile time?
So the options when using incompatible dependencies are a) it compiles, but fails at runtime, b) it doesn't compile, or c) it compiles and works at runtime?
If the log endpoint is internal to your process, how did you end up with two independent mutexes guarding (or not guarding) access to the same resource? It should be wrapped in a shared mutex as soon as you create it, and before passing it to the different versions of the logging crate. And unless you use unsafe, Rust's ownership model forces you to do that, because it forbids having two overlapping mutable references at the same time.
It's an in-memory counter doing an atomic increment that returns the next ID. Two of my dependencies depend on it when they create new items. Both want to generate process-wide unique IDs. But if they depend on two versions of the crate, then there would be two memory locations, and thus two sequences of IDs generated, so two of the frogs in my game will risk having the same ID?
There is no sharing problem here, the problem is the opposite: that there are two memory locations instead of one?
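That two-memory-locations hazard is easy to simulate in Python: load the same counter module twice under different names, and each copy gets its own state, so the "process-wide unique" IDs collide. (The counter source here is invented for the demonstration.)

```python
import importlib.util

# A toy counter module, standing in for the ID-generating crate above.
COUNTER_SRC = (
    "_next = 0\n"
    "def next_id():\n"
    "    global _next\n"
    "    _next += 1\n"
    "    return _next\n"
)

def load_copy(alias, path):
    """Load an independent copy of the module at `path` under `alias`."""
    spec = importlib.util.spec_from_file_location(alias, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # each exec gets fresh module state
    return module
```

Loading the file as `counter_v1` and `counter_v2` gives two modules whose first `next_id()` calls both return 1: exactly the duplicate-frog-ID scenario, with no data race involved.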
I guess you could slap a `#[used]` attribute on your exported functions and use their mangled names to call them with dlopen, but that would be unwieldy, and guessing the disambiguator used by the compiler would be error-prone to impossible.
Other than that, you cannot. What you can do is define the `#[no_mangle]` or `#[export_name]` function at the top-level of your shared library. It makes sense to have a single crate bear the responsibility of exporting the interface of your shared library.
I wish Rust would enforce that, but the shared library story in Rust is subpar. Fortunately it never actually comes into play, as the ecosystem relies on static linking.
Yes, exactly.
> Other than that, you cannot.
So, to the question "Can a Rust binary use incompatible versions of the same library?", the answer is definitely "no". It's not "yes" if it cannot cover one of the most basic use cases when making OS-native software.
To be clear: no language targeting OS-native dynamic libraries can solve this; the problem is in how PE and ELF work.
Rust/Cargo have been designed for it from the start.