Python is a big fat conda-docker-shitshow because it doesn't provide a way to do
import tornado==5.1.2
import torch==2.1.0
etc. while coexisting in the same shell environment as something else that wants different versions. Fortunately, the Python community is much more serious about making deps work together than the JS community; the fact that it works at all, given the Cartesian product of all the Python modules out there, is kind of a miracle and a testament to that.
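There's no syntax for that today; the closest you can get with the stdlib is checking the installed version at import time and failing loudly. A sketch (the `require` helper is hypothetical, not a real API):

```python
from importlib import metadata

def require(dist_name, wanted):
    """Hypothetical helper: fail fast when the installed version differs."""
    try:
        installed = metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        raise ImportError(f"{dist_name} is not installed") from None
    if installed != wanted:
        raise ImportError(f"need {dist_name}=={wanted}, found {installed}")
    return installed
```

That still can't load two versions side by side in one process, which is the actual gap.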
Unfortunately, that's a problem that is unlikely to be solved in the next decade, so we all live with it.
The reverse problem is true for JS, and I see many projects shipping very heavy frontend code because, despite all the tree shaking, they embed the same module five times with different versions in their bundle. That's one of the reasons for the bloated-page epidemic.
I guess it's a trade-off for all scripting languages: you choose between bloat and compat problems. Rust and Go don't care as much, and on top of that they can import code from 10 years ago and it still works.
However, while I do know how hard it is to ship Python code to end users (at least if it's not a web app), I don't think the version problem is the reason. We have zipapps and they work fine.
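For pure-Python code, the stdlib can already produce a single-file executable archive. A self-contained sketch of that workflow:

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

# Build a throwaway app directory and pack it into a single .pyz file.
src = pathlib.Path(tempfile.mkdtemp())
(src / "__main__.py").write_text("print('hello from a zipapp')\n")
target = src.with_suffix(".pyz")
zipapp.create_archive(src, target, interpreter="/usr/bin/env python3")

# The archive runs anywhere a compatible interpreter exists.
out = subprocess.run([sys.executable, str(target)],
                     capture_output=True, text=True)
print(out.stdout.strip())  # hello from a zipapp
```

The limits show up exactly where the next paragraph says: zipapps can't bundle compiled extensions portably.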
No, the main reason is that compiled extensions are very useful and popular, which means Python packaging is solving more than packaging Python: it's packaging a ton of compiled languages at once. Take scipy: they have C, C++ and Fortran in there.
This can and will be improved though. In fact, thanks to wheels and indygreg/python-build-standalone, I think we will see a solution to this in the coming years.
I'm even betting on Astral to provide it. Imagine a shared, versioned layout like:
/usr/lib/python3.12/torch/2.1.0/
/usr/lib/python3.12/torch/2.1.1/
/usr/lib/python3.12/torch/2.1.2/
When a package requests 2.1.1, it fetches it right out of there, installing from PyPI only if it isn't there yet. The same should be true of JS and even C++. When a C++ app's deb package wants libusb==1.0.1, it should NOT overwrite the libusb-1.0.0 that is already on the system; it should coexist with it and link to the correct one, so that another app that wants libusb-1.0.0 can still use it.
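The lookup half of that scheme is trivial; the layout above is hypothetical, so here is a sketch of a resolver against a fake version of it, where returning None signals "go install it from the index":

```python
import pathlib
import tempfile

def resolve(libdir, pkg, wanted):
    """Hypothetical resolver for a versioned site layout like
    <libdir>/<pkg>/<version>/ -- return the matching directory,
    or None to signal that an install from the index is needed."""
    candidate = libdir / pkg / wanted
    return candidate if candidate.is_dir() else None

# Demo against a throwaway stand-in for /usr/lib/python3.12.
libdir = pathlib.Path(tempfile.mkdtemp())
for ver in ("2.1.0", "2.1.1", "2.1.2"):
    (libdir / "torch" / ver).mkdir(parents=True)

print(resolve(libdir, "torch", "2.1.1"))  # an existing versioned dir
print(resolve(libdir, "torch", "9.9.9"))  # None -> fetch from PyPI
```

The hard part, of course, is everything this sketch ignores: making the interpreter import from the chosen directory and handling range specifiers rather than exact pins.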
> Fortunately the Python community is much more serious about making deps that work together
That's just not true, at least in ML. I have to create a new conda environment for almost every ML paper that comes out. There are so many papers and code repos I test every week that refuse to work with the latest PyTorch, and some that require torch<2.0 or some bull. Also, xformers, apex, pytorch3d and a number of other popular packages require that the CUDA version bundled with the "torch" Python package matches the CUDA version in /usr/local/cuda, AND that your "CC" and "CXX" variables point to gcc-11 (NOT gcc-12), or else the pip install will fail. It's a fucking mess. Why can't gcc-12 compile gcc-11 code without complaining? Why doesn't a Python package ship binaries of all its C/C++ parts for all common architectures, compiled on a build farm?
> I have to create a new conda environment for almost every ML paper that comes out
That's how it's supposed to work: one env per project.
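That per-project workflow doesn't even need conda; the stdlib can do it. A minimal sketch (the project name is made up):

```python
import pathlib
import tempfile
import venv

# One isolated environment per paper/repo, created from the stdlib.
project = pathlib.Path(tempfile.mkdtemp()) / "some-ml-paper"
env_dir = project / ".venv"
venv.create(env_dir, with_pip=False)  # with_pip=True also bootstraps pip

# The marker file is what makes the interpreter treat this as its own env.
print((env_dir / "pyvenv.cfg").exists())  # True
```

conda mostly adds value on top of this when the project needs non-Python binaries like CUDA toolkits, which is exactly the ML case above.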
As for the rest, it says more about the C/C++ ecosystem building the things below the Python wrappers.
But what I really hope is that they'll tackle the user app shipping problem eventually.
Pick one package source. Stick with it. And don't import every 0.0.x package from that package source either.
There are obviously reasons to use more than one package source, but those reasons are far rarer than a lot of inexperienced devs think they are. A major version number difference in one package isn't a good reason to complicate your build system unless there are features you genuinely need (not "would be nice to have", need).
So say you want to add pytorch, with GPU acceleration where the platform supports it, and you want the project to be multiplatform to some extent. You can't add another index if you want to use a vanilla build, as that's not allowed. You can add a direct link (that's allowed, just not an index), but it will be specific to one platform + Python version. PyTorch doesn't even provide CUDA packages on PyPI anymore (due to issues with PyPI), so you need to be able to use another index! You'd need to manually create a requirements.txt for each platform, write a script that packages your app with the right requirements.txt, and then do it all again whenever you update. Otherwise, the most recent advice I've seen was to just make... the user download the right version themselves. Mhmmmm.
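The per-platform workaround usually ends up looking like a pinned requirements file with an extra index plus environment markers; a sketch (the exact versions and cu121 tag below are illustrative, check what the PyTorch index actually serves):

```
# requirements-gpu.txt (illustrative pins)
--extra-index-url https://download.pytorch.org/whl/cu121
torch==2.1.0+cu121; sys_platform == "linux"
torch==2.1.0; sys_platform == "darwin"
```

This works with plain pip, but as said above there's no standard way to express that extra index in the project metadata itself, so you end up maintaining one such file per target.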
The other option is to use poetry or something like that, but I just want to use "python -m build"...
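For reference, the vanilla path is just a pyproject.toml and `python -m build`; a minimal sketch (project name and pins are made up), which also shows the gap: you can pin the version but not say which index serves it:

```
# pyproject.toml -- the minimal metadata `python -m build` needs
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "myapp"                   # hypothetical project
version = "0.1.0"
dependencies = ["torch==2.1.0"]  # no standard field for "which index"
```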