I don't know if you've tried to get someone else's Python code running recently, but it has devolved into a disaster: you effectively need containers to replicate the exact environment the code was written in.
Core system applications should be binaries that run with absolutely minimal dependencies beyond the default system-wide libraries. Heck, I would go so far as to say that applications on the critical path to repairing a system (like apt) should be statically linked, since we no longer live in a storage-constrained world.
Please show me a project where you believe you "effectively require containers" just to run the code, and I will do my best to refute that.
> since we no longer live in a storage-constrained world.
I think you do care about the storage use if you're complaining about containers.
And I definitely care, on principle. It adds up.
For reasons I can only assume have to do with poorly configured CI, pip gets downloaded billions of times annually (https://pypistats.org/packages/pip), and I assume those files get unpacked and copied every single time, since there's no good reason to install pip with uv (which would at least cache and hard-link the files). That's dozens of petabytes of disk I/O.
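Back-of-envelope, with admittedly rough numbers: call it ~3 billion downloads a year at ~2 MB per wheel, and each install unpacks into a site-packages tree an order of magnitude larger than the wheel:

    ~3e9 downloads/yr x ~2 MB/wheel         ~=  6 PB downloaded
    ~3e9 installs/yr  x ~10-15 MB unpacked  ~= 30-45 PB written

That's where "dozens of petabytes" comes from.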
I guess GP meant "containers" broadly, including things like pipx, venv, or uv. Those are, effectively, required since PEP 668:
https://stackoverflow.com/questions/75608323/how-do-i-solve-...
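For anyone who hasn't hit it yet: on a recent Debian/Ubuntu a bare pip install now fails with something like the following (abridged), and a venv is the sanctioned way around it:

    $ pip install requests
    error: externally-managed-environment
    × This environment is externally managed
    ╰─> To install Python packages system-wide, try apt install
        python3-xyz, where xyz is the package you are trying to
        install.

    $ python3 -m venv .venv
    $ .venv/bin/pip install requests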
This statement makes no sense. First off, those are three separate tools that do entirely different things.
The sort of "container" you seem to have in mind is a virtual environment. The standard library `venv` module provides the baseline support for creating them. But there is really hardly anything to them: the required components are literally a symlink to Python, a brief folder hierarchy, and a five-or-so-line config file. Pipx and uv are (among other things) managers for these environments (which manage them for different use cases; pipx is essentially an end-user tool).
Virtual environments are nowhere near a proper "container" in terms of either complexity or overhead. There are people out there effectively simulating a whole new OS installation (and more) just to run some code (granted this is often important for security reasons, since some of the code running might not be fully trusted). A virtual environment is... just a place to install dependencies (and they do after all have to go somewhere), and a scheme for selecting which of the dependencies on local storage should be visible to the current process (and for allowing the process to find them).
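And the selection scheme is equally mundane: "activating" just puts the venv's bin/ first on PATH; the interpreter finds pyvenv.cfg next to its own binary and points sys.prefix (and hence the site-packages it searches) at the venv:

    $ . demo/bin/activate
    $ python -c "import sys; print(sys.prefix)"
    /home/you/demo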
Of all the languages, Python in the base system has been an unmitigated garbage fire.
It was not their action, nor is anything hacked, nor does the message's text even live inside pip.
The system works by pip voluntarily recognizing a marker file, the meaning of which was defined by https://peps.python.org/pep-0668/ — which was the joint effort of people representing multiple Linux distros, pip, and Python itself. (Many other tools ignore the system Python environment entirely, as mine will by default.)
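Concretely, the marker is a plain file named EXTERNALLY-MANAGED sitting in the stdlib directory; Debian's looks roughly like this (abridged):

    $ cat /usr/lib/python3.12/EXTERNALLY-MANAGED
    [externally-managed]
    Error=To install Python packages system-wide, try apt install
     python3-xyz, where xyz is the package you are trying to
     install.

Delete that file and pip behaves exactly as it did before, which is about as far from "hacked" as it gets.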
Further, none of this causes containers to be necessary for installing ordinary projects.
Further, it is not a problem unique to Python. The distro simply can't package all the Python software available for download; it's completely fair that people who use the Python-native packaging system are expected not to interfere with a system package manager that doesn't understand that system. Especially when the distro writes its own tools in Python.
You only notice it with Python because distros don't come with JavaScript, Ruby, etc. pre-installed in order to support the system.
My understanding of the reasoning is that letting pip (or whatever else) manage the dependencies of Python-based system packages presents a system-stability risk. So they chose the more conservative route, as is their MO.
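A hypothetical but typical failure mode, pre-PEP-668:

    $ sudo pip install --upgrade requests
    # on Debian this lands under /usr/local/lib/python3.X/dist-packages,
    # shadowing the apt-managed copy that Python-based distro tools
    # (reportbug, etc.) were built and tested against

and now an apt-managed tool is quietly running a library version its maintainers never saw.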
Honestly, if there is one distribution to expect those kinds of shenanigans from, it would be Debian. I don't know how anybody chooses to use that distro without adding a bunch of APT sources and a language version manager.