I don't suppose it could ever be as fast as uv, but it could get much closer to that than it is now.
One immediate speed-up that requires no code changes: when uv creates a venv, it doesn't install Pip into that venv. You can get the same effect trivially by passing `--without-pip` to the standard library venv module. On my system:
$ time uv venv uv-test
Using CPython 3.12.3 interpreter at: /usr/bin/python
Creating virtual environment at: uv-test
Activate with: source uv-test/bin/activate
real 0m0.106s
user 0m0.046s
sys 0m0.021s
$ time python -m venv --without-pip venv-test
real 0m0.053s
user 0m0.044s
sys 0m0.009s
For comparison:
$ time python -m venv venv-test
real 0m3.308s
user 0m3.031s
sys 0m0.234s
(which is around twice as long as Pip actually takes to install itself; I plan to investigate this in more detail for a future blog post.)
To install into such an environment, I use a globally installed pip (actually the one vendored by pipx), simply passing the `--python` argument to tell it which venv to install into. I have a few simple wrappers around this; see https://zahlman.github.io/posts/2025/01/07/python-packaging-... for details.
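A minimal sketch of such a wrapper (this is my own illustration, not the actual wrapper from the linked post; the function name and arguments are hypothetical, and it assumes a Unix layout and a global pip new enough to support `--python`, i.e. pip 22.3+):

```python
import subprocess
import sys

def make_env(path, *packages):
    """Create a pip-less venv at `path`, then install `packages` into it
    using a globally available pip via its --python option.
    (Hypothetical helper for illustration.)"""
    # Fast venv creation: skip bootstrapping Pip into the new environment.
    subprocess.run(
        [sys.executable, "-m", "venv", "--without-pip", path],
        check=True,
    )
    if packages:
        # The global pip installs *into* the venv's interpreter.
        subprocess.run(
            ["pip", "--python", f"{path}/bin/python", "install", *packages],
            check=True,
        )
```

Calling `make_env("demo-venv", "requests")` would then create the environment and install into it in one step, without ever copying Pip itself into the venv.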
In my own project, Paper, I see the potential for many immediate wins. In particular, Pip's caching strategy is atrocious: it's only designed to avoid the cost of actually hitting the Internet, and it essentially simulates an Internet connection against its own file-database cache in order to reuse code paths.

Every time it installs from this cache, it has to parse saved HTTP-session artifacts to recover the actual wheel file, unpack the wheel into the new environment, generate script wrappers, and so on. (It also eagerly pre-compiles everything to .pyc files in the install directory, which often isn't necessary.) Instead, it could keep the cache unpacked and simply hard-link everything into the new environment.
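The hard-link idea is straightforward to sketch. This is my own illustration of the technique, not code from Paper or Pip; `link_tree` is a hypothetical helper, and the source is assumed to be an already-unpacked wheel cache:

```python
import os
import shutil

def link_tree(src, dst):
    """Mirror an unpacked cache directory `src` into `dst` (e.g. an
    environment's site-packages), hard-linking each file so no data
    is copied. (Hypothetical sketch of the technique.)"""
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            source_file = os.path.join(root, name)
            dest_file = os.path.join(target, name)
            try:
                # Hard link: the new environment shares the cache's
                # file data, so "installing" is nearly instantaneous.
                os.link(source_file, dest_file)
            except OSError:
                # Fall back to copying, e.g. across filesystems where
                # hard links aren't possible.
                shutil.copy2(source_file, dest_file)
```

The main caveat is that hard links only work within one filesystem (hence the copy fallback), and any tool that modifies installed files in place would also modify the cache.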