Virtualenv allows you to sandbox all your packages in a directory local to your development/deployment environment. You no longer need to install anything on the base system, and you can have multiple virtualenvs side-by-side. In addition, you can copy a virtualenv from your build environment to your production environment by just copying the directory.
This gives you huge wins in versioning packages, testing out different versions alongside each other, and being able to strongly validate that what you tested in your staging environment is the same thing that got deployed to your production systems.
In general, I'm a very strong believer in pushing _everything_ your production environment relies on into a single directory on your production system, rather than installing it on the base system. It's actually rather surprising that more people haven't moved to this model, and that there aren't better tools to support it.
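As a minimal sketch of that sandbox-per-directory model (this uses python3's venv module, which behaves like the virtualenv tool described above; the paths are throwaway and --without-pip just keeps the sketch offline-friendly):

```shell
# One disposable environment per project, nothing touching the base system.
# python3 -m venv behaves like the virtualenv tool here; names are made up.
root=$(mktemp -d)
python3 -m venv --without-pip "$root/env"

# The env carries its own interpreter; its sys.prefix points inside $root,
# not at the base system's Python.
"$root/env/bin/python" -c "import sys; print(sys.prefix)"
```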
1) We develop on Macs. 2) Our deploy environments are a mix of 32-bit/64-bit machines running Ubuntu. 3) It wasn't clear whether each deployment required its own virtualenv built from scratch, or whether we should reuse a virtualenv (which seemed to defeat the whole point of using virtualenv).
Got any ideas on what we could do?
- We deployed tarballs to production machines; each unpacked to a single directory containing a virtualenv with all our code, all the modules we relied on, and anything else the code needed to function that wasn't installed in a very basic RHEL install.
- We used the --relocatable option in virtualenv to remove all references to absolute paths, which meant we could copy the virtualenv around to various directories and machines and still have it work.
- We had a series of makefiles that would make/update a virtualenv, and which worked on both Mac and Linux. We used this both in development and when deploying. For several packages we had to hand-tune this to work on both platforms, but for most things it Just Worked.
- When we deployed to production, we would unpack the tarball to a directory whose name included a version number for the push. We then had a symlink that pointed at the currently running version. This meant all we had to do to roll back a push was flip the symlink to the previous deployment directory and restart. The rollback included any modules that had changed (since they were in the virtualenv). I haven't seen any other way to do this reliably.
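The versioned-directory-plus-symlink scheme from that last bullet can be sketched like this (directory and version names are hypothetical):

```shell
# Each push unpacks into its own versioned directory; "current" points at
# the live release. Rollback is re-pointing the symlink and restarting.
set -e
root=$(mktemp -d)
mkdir -p "$root/myapp-r101" "$root/myapp-r102"   # one dir per push

ln -sfn myapp-r102 "$root/current"   # deploy r102
ln -sfn myapp-r101 "$root/current"   # rollback: just flip the symlink
readlink "$root/current"             # -> myapp-r101
```

Because each versioned directory carries its own virtualenv, flipping the symlink rolls back the code and its modules together.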
The one thing you mention that we didn't have to deal with was mixed 32/64-bit environments. One nasty solution is to have two build machines, one 32-bit and one 64-bit, but there's probably a better way...
Not if you're developing on a different platform/architecture than you're deploying to (e.g. OS X vs. Linux). Unless you're only using pure Python modules, which excludes PIL and most database drivers.
Other than that, you're very right.
Does pip let you run configuration scripts during the various stages of install and uninstall?
Does pip let you depend on C libraries and other system features?
How does pip handle obsoletes and transitive dependencies?
As far as I can tell, you can run config scripts during installation. A good example of this would be Cython; an even crazier example would be uwsgi. As for uninstall, I'm not sure. A lot of this has more to do with what setuptools/distribute can do, not necessarily pip. Pip just makes it easy to enumerate/install/uninstall with the aid of PyPI, setuptools, and distribute.
> Does pip let you depend on C libraries and other system features?
Pip isn't smart enough to track down C libraries. This is the part that kind of sucks, but I don't know how pip would even do this. Python is cross-platform by nature, and being able to hook into whatever Windows DLLs, RPMs, or Debian packages you have to see which C dependencies you've satisfied seems like an ambitious endeavor. It's already hard enough to get a SINGLE package management system working well on its own :) Usually if I find myself missing a C library, I read the Python package's installation notes, do apt-get (or whatever) to get the dependencies I need, then run "pip install". I've had to do this for MySQL-python, pycrypto, and matplotlib thus far.
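On Ubuntu, that "system libraries first, then pip" dance might look like the following (the library and package names are assumptions; check the module's installation notes for the real prerequisites):

```shell
# pip can't fetch C libraries, so the dev headers come from apt first.
# Wrapped in a function so sourcing this file has no side effects;
# package names are Ubuntu's and may differ on your distro.
install_mysql_python() {
    sudo apt-get install -y libmysqlclient-dev   # C headers for the extension
    pip install MySQL-python                     # now the build can find them
}
```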
> How does pip handle obsoletes and transitive dependencies?
By obsolete, do you mean you need a newer version, or that you don't need the package at all anymore? As for transitive, do you mean a chain of dependencies? I haven't run into any issues just yet... I think pip just goes out and installs everything for you. This primarily depends on how well people implement their setup.py files, though.
By transitive dependencies I mean that it will install C when installing package A, if A depends on B and B depends on C. It looks like it would do that.
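That A -> B -> C case is just a recursive walk over each package's declared dependencies. A toy illustration (this is not pip's real resolver, and the dependency graph below is made up):

```python
# Toy transitive-dependency walk: install order puts dependencies
# before the packages that need them. The graph is hypothetical.
deps = {"A": ["B"], "B": ["C"], "C": []}

def install_order(pkg, order=None):
    """Depth-first walk: C is installed before B, and B before A."""
    if order is None:
        order = []
    for dep in deps[pkg]:
        install_order(dep, order)
    if pkg not in order:
        order.append(pkg)
    return order

print(install_order("A"))  # ['C', 'B', 'A']
```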
As for 'obsoletes', I meant the case (this is from the RPM world) where one package changes names or two packages are combined into one. Let's say you maintain package A, then change its name to B because Oracle's lawyers have threatened to break your knees over a trademark ;-). Everyone in the world depends on A, so you declare that B obsoletes A; that way their packages will still install and correctly pull in your new package B. The same would happen if, say, you got together with another project and merged their project into yours, so you would 'obsolete' their old package name. Yeah, this probably doesn't apply to pip as much as RPM; it's more a feature of the package index server than of pip itself, perhaps.
It is still good, but nowhere near as great as gem or apt. But maybe that is just me being unlucky...
Forking these projects and fixing the setup.py is probably the best solution. If they're active projects and you'll want their updates, they're likely to accept your patch. If they're inactive, it doesn't really matter that you're using a fork anyway.
pip install tornado==1.1
Or, if you know you only want tornado 2.0:
pip install tornado==2.0
From my experience, pip has been able to fetch Python dependencies as long as the author of the package wrote their setup.py correctly. I agree that things could be better with regard to C dependencies (MySQL-Python requiring libmysql5dev is particularly annoying), but this is hard to do since every environment has its own way of naming dependent libs.
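The "wrote their setup.py correctly" part mostly comes down to declaring install_requires. A hypothetical sketch of the metadata such a setup.py would pass to setuptools.setup() (project name and pins are made up):

```python
# Hypothetical project metadata. install_requires is what lets pip pull
# Python dependencies (and *their* dependencies) from PyPI; C libraries
# still have to come from the system package manager.
metadata = dict(
    name="myapp",
    version="0.1",
    install_requires=[
        "tornado==1.1",   # exact pin, as in the example above
        "MySQL-python",   # builds only if the C headers are present
    ],
)
# A real setup.py ends with:
#     from setuptools import setup
#     setup(**metadata)
```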