Furthermore, ZFS ARC should treat each read of the same files across clones as reading the same data, whereas sandboxes made the traditional way are full copies of one another rather than references, so their files would be cached as unique. ZFS, on the other hand, only needs to keep a single cached copy of the files for all environments, which reduces memory requirements dramatically. Unfortunately, the driver double caches mmap()'ed reads (once in the ARC and once in the page cache), but the duplication only covers the files actually accessed, and the second copy is populated from memory rather than disk. A modified driver (e.g. OSv style) could eliminate the double caching for mmap()'ed reads, but that is a future enhancement.
In any case, ZFS clones should have clear advantages over the more obvious approach of extracting a tarball every time you need a new sandbox for a Python execution environment.
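To sketch the clone-based approach (the pool and dataset names here are hypothetical), creating a sandbox becomes a snapshot plus a clone instead of a full copy:

```shell
# One-time setup: a "golden" dataset holding the Python environment
zfs create tank/pyenv-base
# ... install the interpreter and packages under /tank/pyenv-base ...
zfs snapshot tank/pyenv-base@v1

# Per-sandbox: each clone is a writable copy-on-write view of the snapshot.
# Creation is near-instant and consumes no extra space until files diverge.
zfs clone tank/pyenv-base@v1 tank/sandbox-42

# Teardown is equally cheap
zfs destroy tank/sandbox-42
```

One caveat: a clone keeps its origin snapshot alive, so the snapshot cannot be destroyed while clones of it exist (unless a clone is promoted with `zfs promote`).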
That said, if you really want to use block devices, you could use zvols to get something similar to LVM2 out of ZFS, but it is not as good as using snapshots on ZFS filesystems. The write amplification would be lower by default (an 8KB zvol block versus a 4MB LVM2 extent). The page cache would still duplicate data, but the buffer cache duplication should be bypassed, if I recall correctly.
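For completeness, the zvol route looks something like this (names again hypothetical). The snapshot/clone workflow carries over, but you get a raw block device that still needs its own filesystem on top:

```shell
# Create an 8GB zvol; volblocksize is fixed at creation time and is the
# copy-on-write unit (8K here, versus LVM2's 4MB extents)
zfs create -V 8G -o volblocksize=8K tank/pyenv-vol
mkfs.ext4 /dev/zvol/tank/pyenv-vol
mount /dev/zvol/tank/pyenv-vol /mnt/pyenv

# Snapshots and clones work on zvols too
zfs snapshot tank/pyenv-vol@v1
zfs clone tank/pyenv-vol@v1 tank/sandbox-vol-42
```

The filesystem inside the zvol goes through the regular page cache, which is where the duplication mentioned above comes from; with plain ZFS datasets you stay in the ARC instead.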