WSL2 mode is super impressive -- none of the overhead of Docker on OS X
Are there any good docs/blogs that go into detail on how they’ve managed to avoid that overhead? Would be awesome to learn about
In contrast, yes, WSL2 has a ~5% CPU hit (Hyper-V, ...), but a sane FS mapping, so the total overhead is imperceptible on a Windows dev box.
I was pleasantly surprised to see WSL2 Just Work. Our only issue preventing WSL2 from being the official team rec over native Linux has been WSL2's lack of OpenCL, and that's specific to our use of GPUs. As someone whose preferred dev box has been OS X for ~20 years, even when at MS, I was biased against Windows for most dev... but no longer.
Nvidia punts to IBM's RHEL8 docs for GPU podman, which is an unusual and risky thing to see. We officially recommend against it for HA environments because of this kind of gap and our overall low relative confidence. I think k8s environments may be moving toward something here, so maybe in a year or two? I'd be curious to hear from folks doing stock RHEL8 podman with TensorFlow/PyTorch on Nvidia, which should be about as vanilla as you can get for enterprise AI. We generally see more interesting GPU environments here (ex: DGX with advanced networking hw/sw), but we don't have confidence in the simple case, which is the starting point.
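For context, the RHEL8 recipe Nvidia punts to basically amounts to installing nvidia-container-toolkit (which drops an OCI prestart hook into podman's hooks directory) and then running the container with SELinux labeling relaxed. A rough sketch from my reading of those docs -- the image tag and flags are my best recollection, not gospel, and this obviously needs real GPU hardware and drivers on the host:

```shell
# Install the toolkit; on RHEL8 this drops oci-nvidia-hook.json into
# /usr/share/containers/oci/hooks.d/, which podman picks up automatically.
sudo dnf install -y nvidia-container-toolkit

# Smoke test: the hook injects the host driver libraries at container
# start, so nvidia-smi inside the container should list the host GPUs.
# --security-opt=label=disable works around SELinux denials on the
# injected device nodes.
podman run --rm --security-opt=label=disable \
    nvidia/cuda:11.4.3-base-ubi8 nvidia-smi
```

If that smoke test passes, the TensorFlow/PyTorch images should be runnable the same way, which is the "vanilla" case I'd want confirmation on.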