WSL1 was pushing the boundaries of OS research:
- a method for having multiple syscall interfaces in a mainstream OS
- processes in WSL1 were real NT processes (even if lacking some of the NTOS environment)
- direct integration with the rest of the OS without an awkward VM separation layer
In comparison WSL2 is basically an optimized VM with some fancy guest additions. Color me underwhelmed.
I understand the argument that WSL2 is faster than WSL1 in file system operations. I expect this will only be true for their root file system ("VolFs") and that performance will remain the same or suffer for Windows drives ("DrvFs"). I am certain that they could fix "VolFs" performance by moving the file system off NTFS and into a raw disk partition or VHD. (Note: I write file systems both in and out of kernel.)
Finally WSL2 will be distributed with Windows which raises some licensing questions (IANAL) if not in the letter of the GPL license at least in spirit. I write GPL'ed software myself and I would be somewhat miffed if I saw my software used in a similar manner (i.e. "via a VM", but still distributed with non-GPL code).
I think they found the boundary of OS research in this case, and a better product is using the actual Linux kernel.
I think this comment may have been disingenuous on their part. The reason is that this problem more than likely still exists in WSL2 for the /mnt/c, /mnt/d file systems (i.e. what they used to call "DrvFs" in WSL1).
WSL1 comes with (at least) 2 file systems. "VolFs" which is the file system that they use for the Linux root file system and "DrvFs" which is the file system that they use to access Windows drives (C:, D:, ...).
In WSL1 VolFs was implemented as a layer on top of NTFS, so it came with all the Windows file system and NTFS baggage. In WSL2 they will replace this file system with a native ext4-formatted partition on a VHD file, thus eliminating the Windows I/O stack (except for READ/WRITE I/O to the VHD file).
My contention is that they could have instead replaced VolFs with a native WSL1 file system that uses a disk partition or VHD as its backend storage, thus eliminating the Windows I/O stack in the same way. They could then have implemented proper Linux file system semantics without any baggage.
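One concrete example of the "proper Linux file system semantics" that are awkward to emulate on top of NTFS: POSIX lets you unlink a file while it is still open, and the data remains readable through the open descriptor until it is closed. A minimal sketch of that behavior (runs on any real POSIX filesystem; not tied to any specific WSL internals):

```python
import os
import tempfile

# POSIX semantics: a file can be unlinked while a descriptor to it is
# still open; the contents stay readable until the descriptor is closed.
fd, path = tempfile.mkstemp()
os.write(fd, b"still here")
os.unlink(path)              # the name disappears immediately
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 64)       # the data is still readable via the fd
os.close(fd)
```

On WSL1's DrvFs this pattern was historically a problem precisely because NTFS does not natively allow deleting a file that another handle has open.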
> I think they found the boundary of OS research in this case
Unlikely. It would not surprise me if the changes we are seeing are less technical and more political.
Mere aggregation of non-GPL code with ("distributed with") GPL code is expressly consistent with the GPL; it is contrary to neither the letter nor the spirit of the license.
https://www.gnu.org/licenses/gpl-faq.html#MereAggregation https://www.gnu.org/licenses/gpl-faq.html#AggregateContainer...
I do not know, but I can see arguments on both sides. This is why I would love to hear the opinion of the FSF on this.
Has it? WSL1 and WSL2 seem to be parallel alternatives, the latter isn't replacing the former now, and it's not clear that it is intended to.
Switching to a virtualization model feels like a step backwards. If I wanted a virtualized Linux on Windows, I’d run a virtual Linux on Windows. WSL is special because it’s a middle ground.
In WSL1 they share the same IP addresses and TCP/UDP port space, while WSL2 has a separate IP address. I suppose there is some NAT to make networking work in WSL2.
At the end of the Q&A part they mentioned that sharing localhost, IP addresses, and the port number space (which is a WSL1 feature) may be done in the future, but they have no roadmap for it right now.
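The kind of plumbing that localhost sharing across a NAT boundary implies can be sketched as a small user-space TCP relay: accept on the host's loopback and forward to the VM's NAT address. All addresses and ports below are hypothetical; WSL2's actual mechanism is not described in the talk:

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes one way until the source side closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer already closed

def forward_port(listen_port, target_host, target_port):
    """Accept on 127.0.0.1:listen_port and relay each connection
    to (target_host, target_port), e.g. a VM's NAT'd address."""
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", listen_port))
    listener.listen()

    def accept_loop():
        while True:
            client, _ = listener.accept()
            upstream = socket.create_connection((target_host, target_port))
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener
```

This is roughly what ad-hoc workarounds (e.g. netsh portproxy rules on Windows) do per port; the WSL1 model needed none of it because the port space was shared.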
Aside from not having some Linux pseudo-filesystems, the bigger issue has been the speed of file operations. I dread having to run `dep ensure` and `yarn install`.
Why not just Hyper-V? Every few weeks I try to figure out how to set up a static IP, but I cannot for the life of me. So it takes 1-2 minutes every time I want to reconnect, because not only does it get a dynamic IP by default, the IP is reset every day or so. I have to go into the VM via Hyper-V, get the current IP address, update /etc/hosts within WSL, update the hosts file within Windows, then SSH into the VM from WSL. It drives me nuts.
Really looking forward to WSL2 for faster file operations and being able to run all my programs.
Regarding the networking issue, I sort of solved it by using VMBus between my VM and Windows. Shameless plug: https://github.com/bganne/hvnc
WSL is what makes it somewhat bearable. I look forward to Windows Terminal and WSL2.
Basically, a driver in the Linux guest (hv_balloon for Hyper-V, with equivalents for KVM, VMware, etc.) can artificially "inflate" its memory use when it detects too much unused memory and give that memory back to the host. When the guest needs more memory, the balloon driver gives it back. Coupled with memory hotplug support, things can be pretty dynamic.
Not sure if they do something more sophisticated for WSL2 though.
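The bookkeeping a balloon driver does can be sketched with a toy model. This is purely illustrative (a real driver pins actual guest pages and hands them to the hypervisor through a device interface; the class and page counts here are invented):

```python
class BalloonDriver:
    """Toy model of a memory balloon (in the spirit of hv_balloon
    or virtio_balloon). Page counts are illustrative only."""

    def __init__(self, guest_pages):
        self.guest_pages = guest_pages  # pages the guest can currently use
        self.balloon = 0                # pages handed back to the host

    def inflate(self, pages):
        # The guest "allocates" pages it promises not to touch,
        # so the host can reuse them for other workloads.
        pages = min(pages, self.guest_pages)
        self.guest_pages -= pages
        self.balloon += pages
        return pages

    def deflate(self, pages):
        # Under memory pressure the guest reclaims ballooned pages.
        pages = min(pages, self.balloon)
        self.balloon -= pages
        self.guest_pages += pages
        return pages
```

The interesting policy question (which the comment above raises) is who decides when to inflate: the guest's own idle-memory heuristics, or a host-side manager pushing targets into the guest.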
This is similar to how running a userland application on Linux doesn’t require the application to be GPL’d because it’s not directly linked to the kernel. While it interacts with the kernel it is not derived from it.
Similarly, your program might statically link against a GPL'd library, but only pass bulk data through a programming interface in a coarse manner, and the result may not be considered linking. The FSF FAQ even explicitly addresses this case.
GPL leaves these kinds of mechanisms blurry and ill-defined (I presume intentionally), it's just that as engineers we are commonly only taught how violations manifest in the usual case.
Even then there are still more exceptions for 'runtimes' and the like, where, even though the final assembled program is in a literal sense linked against GPL'd code when executing, the result is not considered to be covered by the GPL.
None of this stuff is actually part of the license text -- it's built on precedent and common understanding. These things are a lawyer's job; we're just engineers.
GPL is a clever hack of copyright law. It grants derivative works a copyright permission on the condition that they accept the same license.
Now, the Windows kernel and Nvidia's binary blob driver exist without relying on the Linux kernel at all. How could they be considered derivative works in terms of copyright law? Since GPLv2 relies on copyright law, it has no effect in situations where copyright law grants no exclusive right.
For Nvidia's driver, Nvidia released a thin shim wrapper as GPL, which interfaces between the Linux API and Nvidia's binary blob driver API. But since the core binary blob driver exists independently of the Linux kernel, it can't be a derivative work of the Linux kernel.
For the WSL2 case, Microsoft may take the same approach. They may release as GPL a thin shim wrapper which interfaces between the Linux kernel and Windows. But the Windows kernel itself cannot be a derivative work of the Linux kernel if it's well separated.
At least, that's my conclusion. I'm not a lawyer.
Many kernel developers are of a different opinion, and consider Nvidia in breach of their license. The fact that nobody has sued does not mean that they are in the clear. Don't consider the use of a thin shim to be some sort of license firewall.
Anyone other than a select few multinationals probably shouldn't consider legal disagreements with their partners a valid business strategy.
https://wpdev.uservoice.com/forums/266908-command-prompt-con...
"It'll be available on all SKUs that run WSL currently including windows home!"
Without that, then the main reason that I'd even be using the subsystem - GPU compute - is unavailable, and I'll need to actually boot into Linux if I want to do anything useful.
But other people seem to have a reason, based on the comments here and in every submission about WSL.
What are your needs?
Macs work really well out of the box, but with some tweaks WSL makes Windows quite a decent development workstation.
Currently I'm using Windows + WSL and I'm quite satisfied. Looks like with the new version some of the tweaks to use docker are not necessary anymore. Great work, Microsoft, keep the pace.
But Windows is the absolute last place I find that. All I get are updates that break my setup, constant inane interruptions from Cortana or the desktop or wherever, advertisement tiles in the fucking start menu, forced updates that can't be done in the background, Windows Activation disappearing after hardware upgrades, etc.
I feel like I'm in the Stepford Wives with all these people coming out of the woodwork to proclaim how majestic the Windows experience is.
A few years back I switched from Mac to Ubuntu for the faster, cheaper, diverse hardware and I have to say, it's pretty much perfect in the "Just Works" department on the 3 laptops + 2 desktops I've installed it on. But I'm also one of those people that actually liked Unity so I don't have to mess with it much after installing.
That is not required to use a Linux desktop.
Really? I don't know your definition of tweaks, but I've always found macOS requires a bunch of third party apps (often only paid alternatives) to be useful for a power user. Last I checked even its window management support was awful, and in a lot of situations it's impossible to get to where you want without touching the mouse.
KDE works much better out the gate, and is actually an advanced desktop environment in terms of possible customization if that is your bag.
it's called GNOME
I use Linux exclusively on all my systems, and have not had any problems at all. So I always wonder what hardware is being used that does not work...
For my laptops I use ThinkPads and various Dell systems. The only thing I always make sure of when buying a laptop, though, is that it has an Intel CPU with an integrated Intel GPU -- just doing that, I have never really had any problems.
In any case, I am looking for the exact model you are using that has the problem -- so I can maybe find a cheap one on ebay and mess around with it.
I like to record audio, and use DAWs and plugins/VSTs.
What about artists who use graphic design or video editing programs? Is your use case the only valid one, for whatever reason?
And the real trouble is that you can't just go find the one that works. If your co-workers all use webex then you're stuck.
Linux on desktop is a bad joke.