Before Docker came along, Linux had LXC, which was never as popular as Docker is now, but it was certainly known and used. So when Docker arrived, LXC had more users than Docker, and yet Docker surpassed LXC in popularity within weeks. So the "X has more users than Y" state can be changed by various factors, and it's not enough to keep the system in equilibrium.
So yes, the fact that Linux is more widely used than FreeBSD and illumos in the developer community is certainly an important factor, but I don't ever see anyone saying "FreeBSD is great, but we want to use something supported by a larger community" or "illumos is great, but we don't have expertise with it", which are certainly important arguments to consider when making a decision.
But I hardly see anyone making these arguments, or any other arguments really. It's as if these systems don't even exist. At first I attributed this to the "X has more users than Y" factor, but then I see people having real problems with Linux container technology, in areas such as security, virtual networking, etc., and these are problems FreeBSD and illumos have already solved. Surely when you have a problem, you look for alternative solutions that don't have it?
But I don't see people looking at the alternatives at all. As I said, there are many valid reasons not to use these other technologies, but I am very perplexed that people refuse to even acknowledge their existence.
And now that illumos can run Linux inside a zone (FreeBSD did this 15 years ago too, and still does for 32-bit binaries; I believe work is well underway to extend this to 64-bit as well), I think the "I only know Linux" argument loses some potency: you can still run Linux, after all...
Switching to a different OS, however, requires different development-time technologies. Some third-party dependencies like libraries you're used to using might not even exist on the other platform.
Effectively, developers are locked into a pretty tiny development-time ecosystem: all the devs I know develop on Ubuntu (or on OSX with testing on Ubuntu, if they can get away with it.) They depend on the apt package graph, including PPAs.
Half of the renewed enthusiasm behind containers isn't about security; it's about the fact that a lot of operations people prefer RPM-based distros, and it was always really annoying to try to keep a given piece of Linux software portable between deb-based and RPM-based distros. You needed to figure out how to specify operation-time dependencies against at least two package graphs, and also compile-time dependencies using autotools or similar.
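To make that concrete, here's what the duplicated dependency declarations look like — the same runtime dependency has to be named twice, once per package graph, usually under different package names (the package names and versions below are illustrative):

```
# debian/control (deb-based distros)
Depends: libssl1.1 (>= 1.1.0), zlib1g

# mypackage.spec (RPM-based distros)
Requires: openssl-libs >= 1.1.0, zlib
```

And that's before you get to compile-time dependencies, which need their own `-dev` / `-devel` package lists on top.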
In contrast, Docker and similar ops software are interesting (from one perspective) precisely because they let devs learn fewer things: you develop your software on your Ubuntu machine, create a container that basically replicates your development environment, "install" your software in there, and then distribute that. Now your software can be run on some other dev's (Ubuntu, OSX) machine, or deployed to a production (RHEL) machine. The other deployment scenarios are pretty minor in comparison.
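A minimal sketch of that workflow (the base image, paths, and app name are illustrative):

```dockerfile
# Start from the same distro you develop on
FROM ubuntu:16.04

# Install runtime dependencies from the apt graph you already know
RUN apt-get update && apt-get install -y python3

# "Install" your software into the image
COPY myapp/ /opt/myapp/
CMD ["/opt/myapp/run.sh"]
```

The resulting image carries the whole Ubuntu userland with it, so the RHEL production host only needs to know how to run containers, not how to satisfy your apt dependencies.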
Or, in short: devs and ops are separate jobs. Containers make ops people do more work/learn more things, but let devs do less work/learn fewer things. That's why devs are enthusiastic about them: it pushes the work of packaging their software (or writing autoconf scripts) for various platforms off their plate.
Devs are interested in learning one thing—e.g. how to write a Dockerfile—that lets them drop an entire stream of continuous work/learning/keeping-up they have to do, e.g. handling changes in the multiple platforms their software supports.
What about dependencies between user-space and the host kernel? Aren't all containers forced to use the same kernel as the host?
The packaging scenario that you describe has existed for years with VMs, where the VM can run a kernel, or even an OS version, different from the host's.