What's the new thing here? If I understood correctly, it seems that you can connect your Docker host to an overlay network, so your containers can access other containers and resources through it. Am I correct to think this facilitates orchestration of the containers' network?
Disclosure: I am behind https://wormhole.network, which could be seen as some sort of Weave competitor, but it's not. It covers other use cases, even though there is some overlap, e.g. overlay multi-host networking for containers: https://github.com/pjperez/docker-wormhole - it doesn't require changes on the host itself, but it can't be orchestrated.
I think both are a bit different. My project is based on SoftEther and uses an external server as a pivot point, so all members of the network only need outbound 443/TCP access. It's not point-to-point, unfortunately. The idea is to make sure it works in as many scenarios as possible.
I'm just adding the server-management and simplification layer, but both server and clients are 100% SoftEther.
[1] https://blog.docker.com/2015/11/docker-multi-host-networking...
What they've done now, if I've understood it correctly, is effectively leverage that to intercept Docker API calls: if a call requests a network provided by a CNI plugin, they call CNI "on behalf of Docker" and then pass a modified API call on to Docker, so you can have Docker/Kubernetes/Rocket on the same overlay network.
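To make the interception idea concrete, here is a minimal sketch of the proxy step described above. Everything here is illustrative: the network name "weave", the function names, and the choice to rewrite `NetworkMode` to "none" (so Docker skips its own networking and the plugin can wire the container up) are assumptions, not Weave's actual implementation.

```python
import json

# Hypothetical set of networks managed by a CNI plugin; the name is
# illustrative, not taken from any real Weave configuration.
CNI_NETWORKS = {"weave"}

def intercept_create(raw_body):
    """Sketch of one proxy step: inspect a Docker 'create container' call.

    If the requested network is CNI-managed, flag the request so the proxy
    can invoke the CNI plugin on Docker's behalf, then forward a modified
    call (here: switched to 'none' so Docker does no networking itself).
    Returns (possibly rewritten body, whether CNI should be invoked).
    """
    body = json.loads(raw_body)
    mode = body.get("HostConfig", {}).get("NetworkMode", "default")
    if mode in CNI_NETWORKS:
        body["HostConfig"]["NetworkMode"] = "none"  # plugin wires it up instead
        return json.dumps(body), True
    return raw_body, False  # not a CNI network: pass through untouched

req = json.dumps({"Image": "nginx", "HostConfig": {"NetworkMode": "weave"}})
modified, needs_cni = intercept_create(req)
```

A real proxy would sit between the client and the Docker socket and do this rewriting on the create/start endpoints, but the pass-through-or-rewrite decision is the core of it.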
> Can containers interoperate regardless of the choice of Docker plugin, or will they only work on a plugin based on the weave proxy?
Containers don't care what network you configure. Basically, Docker will just use a bridge interface, assign an IP address to each container on that bridge, and optionally expose ports on the host. Docker's networking support lets it query an external plugin API to obtain the details to use for a container. Kubernetes and Rocket implement a different plugin API for the same purpose. But in both cases, all of this happens before the container is started.
Once it's started, the container just sees an interface bound to a suitable IP, so your containers should not need to care.
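The ordering above can be sketched as a toy model: the runtime asks a network plugin for endpoint details first, and only then starts the container. The function names, the "docker0" bridge, and the 10.32.0.0/24 subnet are all illustrative assumptions, not a real plugin API.

```python
# Toy model of the sequence described above; not a real plugin API.

def bridge_plugin(container_id):
    """Pretend IPAM: deterministically hand out an address on the bridge's subnet."""
    host_part = sum(map(ord, container_id)) % 200 + 2  # avoid .0/.1
    return {"bridge": "docker0", "ip": f"10.32.0.{host_part}/24"}

def start_container(container_id, plugin):
    endpoint = plugin(container_id)  # 1. plugin allocates the endpoint first
    # 2. the runtime would now create a veth pair, attach one end to
    #    endpoint["bridge"], and move the other end into the container's
    #    network namespace with endpoint["ip"] assigned.
    # 3. only then does the container process start -- from inside, it
    #    just sees an ordinary interface with an IP.
    return endpoint

endpoint = start_container("abc123", bridge_plugin)
```

The point is simply that the container never participates in this negotiation; by the time its process runs, the interface already exists.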
Weave Net as a product ships with support for the former; the blog post describes work towards support for the latter.
No, they do not interoperate; there is a detailed argument at http://blog.kubernetes.io/2016/01/why-Kubernetes-doesnt-use-...
[1] Scare-quotes because all of these things are only a few months old.
The developers are friendly, helpful and responsive. But in my testing, Weave simply wasn't adequate. I hope they'll get there.
Without encryption, Weave Net uses the same VXLAN technology as Docker Overlay and runs at close to native speed.
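For context on what VXLAN actually costs: it wraps each inner Ethernet frame in an outer UDP/IPv4 packet, adding a fixed 50 bytes of per-packet overhead, which is why the main practical effect is a smaller effective MTU rather than a throughput cliff. The arithmetic:

```python
# Fixed per-packet VXLAN encapsulation overhead (IPv4 underlay).
VXLAN_HEADER = 8   # flags + 24-bit VNI
OUTER_UDP = 8
OUTER_IPV4 = 20
INNER_ETH = 14     # the encapsulated inner Ethernet header

overhead = VXLAN_HEADER + OUTER_UDP + OUTER_IPV4 + INNER_ETH  # 50 bytes

def overlay_mtu(physical_mtu=1500):
    """IP MTU usable inside the overlay, given the underlay's MTU."""
    return physical_mtu - overhead

print(overlay_mtu())  # 1450 on a standard 1500-byte Ethernet underlay
```

On a jumbo-frame (9000-byte) underlay the overhead is the same 50 bytes, so proportionally it matters even less, which is consistent with near-native speed when no per-packet crypto is involved.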
(note I am a friendly Weave Net developer)
Full disclaimer: I work at Weaveworks.
While bridged and/or overlay networks are easy to understand, native end-to-end routing between containers with regular IP datagrams, plus container-level addressability, has been on my wishlist for a long time.
Occasionally it makes me feel left behind.
Then I remember most of these "new hotness" technologies get abandoned as quickly as they get adopted, so I'm not missing much.