Most importantly, though, I have found it a great project to use, and a really great project to contribute to. I think that contributions are the lifeblood of open-source projects, and I give Kubernetes 10/10 for their community and processes, which I think bodes very well for the future.
This is just another example, an open source version of Borg that will always be a few years behind.
Have standards failed?
There are plenty of people now who need to solve the container problem, but Googlers have been working on this shit for years, before it was really on anyone else's horizon. Google employees incepted the cgroup feature way back in 2006, to solve problems that were already being felt acutely at that time within Google. Folks have been working on this stuff a long time before it mattered to anyone else, and that's why what's coming out is software rather than standards. There is no way a big company is going to delay solutions to an urgent strategic problem in order to be part of a democratic process for the sake of a few people's ideals. Maybe if they'd seen it coming five or ten years in advance, to give enough time for the standardization process to occur, but Google was far too small and the future far too uncertain in 2001 to predict what might be needed in 2006.
Compiler toolchain, devops, sysadmin, languages, runtimes, etc don't really fit that picture.
TL;DR: we are at a stage where we don't yet know what functionality needs to be supported. It is not a good time to form standards; it is a time for exploring possibilities and finding the best technical solutions.
That said, there are times when the business model behind an open-source project is ostensibly at odds with standardization. The situation between Docker and CoreOS' app container spec comes to mind. The Docker container spec was defined by the implementation, not the other way around, and CoreOS took that opportunity to define an actual spec (and an implementation). Heated debates erupted.
In the area of cloud orchestration which Kubernetes seems to fill, I think it's still in a "discovery" stage. Early on the Kubernetes devs said they wanted to focus first on identifying the right abstractions. I imagine standardized specs might come out of it once things stabilize.
Instead, let anyone work on it and benefit.
While people are raving about containers, there are still security issues with containers no?
I think VMs will be here to stay for a long while; we might have to pay a performance and memory cost for them, but they offer better isolation.
So you've removed a major barrier to deploying more than one app per server. You no longer need to worry about dependency hell and you've made moving services around super easy. You decide you can save a crap ton of money by sharing resources; this is where the problem lies. If you run multiple apps on the same server without a container layer, you'll still have the same app isolation concerns, only attackers now don't have a container to escape from, and you might have dependency problems.
So the point is, you can't rely on Docker isolation instead of VMs from a security point of view, but if you stick with a single container per VM, you'll still get the deployment benefits, such as the ability to build immutable, reproducible artifacts and deploy those. This is, in my opinion, an improvement over trying to reproduce builds on different platforms or scp'ing your builds hoping all the required packages are vendored, etc. Maybe not a big deal if you're deploying Go, but a really nice thing when working with PHP, Ruby, Python, etc.
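A rough sketch of what that looks like in practice: a Dockerfile bakes the app and all of its dependencies into one image, and you run that same image on each VM. (The file names, base image, and app entry point here are purely illustrative, not from any particular project.)

```dockerfile
# Illustrative only: package a Python app and its dependencies
# into a single image that runs identically on every VM.
FROM python:3

WORKDIR /app

# Install dependencies first so they are cached between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself.
COPY . .

CMD ["python", "app.py"]
```

Build it once (`docker build -t myapp .`), push it to a registry, and every VM pulls and runs the exact same artifact, rather than rebuilding per host and hoping the package versions line up.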
In Linux, perhaps. However FreeBSD jails and illumos zones are rock-solid. There's this crazy hype around containers these days and people just ignore the stable, secure, and tried technology, I don't understand it at all!
FreeBSD and illumos are not Linux, but they're still Unix-like; it's not like you'd have to use OpenVMS. Plus you'd get other benefits too, like DTrace and ZFS. And on illumos now you can even run Linux binaries in a zone.
So why do people simply pretend these secure technologies don't exist? Can someone explain?
That's if you generalise from Docker. There are other container models on Linux -- LXC, lmctfy, Rocket, Garden, etc. have different security tradeoffs.
In any event, Google Compute is a terrible user experience compared to the likes of AWS and other cloud providers. Heck, even the shittiest VPS providers tend to be better than Google Compute. So open sourcing their "secret sauce", as the article puts it, is still missing key bits, so I don't know how many people actually fall for the goodwill part.
I am not sure which projects you have looked at from Google in terms of open source, but in the case of Kubernetes we have worked pretty hard to engage a community outside of Google and work with the community to make sure that Kubernetes is solid. One of the things that I like about it is that many of the top contributors don't work at Google. Companies like Red Hat have worked very closely with us to make sure that (1) Kubernetes works well on traditional infrastructure, (2) it is a comprehensive system that meets enterprise needs, and (3) the usability is solid. Companies like Mirantis are working to integrate Kubernetes into the OpenStack ecosystem. The project started as a Google thing, but is bigger than a single company now.
Another thing worth noting: building a hosted commercial product (Google Container Engine) in the open by relying exclusively on the Kubernetes code base has helped us ensure that what we have built is explicitly not locked into Google's infrastructure, that the experience is good (since our community has built much of the experience), and that the product solves a genuinely broad set of problems.
Also consider that many of our early production users don't run on Google. Many do, but many also run on AWS or on private clouds.
-- craig
Disclaimer: I work for Pivotal, in Pivotal Labs.
I'm sorry you seem to have had a bad experience with GCE, but please know that Kubernetes runs on several other clouds, too, with no crippleware or anything. It is 100% open.
Yes, sometimes development/testing for new Kubernetes features 'feels' like it's focused first on GCE functionality (before other platforms), and earlier on it had some hooks that weren't great (like GCE-only external load balancers and storage). But hey, it's not even v1.0 yet - and all those things are either fixed or being worked on already.
And as a non-GCE user, you aren't a second-class citizen. It works everywhere.
We've deployed successfully in AWS, vagrant and bare-metal (in the garage), so far. All with 'one-command' automated deployment and re-use of our pod & service specs throughout.
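For context, the kind of pod & service spec that can be re-used unchanged across those environments looks roughly like this (a minimal sketch; the names, labels, and image are placeholders, not from our actual deployment):

```yaml
# Illustrative pod + service pair; identical on AWS, vagrant, or bare-metal.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web     # routes traffic to pods carrying this label
  ports:
  - port: 80
```

Nothing in the spec names a cloud provider, which is what makes the 'one-command' deployment portable.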
Roadmap/architecture-wise, it would be good to see a more 'pluggable' approach for 3rd-party integration (more like an OpenStack model), but again, we're still pre-v1.0...
Also, I think the Google folk here are being very 'reasonable' in their replies. Your comment was misdirected & ill-informed. Go do some reading or watch Kelsey Hightower's presentation from a couple of months ago:
http://chariotsolutions.com/screencast/philly-ete-2015-16-ke...
Google have dedicated developers who are hacking on a lot of open source projects - not just Kubernetes - which takes a significant amount of time.
After all, this is for all open-source users out there. It's all open source; you don't have to use it.