If there's one area that is in dire need of improvement, though, it's the documentation. If you look around, there is essentially no documentation that starts from first principles, going through the different components (and their lifecycle, dependencies, requirements and so on) one by one, irrespective of the cloud environment. There is a "Kubernetes from scratch" [1] document, but it's just a bunch of loose fragments that lacks almost all the necessary detail, and has too many dependencies. (Tip for such a guide: have the user install from source, and leave out images, cloud providers and other things that obscure how everything works.)
Almost all of the documentation assumes you're running kube-up or some other automated setup, which is of course convenient, but hides a huge amount of magic in a pile of shell scripts, Salt config and so on that prevents true understanding. If you run it for, say, AWS, you'll end up with a configuration that you don't understand. It doesn't help that much of the official documentation is heavily skewed towards GCE/GKE, where certain things have a level of automatic magic that you won't benefit from when you run on bare metal, for example. kube-up will help someone get a cluster up and running fast, but it does not help someone who needs to maintain it in a careful, controlled manner.
Right now, I have a working cluster, but getting there involved a bunch of trial and error, a lot of open browser tabs, source code reading, and so on. (Quick, what version of Docker does Kubernetes want? Kubernetes doesn't seem to tell us, and it doesn't even verify it on startup. One of the reefs I ran aground on was that Docker 1.11 didn't work, and I had to revert to 1.9, based on a random GitHub issue I found.)
[1] http://kubernetes.io/docs/getting-started-guides/scratch/
If I had to choose today, or this quarter, we would likely go the Empire route and build on top of ECS. Our model and requirements are a bit different, though, so we'd have to heavily modify it or roll our own.
All you need to do, in broad strokes, is:
* Set up a VPC. Defaults work.
* Create an AWS instance. Make sure it has a dedicated IAM role that has a policy like this [1], so that it can do things like create ELBs.
* Install Kubernetes from binary packages. I've been using Kismatic's Debian/Ubuntu packages [2], which are nice.
* Install Docker >= 1.9 < 1.10 (apparently).
* Install etcd.
* Make sure your AWS instance has a sane MTU ("sudo ifconfig eth0 mtu 1500"). AWS uses jumbo frames by default [3], which I found does not work with Docker Hub (even though it's also on AWS).
* Edit /etc/default/docker to disable its iptables magic and use the Kubernetes bridge, which Kubelet will eventually create for you on startup:
DOCKER_OPTS="--iptables=false --ip-masq=false --bridge=cbr0"
* Decide which CIDR ranges to use for pods and services. You can carve a /24 from your VPC subnet for each. They have to be non-overlapping ranges.
* Edit the /etc/default/kube* configs to set DAEMON_ARGS in each. Read the help page for each daemon to see what flags it takes. Most have sane defaults or are ignorable, but you'll need some specific ones [4].
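For illustration, here's roughly what those /etc/default/kube* files can end up looking like on a single-master setup. The flag names are from the 1.3-era daemons, but the addresses and CIDR values below are made-up examples, not a tested configuration; the gist [4] is the authoritative version.

```shell
# Illustrative DAEMON_ARGS values for each /etc/default/kube* file.
# All addresses and CIDRs are example placeholders.

# /etc/default/kube-apiserver
DAEMON_ARGS="--etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.0.1.0/24 --insecure-bind-address=127.0.0.1"

# /etc/default/kube-controller-manager
DAEMON_ARGS="--master=127.0.0.1:8080 --cloud-provider=aws"

# /etc/default/kube-scheduler
DAEMON_ARGS="--master=127.0.0.1:8080"

# /etc/default/kubelet (pod CIDR must not overlap the service CIDR above)
DAEMON_ARGS="--api-servers=http://127.0.0.1:8080 --cloud-provider=aws --configure-cbr0=true --pod-cidr=10.0.2.0/24"

# /etc/default/kube-proxy
DAEMON_ARGS="--master=http://127.0.0.1:8080"
```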
* Start etcd, Docker and all the Kubernetes daemons.
* Verify it's working with something like: kubectl run test --image=dockercloud/hello-world
Unless I'm forgetting something, that's basically it for one master node. For multiple nodes, you'll have to run Kubelet on each. You can run as many masters (kube-apiserver) as you want, and they'll use etcd leases to ensure that only one is active.
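Stitched together, the tail end of those steps looks something like the following. The service names are assumed from the Kismatic packages mentioned above, and the smoke-test commands obviously need a live cluster; treat this as a sketch of the ordering, not a script to paste blindly.

```shell
# Start everything on the master, in dependency order
# (service names assumed from the Kismatic packages).
sudo service etcd start
sudo service docker start
sudo service kube-apiserver start
sudo service kube-controller-manager start
sudo service kube-scheduler start
sudo service kubelet start
sudo service kube-proxy start

# The node should register itself and go Ready after a minute or so.
kubectl get nodes

# Smoke test: run a throwaway pod and expose it through an ELB.
kubectl run test --image=dockercloud/hello-world --port=80
kubectl expose deployment test --type=LoadBalancer --port=80
kubectl get svc test   # wait for the ELB address to appear
```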
[1] https://gist.github.com/atombender/3f9ba857590ea98d18163e983...
[2] http://repos.kismatic.com/debian/
[3] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_m...
[4] https://gist.github.com/atombender/e72c2acc2d30b0965543273a2...
This is the best infrastructure I've ever used in twenty years of doing ops and leading ops teams.
Also, working with k8s will probably spoil you: it's pretty annoying to "go back" to other environments, where you're confronted with problems that would be effortlessly solvable in Kubernetes.
I know Silicon Valley folks are infinitely pessimistic and/or grandiose, but this is LITERALLY the reason I got into this job.
Disclosure: I work at Google on Kubernetes
If you are interested in some of the things that we helped get into this release, see our "preview" blog post from a few weeks ago, covering RBAC, the rkt container engine, a simpler install, and more: https://coreos.com/blog/kubernetes-v1.3-preview.html
Can't wait to continue the success with v1.4!
Watching issues like https://github.com/kubernetes/kubernetes/issues/23478 and https://github.com/kubernetes/kubernetes/issues/23174, I'm not super interested in "kicking the tires"; I'm evaluating replacing all our environment automation with a version built around Kubernetes. Easy-up scripts that hide a ton of nasty complexity won't do the trick.
Following the issues, I'm getting the impression that too much effort is being put into CM-style tools versus making the underlying components friendlier to set up and manage. Did anyone see how easy it is to get the new Docker orchestration running?
Then there is the AWS integration documentation. I'm following the hidden aws_under_the_hood.md updates, but I'm still left with loads of questions, like: how do I control the created ELB's configuration (cross-zone load balancing, draining, timeouts, etc.)?
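To make the question concrete: in 1.3, the ELB comes out of nothing more than a Service of type LoadBalancer, which is exactly why there's nowhere obvious to hang settings like draining or timeouts. A minimal sketch (the name, selector and ports are made up for illustration):

```shell
# A Service of type LoadBalancer is all Kubernetes needs to provision
# an ELB on AWS -- note how few ELB-level knobs are exposed here.
cat > hello-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
EOF
# kubectl create -f hello-svc.yaml   # run against a live cluster
```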
I re-evaluate after every update, and there are some really nice features being added, but at the end of the day ECS is looking more and more like the direction to go for us. Sure, it's lacking a ton of features compared to Kubernetes, and it's nigh on impossible to get any sort of information about roadmaps out of Amazon... But it's very clear how it integrates with ELB and how to manage the configuration of every underlying service. It also doesn't require extra resources (service or human) to set up and manage the scheduler.
"Federation" in this context is across clusters, which is not something other systems really do much of, yet. You certainly don't want to gossip on this layer.
"evaluating replacing" really does imply "kicking the tires". Put another way - how much energy are you willing to invest in the early stages of your evaluation? If a "real" cluster took 3 person-days to set up, but a "quick" cluster took 10 person-minutes, would you use the quick one for the initial eval? Feedback we have gotten repeatedly was "it is too hard to set up a real cluster when I don't even know if I want it".
There are a bunch of facets of streamlining that we're working on right now, but they are all serving the purposes of reducing initial investment and increasing transparency.
> how easy it is to get the new Docker orchestration running
This is exactly my point above. You don't think that their demos give you a fully operational, secured, optimized cluster with best-perf networking, storage, load-balancing etc, do you? Of course not. It sets up the "kick the tires" cluster.
As for AWS - it is something we will keep working on. We know our docs here are not great. We sure could use help tidying them up and making them better. We are just BURIED in things to do.
Thanks for the feedback, truly.
Whatever you may think of my level of knowledge or weak knees for consensus and gossip protocols, these problems (perceived or otherwise) with setup, documentation, and management seem pretty widely reported.
EDIT: I hope this doesn't sound too negative. Kubernetes IS getting better all the time. I only write this to give a perspective from somebody who would like to use Kubernetes but has reason for pause. Our requirements are likely not standard; our internal bar for automation and ease of use is quite high. We essentially have an internal, hand-rolled, Docker-based PaaS with support for ad-hoc environment creation (not just staging/prod). We would like to move away from holding the bag on our hand-rolled stuff and adopt a scheduler :) Deciding to pull the trigger on any scheduler, though, would commit us to a rather large amount of integration effort just to reach parity with the current solution without being riddled with regressions.
The debate between automation and simplification is one that has gone on since k8s 1.0 and will likely continue. But I think to an extent it is a false choice: I created a new k8s installation/ops tool (i.e. did work on the "automation" side), and out of that I got a fairly clear roadmap for how we could simplify the installation dramatically in 1.4. In other words, some of the simplification you ask for comes from embedding the right pieces of the automation. k8s is open source, so I have to persuade others of this approach, but I think that's a good thing, and I'd hope you'd join in that discussion too (e.g. #sig-cluster-lifecycle on the k8s Slack).
To be clear, nothing is "masterless" - please go check out the production deployments for other container management solutions, they all require a separate control plane when running in production with a cluster of any reasonable (>64 nodes) size. FYI, it's a best practice when running a cluster of any size to separate the control plane.
To your direct question, with the other orchestration tools, how would you manage your ELB? Wouldn't you have your own management? They don't (to the best of my knowledge) do any sort of integration - not even the minimum level that Kubernetes does.
Disclosure: I work at Google on Kubernetes
As others in the thread mentioned, this was just the cut of the binary; we'll be talking a lot more about it, updating docs, and sharing customer stories in the coming weeks.
Thanks, and please don't hesitate to let me know if you have any questions!
Disclosure: I work at Google on Kubernetes.
Disclosure: I do not work at Google
The experience (definitely something I'm looking forward to) needs a lot of improvement if your laptop has an Apple logo on it. Hopefully some part of the team is working on that :)
https://github.com/kubernetes/minikube
Disclosure: I work at Google, on minikube.
Sounds pretty interesting, especially all the part about service discovery & node health/replacement.
Anyone using it for production?
Otherwise, there's a list at http://kubernetes.io/community/, including: New York Times, eBay, Wikimedia Foundation, Box, Soundcloud, Viacom, and Goldman Sachs, to name a few.
A final build of 1.3 was tagged with an accompanying changelog and announcement post. I found it weird that there was no more ceremony, nor any prior submission on HN, and since it had been announced on the kubernetes-announce mailing list [0] 17 hours earlier, I figured its existence would be interesting to the community, so I submitted it in good faith.
In any case, kudos to everybody working on it and congratulations on the release, whether it's this week or the next.
[0]: https://groups.google.com/forum/#!topic/kubernetes-announce/...
My understanding is that with the timing of the US holiday, it made more sense to hold off on the official announcement for a few days. So that's why there aren't more announcements / release notes etc; and likely there won't be as many people around the community channels to help with any 1.3 questions this (long) weekend.
You should expect the normal release procedure next week! And if you want to try it out you can, but most of the aspects of a release other than publishing the binaries are coming soon.
I'm the new executive director of the Cloud Native Computing Foundation, which hosts Kubernetes. We have end user members like eBay, Goldman Sachs and NCSoft, but we're in need of startup end users (as opposed to startup vendors, of which we have many).
Please reach out to me at dan at linuxfoundation.org if you might like to be featured in a case study.
Great to see an OpenStack provider's been added, too.
https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation-high-level-arch.png
https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federated-api-servers.md
Though there are many issues in discussion. Anything in particular you want to work on?
Disclosure: I work at Google on Kubernetes

Does anyone have examples of how they are managing deployments? I.e. deploying an app update, running DB migrations perhaps?
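One pattern from around this era, sketched here with entirely hypothetical names and images: run schema migrations as a one-shot Job, wait for it to complete, then roll the app forward by applying an updated Deployment manifest.

```shell
# Hypothetical example: a one-shot migration Job that runs before
# the app Deployment is updated. Names and image tags are made up.
cat > migrate-job.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate-v42
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: myapp:v42
        command: ["./manage.py", "migrate"]
EOF
# kubectl create -f migrate-job.yaml       # wait for the Job to complete,
# kubectl apply -f myapp-deployment.yaml   # then roll the Deployment to the
#                                          # new image tag
```

The Job's completion gates the rollout, so a failed migration leaves the old app version serving traffic.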
http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cl...
Disclosure: I work at Google on Kubernetes
Disclosure: I work at Google on Kubernetes.
Not that it's very exciting to anyone who is familiar with Services + Pod networking, but there's a video demo: https://asciinema.org/a/48294
Kudos to them, and awesome to see people working to get Kubernetes to work on Azure.
> AWS
> Support for ap-northeast-2 region (Seoul)
What does this mean? How can K8s be tied into something as specific as an AWS region?

Just set
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=ap-northeast-2a
kube-up.sh
Here's the PR
https://github.com/kubernetes/kubernetes/pull/24464

In addition, different regions support different AWS features/products, and being a newer region usually means the least amount of support. So any setup tooling or infrastructure integration needs to account for those differences and use alternatives if certain services aren't available.