What I'm waiting for, though, is for a big player to do a modern, clean "Kubernetes first" cloud offering. We're currently on Google Kubernetes Engine, and I'm disappointed in the lack of integration throughout Cloud Platform. GCP is great in many areas, but the Kubernetes offering is clearly an afterthought. As an example, if you create an ingress, this creates a bunch of load balancer objects (forwarding rules, URL maps, backends). But these are all given obscure names (k8s-um-production-zippediboop-f8da5e6c92f38300), and none of the Kubernetes labels transfer over. Same with disks; GCE disks don't even have labels, so they put a bunch of JSON in the "description", which of course cannot be filtered or queried. Similar things happen with VMs and other things; the "mapping" of Kubernetes to GCP concepts is basically either lossy or non-existent. Many other similar weaknesses exist: Cloud SQL (if you want to use Cloud SQL, you have to manually run the Cloud SQL Proxy as a sidecar), GCP service accounts (different from Kubernetes service accounts!), etc. GKE is solid, but everything seems like it's been put together without any over-arching design; GKE is just another thing buried in a somewhat ramshackle jungle of different services.
There's an opportunity for a vendor to come in and offer a platform where you start with Kubernetes. In particular, this means that the platform should be manageable from Kubernetes, through the use of CRDs and pluggable controllers. For example, I should be able to define a persistent disk using a Kubernetes manifest, instead of having to go through some web UI or CLI ("gcloud compute disk create" or whatever).
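To make that concrete, here is the kind of manifest I'm imagining. This is a hypothetical CRD instance: the `ComputeDisk` kind, its API group, and all its fields are invented for illustration; no such resource exists on any cloud today, which is exactly the gap.

```yaml
# Hypothetical: a cloud disk declared as a Kubernetes object and
# reconciled by a vendor-supplied controller. Kind and fields are
# made up for illustration.
apiVersion: cloud.example.com/v1
kind: ComputeDisk
metadata:
  name: pg-data
  labels:
    app: postgres
spec:
  sizeGb: 200
  type: pd-ssd
  zone: us-central1-a
```

The point is that labels, RBAC, and `kubectl get` would then work uniformly for cloud resources, instead of the lossy mapping we have now.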
That said, it's hard to compete with GCP at this point. Whoever wants to compete in this space has to compete not just with GKE, but with the technical awesomeness of GCP.
I wouldn't be so sure about that. Consulting companies are drawn to operational complexity, of which Kubernetes has plenty, like bears to honey.
https://kubernetes.io/partners/
https://www.google.com/search?q=kubernetes+consulting&ie=utf...
Thing is, can you point to alternatives which are simpler? I cannot think of any, unless you only have a handful of containers to manage.
Even things like AWS's own ECS become difficult to manage as you grow.
To me this is just another hype train, like TensorFlow/Caffe before it, and like Hadoop/Spark before that.
These tools all have their uses in specific, non-common cases, but for your average business they are a net loss in time and money.
In my experience building and running Kubernetes clusters for Fortune 500 companies, it's actually much simpler; it's just different from traditional infrastructure.
Different != Complex.
After years of using all the major clouds - Google Cloud has the best performance and primitives, but the worst support and operational/business features. If you want the fastest VMs/storage/networking, and are fine with non-standard APIs and beta SDKs, then it's a great platform and has super simple billing.
AWS and Azure are much better suited for most companies and startups though in terms of actually getting your business running with standard tooling, large ecosystems, and a massive portfolio of services.
What do you mean? You can do exactly that today, with persistent volume claims. K8s will create the storage volume for you and make it available for containers. This is "dynamic provisioning".
https://kubernetes.io/docs/concepts/storage/persistent-volum...
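For anyone who hasn't used it, a minimal PersistentVolumeClaim is all it takes; the cluster's provisioner creates the underlying disk for you. (The storage class name varies by cluster; `standard` happens to be GKE's default.)

```yaml
# Dynamic provisioning: applying this PVC makes the cluster's
# provisioner create a matching disk behind the scenes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # GKE's default; check your cluster's classes
  resources:
    requests:
      storage: 10Gi
```

A pod then just references it via `volumes[].persistentVolumeClaim.claimName: my-data`.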
What is missing is being able to create other resources. For instance, define an EC2 instance as a k8s object and have K8s provision it, for legacy systems or anything you haven't moved to K8s yet (databases, etc.). I see no technical reason why it couldn't be implemented today.
We are basically bringing all the stuff we learned building Cloud Foundry to bear on Kubernetes. There's also an increasing amount of systems being shared. Right now Istio is seen as the long-term north-south and east-west routing solution for Cloud Foundry App Runtime (CFAR, what folks traditionally thought of as Cloud Foundry). We're looking at introducing Loggregator to the Kubernetes world as Oratos, we have factored buildpacks into standalone systems that can have a single image builder for any OCI-compliant platform, we led the formation of the Open Service Broker API effort that creates a common service catalogue approach for both CFAR and Kubernetes, we've contributed to runC and I lose track of all the rest.
When I say "we" I should say this isn't just Pivotal. IBM, SAP and SUSE are deeply involved as Cloud Foundry community members.
Disclaimer: I work on CSE and use it for K8s dev/test. It's really handy.
IBM Cloud Private is a Kubernetes-based private cloud offering, though from your other points I don't think it will have everything you want.
If the latter, I am not so sure (or else I don't understand your point.) A lot of things don't/won't run well on Kubernetes--database management systems are a good example as well as any legacy application, which includes a lot of Windows OS, so you still have to address those.
More subtly Kubernetes is not going to rewrite the whole world or implement distributed firewalling, network attached storage, VLANs, etc. So you are always going to have a non-K8s layer under there that is more or less foreign to the Kubernetes model. The best you can do is make the layering relatively efficient.
Databases run fine on Kubernetes and have been doing fine since 1.7, and the meme that Docker is bad for stateful apps is getting a bit old. The challenges are mostly the same as with running databases on a VM or bare metal. In particular, you need to know how to manage HA.
The weakest point is perhaps that Kubernetes's scheduler completely ignores disk I/O, so you have to be careful to avoid putting multiple disk-heavy apps on the same node, where they will compete for resources. This includes the file system cache; for example, PostgreSQL works best when it can rely on the OS to cache pages, so you don't want other apps (including Docker and Kubernetes themselves) to compete there.
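Since the scheduler won't account for disk I/O for you, one workaround is to label disk-heavy workloads and use pod anti-affinity so they repel each other. A sketch (the `disktype=heavy` label is arbitrary; in a Deployment this goes under the pod template):

```yaml
# Pod spec fragment: keep pods labeled disktype=heavy off
# nodes that already host another such pod.
metadata:
  labels:
    disktype: heavy
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              disktype: heavy
          topologyKey: kubernetes.io/hostname
```

It's coarse (it can't see actual IOPS), but it at least encodes the "don't co-locate these" rule declaratively.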
That said, I wasn't saying that a hosted solution shouldn't also offer VMs. Just that Kubernetes should be the main entrypoint and control plane. And someone figured out a way to run VMs via Kubernetes [1], which is a neat example of a solution to the lack of integration I was complaining about earlier.
[1] https://www.mirantis.com/blog/virtlet-run-vms-as-kubernetes-...
Not saying I've tried it, but some people are having ok luck pinning RDBMS pods in Kubernetes to specific nodes. It goes a bit against k8s principles, but it makes operational sense.
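For what it's worth, that pinning usually comes down to nothing more than a node label plus a nodeSelector (or a nodeAffinity rule), e.g.:

```yaml
# First: kubectl label node db-node-1 role=database
# Then, in the database pod's spec:
spec:
  nodeSelector:
    role: database
```

The label name here is arbitrary; the effect is that the pod only ever schedules onto nodes carrying it.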
Kubernetes is baking in Windows container support for some legacy scenarios. More interestingly (and posted on HN's front page today), there are solutions that allow independent VMs to be run as though they were k8s pods. This provides hybrid models ideal for legacy packaging and maintenance while moving onto new hardware.
> Kubernetes is not going to rewrite the whole world or implement distributed firewalling, network attached storage, VLANs, etc. So you are always going to have a non-K8s layer under there that is more or less foreign to the Kubernetes model
There will always be some natural impedance between hardware and software...
I think this picture is ripe for improvements though, and we're already seeing the edges of it take shape :)
Kubernetes will be moving towards smarter networking solutions to handle more and better use-cases with better performance (BPF), and is incorporating better network abstractions slowly but surely.
From the under layers: software defined networking (SDN), and kubernetes providers for major virtualization platforms (VMware, for example), have commercial offerings that could readily support integrated or "hyperconverged" operations... Microsegmentation and microservices go hand-in-hand. Empowered by a platform connected with role based access controls top-to-bottom there's a lot of potential to harmonize those distributed firewalling/VLAN needs through the same declarative YAML.
We're not there, yet. We are closer than ever though :)
Frankly, it was a supremely painful platform to work on. They obfuscated just enough of the k8s API to make it simultaneously completely unintuitive for less "orchestration minded" team members, yet severely underpowered for me and my fellow platform workers. It struck an unhappy medium for our team that no-one wanted or needed.
All my looks at OpenShift suggest that not enough has been peeled back to make it a useful platform-on-a-platform, but since they probably paved the way for certain features, those features have predominantly been implemented (and more tightly integrated) into k8s. Red Hat is going to need to come up with a new value proposition for OpenShift for it to ever be a truly viable alternative to "raw" Kubernetes, and given their buy-in to the technology, I'm not convinced they'll want to throw away the work they've done so far. Good money after bad, or something.
Not that Google is better here, but I already know what GCP provides; something that wants to compete with GCP/GKE really needs to explain why they're a viable competitor.
(And to people who design these things: If you need to have a "Products" dropdown filled with unexplained product names, you're doing something wrong.)
Also there’s already work being done on supporting cluster management from Kubernetes itself: https://github.com/kubernetes-sigs/cluster-api
As someone who is feeling burned over picking swarm over k8s, I'm not super into the idea of trusting a 3rd party to do k8s first.
Anything anyone implements well will be ported to gke in due time, no?
Operationalizing K8s can be fraught with vendor specific parts. It doesn't help documentation wise when many things gloss over a concept that is offloaded into GCE or GKE.
According to googlers here on HN, Google does use Kubernetes internally via GCP/GKE. But clearly those apps are in the minority, given Google's huge investment in Borg.
Google's spring-cleaning has nothing to do with their open source projects. Kubernetes is one of their most widely used open source "products" and the industry is creating an incredible community around it with several big players that could easily jump in to take over project leadership and financing.
It is actually a good thing that some k8s founding members left Google, because they already got what they wanted to achieve and went on to work on stuff that Google probably wouldn't have prioritized as much; see Ark and ksonnet.
Kubernetes is IMHO the platform for cloud native applications for the next 5+ years.
Not wanting to use it because Google dropped everyone's darling (Reader) is rather naive.
I wouldn't equate Google's consumer facing products with their cloud/enterprise products.
It's kind of interesting because Google and Microsoft have opened a new front against Amazon to counter their strategy of locking you in with services. If software from CNCF is good enough to be used in place of these services on each cloud provider then you can nullify AWS's advantage there and Google can lure you in with ML and Microsoft with their enterprise experience.
But I'm not sure these projects can make progress faster than AWS can release and update their fantastic array of services. Kubernetes being the obvious exception (when is EKS GA?!?!)
Just want to remind anyone who hasn't tried K8s: you can run Minikube locally right on your laptop to get a taste of its power (and complexity):
https://kubernetes.io/docs/getting-started-guides/minikube/
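Once Minikube is up, a tiny Deployment is enough to poke at the moving parts. A minimal example you can `kubectl apply -f` (nothing here is exotic; `nginx:alpine` is just a convenient stock image):

```yaml
# Apply with: kubectl apply -f hello.yaml
# Then watch it come up with: kubectl get pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
```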
For example, a 3-node Redis Enterprise cluster, all run locally:
https://redislabs.com/blog/local-kubernetes-development-usin...
Currently using GCloud and Stackdriver monitoring, but a few of the tools I am excited about include:
Prometheus / Grafana / ksonnet
KubeFlow ML
https://www.youtube.com/watch?v=I6iMznIYwM8
Istio, for programmatic routing
And, Agones, for game hosting
Recently, I spun up a simple pod-to-pod communication example but I found it pretty difficult. If you look up cluster networking in Kubernetes (https://kubernetes.io/docs/concepts/cluster-administration/n...) you'll find a whole fire hose of different options from ingress to calico to fabric and on and on.
This was what it took for me to try and rubber ducky my way to getting networking to work on Kubernetes, and in the end I had to get help from a friend at work (https://stackoverflow.com/questions/50195896/how-do-i-get-on...). It may be better than what came before, but it's not great.
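For the simple pod-to-pod case, the thing that finally clicked for me is that you usually don't talk to pod IPs directly; you put a Service in front and use its DNS name. Assuming backend pods labeled `app: backend` listening on 8080:

```yaml
# Other pods in the namespace can now reach these pods at
# http://backend (the Service's cluster DNS name).
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```

The CNI plugin zoo (Calico, flannel, etc.) sits below this; for plain pod-to-pod traffic inside one cluster, the Service abstraction is usually all you need to touch.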
After a few days of playing I set up Let's Encrypt with load balancing and a running app (Rails). A remaining issue is persistent volumes and how truly persistent they are. I haven't found out yet which solution I should pick for this. Longhorn is a Rancher product, which is probably what I will read more about now, but I cannot be sure. There are so many concepts and so much terminology that you need to figure out. Having Rancher in between is of course not helping me get a hang of Kubernetes itself.
A while back I was playing with Docker Swarm and I must say that I like Docker Swarm better in the sense that it feels closer to the source and because it is built into Docker. I get a feeling however that Kubernetes is where the future is so learning more about Docker Swarm is probably less worthwhile.
For those of us not backed by venture capital and not charging SV ex-googler rates to our clients, we need something to say "we'll host you on this git push containerization thing, it'll be cheap and easy, and we'll expand when the time comes, and that'll be cheap and easy too".
For $36/month (8GB RAM, 2 vCPU) you get a single node if you just want to have the Kubernetes APIs. We may introduce smaller node plans later in the year.
Unfortunately, not much info on our website but please sign up if you are interested.
I did an (admittedly short) search for Kubernetes and kernel bypass, and the only thing that seemed remotely relevant was [0]; however, it didn't indicate whether they work together.
For background, I work for a Dark Pool Alternative Trading System, and we currently utilize kernel bypass for all of our networking using Solarflare NICs & openonload [1].
In the same vein, I'm curious how containers work with CPU shielding and pinning threads to specific cores. Is it possible, and how do multiple containers on the same box interact in that regard? Do they need to be quasi-aware of each other so as not to pin threads to the same core?
I'd greatly appreciate if anyone with experience with containers can answer these questions. I'm genuinely curious, but it's not worth researching further if there's no solution that can handle these strict requirements (e.g. it's a non-starter if containers increase latency).
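Not an HFT answer, but on the pinning question specifically: the kubelet's static CPU manager policy (`--cpu-manager-policy=static`, introduced in Kubernetes 1.8) gives a container exclusive cores, provided the pod is in the Guaranteed QoS class with integer CPU requests. A sketch (the image name is a placeholder; whether this meets your latency requirements is something you'd have to measure):

```yaml
# With the kubelet running --cpu-manager-policy=static, this
# container gets 2 exclusive cores: Guaranteed QoS requires
# requests == limits, and exclusivity requires an integer CPU value.
apiVersion: v1
kind: Pod
metadata:
  name: pinned
spec:
  containers:
    - name: trading-app
      image: myorg/trading-app:latest   # placeholder
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
        limits:
          cpu: "2"
          memory: 4Gi
```

The kubelet also reserves cores for itself and system daemons, so containers don't need to coordinate with each other; the conflict avoidance happens at the node level.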
[0] https://thenewstack.io/life-post-container-world/ [1] http://openonload.org/
At Datica (where I work), we started in 2014 with a bespoke container orchestration layer. This powered our HITRUST certified Platform. Think Heroku for regulated industries (like Healthcare). After years of hardship trying to keep up with the market we finally decided to go all in on K8s.
Kubernetes gives us the flexibility and community to focus on the compliance and security layer, while not having to build a Platform in house. Until now, no other open source solution had given us this type of flexibility. We're still working toward a GA release, but the speed at which we've been able to move has been incredible.
Now that there is a usable solution for complex orchestration, many newcomers will consider it the go-to solution, regardless of scale. You need nginx and some Python scripts? Kubernetes! You have 100 daily visitors? Kubernetes!
It’s not really their fault, it’s just a sad consequence of a convenient cloud solution.
I found that only Bitbucket's deployments feature is good in terms of simplicity of managing deployments (just like Heroku). Most vendors force me to use their crappy CI solution for doing CD. Why do they want me to migrate to their (very limited) build system? I can pay for a fancy dashboard, but please, let me use whatever I need to build my software. There are many CI options on the market and you just can't build one that fits everybody. But good CD is the way to go and easier to manage. Just give me a hook for registering a new build (say, Docker image versions) and help me manage this stuff. I have tried Spinnaker, but it is too fragile for me: there is no simple way to install it (k8s itself is easier!) and the UI is too rough for a small project.
Good CD is still missing in k8s ecosystem.
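The hook I have in mind could be tiny. A sketch of the core of such a receiver, assuming a deployment and container both named `app` (both names hypothetical); a real version would wrap this in an HTTP server and authenticate the caller:

```python
import subprocess


def rollout_command(deployment: str, container: str, image: str) -> list:
    """Build the kubectl command that rolls a deployment to a new image."""
    return ["kubectl", "set", "image",
            f"deployment/{deployment}", f"{container}={image}"]


def handle_build_hook(payload: dict) -> list:
    """Handle a CI webhook payload like {"image": "repo/app:v1.2.3"}.

    Returns the command it would run, so the rollout logic is testable
    without a cluster.
    """
    cmd = rollout_command("app", "app", payload["image"])
    # subprocess.run(cmd, check=True)  # uncomment to actually deploy
    return cmd
```

That's the whole contract: CI pushes an image, the hook points the deployment at it, and Kubernetes handles the rolling update itself.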
On the minus side, the development has been slow, with major breaking changes happening along the way. And in the latest version, the open-source version has been a bit hobbled. In particular, it doesn't support global secrets, so every project to be built has to be created and managed separately. This is not just if it needs build secrets (such as to access private Go packages, NPM modules, Ruby gems, etc.), but it's obviously also required to authenticate with a container registry. We decided to skip that since it means duplicating the same secrets for every single application. We actually reached out to the Drone guys to ask about enterprise pricing, but they didn't respond (!).
(At the moment we're back to building with Semaphore, which is a hosted solution similar to Travis. It's an old-hat CI system that spins up VMs, so it's slow and awkward to work with when it comes to Docker. But it's working okay at the moment.)
I've heard good things about GoCD [2], though. It's next on my list to investigate.
[1] https://drone.io
We have our own monitoring stack, though, so we don't use the additional Prometheus integrations.
It has great container-native pipelines and good integration with Kubernetes, along with GKE-specific hooks and the ability to run any kubectl command.
One great k8s tool I like is kompose. It allows our devs a very similar interface around secrets, networks, volumes, etc.
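If you haven't seen it, kompose takes an ordinary `docker-compose.yml` and emits the equivalent Kubernetes manifests via `kompose convert -f docker-compose.yml`. For example:

```yaml
# docker-compose.yml: kompose convert turns each service below into
# a Kubernetes Deployment, plus a Service where ports are exposed.
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
  redis:
    image: redis:4
```

That's why the interface feels familiar to devs: they keep writing compose-style definitions and get k8s objects out.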
It's actually the same root word as cybernetics: Greek kybernetes, meaning helmsman or pilot, and by extension governance. Hence tools named like Helm etc.
I believe it's also the root of 'govern' and 'governor', via Latin gubernare.
https://www.infoq.com/news/2015/05/mesos-powers-apple-siri
I would be surprised if they haven't done a k8s PoC at the very least though.
>Big extra points if you have experience with orchestration technologies like: - Mesos/Mesosphere - Kubernetes - Docker [Swarm]"