I have a love-hate relationship with it. It is very complex and builds on five other layers of abstraction (K8s, Envoy, iptables, ...). Grasping what is going on requires you to understand all of those layers first. Istio essentially adds a layer of proxies for all your ingress/egress requests, and from an engineering/performance/cost perspective that is not amazing.
Once it is working and deployed, though, it provides a solid set of features as part of the infrastructure directly. AuthN/Z, mTLS, security, metrics, and logs are all deployed by default without the end user having to do anything.
Eventually I expect Istio will evolve to a model that makes more sense with Ambient/eBPF (for cost/performance reasons).
The community behind Istio is especially helpful and one of the main reasons why we went with this project.
Yeah this is a definite no for me.
A year ago, a number of Envoy gateway maintainers (including Contour) announced their intention to join up to build one implementation of an Envoy gateway. They haven't made a lot of noise since, but they are apparently up to v0.4.
https://blogs.vmware.com/opensource/2022/05/16/contour-and-c...
You can also use the Gateway API to manage Istio. So, if you are using Istio, you probably don't need Envoy Gateway either.
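For anyone who hasn't seen it: Istio registers its own GatewayClass, so a standard Gateway API resource gets reconciled into an Istio-managed Envoy deployment. A minimal sketch (the names, namespace, and backend Service are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway           # hypothetical name
  namespace: default
spec:
  gatewayClassName: istio    # Istio's GatewayClass
  listeners:
  - name: http
    port: 80
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-route             # hypothetical name
  namespace: default
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - backendRefs:
    - name: my-service       # hypothetical backend Service
      port: 8080
```

Since it's the same portable API surface, switching the `gatewayClassName` is the main change needed to move between conformant implementations.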
Wherever you look, it's still Envoy. Unless of course you look at Linkerd, who have their own thing.
We still haven’t achieved an amazing distributed tracing strategy, we don’t use its MySQL or Redis interfaces, and we haven’t rolled out more advanced features like smart retries. It’s hard to get momentum on that versus other must-have work.
But for mTLS and authn and authz, it works great. Thanks for the hard work.
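For the curious, the "smart retries" mentioned above are configured per route in Istio's VirtualService API. A minimal sketch (the `reviews` service name is hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews              # hypothetical service
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3            # total retry attempts
      perTryTimeout: 2s      # deadline per attempt
      retryOn: 5xx,reset,connect-failure
```

The nice part is that this lands in the sidecar, so applications get retries without any client-library changes.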
Reading between the lines, it sounds like the main problem is Google's tight control over the project. Apple contributes to the Swift implementation and MSFT drives the native .NET implementation, but there's little non-Google input in decision-making for Go, Java, C++ core, or any of the implementations that wrap core.
More subjectively, I'm impressed by the CNCF's willingness to stick to their stated graduation criteria. gRPC is widely used (even among other CNCF projects), and comes from the company that organized the CNCF - there must have been a lot of pressure to rubber-stamp the application.
There is little doubt in my mind that gRPC is a larger and more impactful project than Istio.
Full disclosure: This is my tool
A bit awkward to use but lots of great info there. I use it every now and then (I'm a maintainer of Dapr).
On a personal level, it's one of those projects that someone obsessed with "perfect engineering" develops, regardless of the human cost. Crappier solutions (e.g., JSON-over-HTTP) are better in almost all cases.
gRPC _does_ require support for HTTP trailers, which aren't used much elsewhere. If you want to use streaming RPCs, you also need a proxy that doesn't buffer (or allows you to disable buffering).
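To make the trailer point concrete: gRPC puts its final status in HTTP/2 trailers, so any proxy in front of it has to speak HTTP/2 end-to-end and not buffer streams. A minimal nginx sketch (server name and upstream address are hypothetical):

```nginx
server {
    listen 443 ssl http2;          # gRPC needs end-to-end HTTP/2
    server_name grpc.example.com;  # hypothetical

    location / {
        grpc_pass grpc://127.0.0.1:50051;  # hypothetical gRPC backend
        grpc_read_timeout 1h;              # keep long-lived streams open
    }
}
```

A plain `proxy_pass` that downgrades to HTTP/1.1, or a buffering CDN in the path, is the classic way gRPC deployments break.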
You couldn't get a better example of good, pragmatic engineering than gRPC compared to something like CORBA or DCOM. I can't speak to "all cases", but in the cases I've come across it's a much better solution than JSON over HTTP.
Ignore the CNCF for a second. Both are open source, so will survive regardless, but the former has a single vendor behind it, and the latter has almost all the cloud industry.
There are valid use cases for FreeBSD, but the default choice is Linux.
The same could be said about Apple products, but that doesn't mean people should be dissuaded from using them. Quite the opposite: being in charge of a technology means you can be 100% focused on it and relentless about making it great for your customers.
Anyone who reads these comments should be able to get an understanding of both service meshes without having to research other software.
Are you saying that Istio IS Kubernetes and Linkerd is not? I don't think Linkerd WANTS to be Kubernetes.
I love your podcast, Craig, but this "hot take" is too hot to hold
More than half of those companies ran Istio in production at large scale.
Part of these decisions are based on things like “What is the rest of the industry doing?” “How vibrant/diverse is the community?” “How mature is the project _for enterprise adoption_?” “What vendors are available for enterprise support?” “Is it already available in my platform of choice?” etc.
The sting of “picking the wrong container orchestrator” is still fresh in a lot of organizations.
We see Istio make it through these questions with good answers for a lot of organizations where other/alternative service mesh vendors strike out pretty quickly.
This is even before we get to the “feature comparisons” for the use cases these large organizations focus on/have.
https://github.com/cncf/toc/blob/main/proposals/graduation/i...
Same thing happened to Knative.
The CNCF definitely has some politics, but it's been interesting to see large OSS projects be essentially dead on arrival now if they're not in a vendor-neutral holding org.
I personally try to favor vendor neutral projects now. Slightly smaller chance of being burned like I was with Grafana switching licenses.
For each there are conditions, including the number of contributors, the number of companies officially backing it, etc.
Not Kubernetes level https://devboard.gitsense.com/kubernetes/kubernetes but still very good.
Full Disclosure: This is my tool, but I figure the insights would be interesting/useful.
Now the CNCF needs to figure out how to get Istio to work nicely with the networking K8s add-ons.
Jokes aside, Envoy really deserves some spotlight.
How do I do that exactly? Do I need to install some iptables rules inside a pod to redirect pod traffic to Envoy?
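For context, that iptables setup is roughly what Istio's init container (istio-init) does for you at pod startup. A simplified sketch of the idea, not the exact rule set (15001/15006 are Istio's default outbound/inbound sidecar ports):

```sh
# Normally run by istio-init in the pod's network namespace (requires NET_ADMIN).
# Redirect all outbound TCP traffic to the Envoy sidecar's outbound port:
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports 15001
# Redirect all inbound TCP traffic to the sidecar's inbound port:
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports 15006
```

The real rules also exclude the sidecar's own traffic and loopback so Envoy doesn't redirect to itself, which is part of why this is hard to do by hand.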
Also, I'm worried about its pervasiveness. Is it possible to enable those sidecars only on selected pods?
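On the selective-sidecar question: in Istio, injection is opt-in per namespace via a label, and individual pods can then be opted out (or in) with an annotation. A sketch (namespace and pod names are hypothetical):

```yaml
# Opt a namespace in to automatic sidecar injection:
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                 # hypothetical
  labels:
    istio-injection: enabled
---
# ...and opt a single pod in that namespace back out:
apiVersion: v1
kind: Pod
metadata:
  name: no-mesh-pod            # hypothetical
  namespace: my-app
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  containers:
  - name: app
    image: nginx               # placeholder image
```

So nothing gets a sidecar unless its namespace (or the pod itself) explicitly asks for one.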