They also seriously need to give CloudWatch a UI/UX overhaul.
1. https://opencensus.io/introduction/#partners-contributors
E.g. Datadog is basing their newer tracing libraries on OpenTracing, and the Prometheus devs are behind OpenMetrics.
OpenTracing and OpenMetrics are more like API specs, with the actual libraries left to others to implement, and neither is really used standalone, so there's little reason for them to be separate projects. The best option for the industry would be to fold OT and OM into OC and make a single stack, and hopefully include structured logging as well.
E.g. Linkerd gives you service "golden metrics" (success rate, latency distribution, request volume) without any app changes. It can draw the service topology too, since it's observing everything in real time. https://linkerd.io/2/features/telemetry/
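Those golden metrics come from the sidecar proxies, which export Prometheus metrics for every request they see. A success-rate query might look like this (metric and classification label as exported by Linkerd 2's proxy; the `deployment` label and window are illustrative):

```promql
sum(rate(response_total{classification="success", deployment="web"}[1m]))
  /
sum(rate(response_total{deployment="web"}[1m]))
```

The app itself never emits a metric; the proxy classifies each response and the query does the rest.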
There is literally nothing else quite like it in the market, and it gives you distributed tracing, automatic metric collection, and pre-defined alerts for a reasonable price.
https://docs.instana.io/core_concepts/tracing/#supported-tec...
The last thing I would want in a production environment is some third-party software monkey-patching the code at runtime.
What happens when:

- a bug only occurs (due to timing or some other extremely subtle issue) when the monkey-patching is applied?
- there's a bug in the monkey-patching itself (sounds like a fun debugging session!)?
- a library is accidentally monkey-patched with a wrapper meant for a slightly different version, or is falsely detected as a known library (maybe it's a fork)?
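To make the failure modes concrete, here is a minimal sketch of the runtime-wrapping technique these agents rely on. The names (`lib`, `fetch`, `instrument`) are hypothetical, not any real vendor's code:

```python
import time
import types

# Stand-in "library" the agent wants to trace; purely illustrative.
lib = types.SimpleNamespace(fetch=lambda n: n * 2)

def instrument(obj, name):
    """Replace obj.name with a timing wrapper -- runtime monkey-patching."""
    original = getattr(obj, name)

    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return original(*args, **kwargs)
        finally:
            # A real agent would ship this span to a collector;
            # here we just stash the timing on the wrapper itself.
            wrapper.last_elapsed = time.perf_counter() - start

    setattr(obj, name, wrapper)
    return wrapper

instrument(lib, "fetch")
print(lib.fetch(21))  # prints 42 -- same result as before
```

The result is unchanged, but every call now goes through an extra frame with extra timing work, which is exactly where the "only happens with the agent attached" class of bug comes from. And if `fetch`'s behavior changes in a fork, `instrument` keeps wrapping it blindly.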
Give me statically compiled, reproducible, dependency-free musl binaries, bit-for-bit identical to what has been thoroughly tested in CI, any day. That's how you avoid getting woken up at 4am.
This kind of magic should happen at compile time, if at all.
Can we please stop the buzzword train?
I apologize if this is a naive question, but how come this wasn't included as part of the Kubernetes project, given that it has the same Google origins?
That being said, I have been looking for a while and I can't find anyone who uses it in production on a platform other than Kubernetes.
Meshes are a lot more than just sidecar proxying -- they are what make sidecar proxying manageable, and they add a lot of other features like authentication, network policies, various other traffic control policies, service discovery, etc. They are an attempt to do for service-to-service communication what Kubernetes has done for container deployment -- make it abstract and declarative, with configurations that are independent from the underlying implementation.
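As a concrete illustration of "abstract and declarative": Linkerd lets you attach per-route retry and timeout policy to a service through a ServiceProfile resource, without touching the app. This is a sketch in the shape of Linkerd 2's ServiceProfile CRD; the service name, route, and values are illustrative:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: webapp.default.svc.cluster.local
  namespace: default
spec:
  routes:
    - name: GET /books
      condition:
        method: GET
        pathRegex: /books
      isRetryable: true   # mesh retries this route on failure
      timeout: 300ms      # enforced by the sidecar, not the app
```

The sidecars enforce this for every caller of the service, which is the "implement it once, declaratively" win over baking retries into each client library.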
The underlying implementation that works right now is the Kubernetes API and etcd, and alternate implementations need to be provided for those features to work well outside of Kubernetes. I think it will happen sometime in the next few years.
In a monolith you need to implement some of this stuff only once, and you don't need a lot of it at all because you are not making remote procedure calls.