> Use the “record” option for easier rollbacks.
As explained in the documentation here https://kubernetes.io/docs/concepts/workloads/controllers/de... this option records the command you apply with each revision, allowing you to roll back to any previous revision and to see what changed.
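A minimal sketch of that workflow, assuming a Deployment named my-app (the name and file are just examples):

```shell
# Record the kubectl command with each apply; it ends up in the
# kubernetes.io/change-cause annotation of the new revision
kubectl apply -f deployment.yaml --record

# List recorded revisions together with their change-cause
kubectl rollout history deployment/my-app

# Roll back to a specific earlier revision
kubectl rollout undo deployment/my-app --to-revision=2
```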
> Use plenty of descriptive labels.
Not just descriptive – you should also label by version, service, etc. You can also use labels in selectors for load balancers and Ingresses, and query by label from the CLI. This not only makes it easier to find things, it can be very useful in itself, for example when rolling out a new version: label each deployment with version=..., and then just change the label selector of the LoadBalancer.
How to use labels and selectors is explained here https://kubernetes.io/docs/concepts/overview/working-with-ob... (I know this explanation is very limited, but I don’t know of better documentation of this feature)
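A sketch of what that looks like, assuming a hypothetical app called my-app (names, image, and ports are placeholders):

```yaml
# Deployment labelled by app and version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:2.0.0
---
# LoadBalancer Service that routes only to v2 pods; switching
# traffic to another version is just an edit of this selector
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
    version: v2
  ports:
  - port: 80
    targetPort: 8080
```

The same labels work on the CLI, e.g. `kubectl get pods -l app=my-app,version=v2`.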
> Use sidecar containers for proxies, watchers, etc. Don’t use sidecars for bootstrapping. Use init containers instead.
For bootstrapping, Kubernetes runs init containers sequentially, each to completion, before starting the main containers, which makes the startup order deterministic. If you try to do the same with sidecars, you might end up with containers still running when they aren’t necessary anymore, and you also have to build your own deterministic bootstrapping and handle errors in each of them yourself.
Also see https://kubernetes.io/docs/tasks/configure-pod-container/con... and https://kubernetes.io/docs/concepts/workloads/pods/init-cont...
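A minimal sketch, assuming the app depends on a database reachable through a Service named my-database (all names are examples):

```yaml
# The init container blocks until the database Service resolves in DNS;
# the main container only starts after the init container exits successfully
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command: ['sh', '-c', 'until nslookup my-database; do sleep 2; done']
  containers:
  - name: my-app
    image: registry.example.com/my-app:2.0.0
```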
> Don’t use “latest” or no tag.
This is basically common sense, as for any dependency – how projects handle updates differs significantly: some might never make breaking changes, others might break their entire API in every minor release, and as a result your service might end up down. This is the reason why a decade ago every sysadmin used Debian Stable (no breaking changes, ever). On the other hand, if you specify fixed versions, make sure to check for bugfixes manually (e.g., I recently saw a container from a major project that was built with an outdated release of the JVM because they had never updated that version tag).
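In a pod spec that just means pinning the exact release (nginx here only as an illustration):

```yaml
containers:
- name: web
  # Pinned to an exact release – never `image: nginx` or `image: nginx:latest`
  image: nginx:1.25.3
  # Pinning by digest is stricter still: a tag can be re-pushed, a digest cannot
  # image: nginx@sha256:...
```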
> Readiness & liveness probes are your friends.
Readiness and liveness probes are especially useful for load balancing again – they determine whether a pod is ready to serve and automatically remove it from the pool of endpoints behind the Service, so that requests are only routed to pods that are up. You don’t have to use HTTP probes either – for example, several Helm charts for clustered databases use their CLI client as a probe.
More about the probes here https://kubernetes.io/docs/tasks/configure-pod-container/con... and how they affect the pod lifecycle here https://kubernetes.io/docs/concepts/workloads/pods/pod-lifec...
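A sketch of both probe types on a hypothetical container (the image, port, and /healthz path are assumptions, not anything prescribed):

```yaml
containers:
- name: my-app
  image: registry.example.com/my-app:2.0.0
  ports:
  - containerPort: 8080
  # Readiness: while this fails, the pod is removed from Service endpoints
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
  # Liveness: if this keeps failing, the container is restarted
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
```

For the non-HTTP case mentioned above, an exec probe runs any command inside the container, e.g. `exec: { command: ["redis-cli", "ping"] }` for a Redis node.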
________________________
And most of the rest seems pretty obvious. Generally, for people interested in the topic, /r/kubernetes, https://kubernetes.slack.com/ and #coreos on Freenode might be a much better place for a discussion than this HN post of an article with bullet points, no explanation, and countless typographic errors.