So I wrote this https://github.com/icy/gk8s#seriously-why-dont-just-use-kube... It doesn't come with any autocompletion by default, but it's a robust way to deal with multiple clusters. Hope this helps.
Edit: Fix typo err0rs
env/
  account-a/
    dev/
    stage/
    prod/
  account-b/
    dev/
    stage/
    prod/
I keep config files in each directory and call a wrapper script, cicd.sh, to run certain commands for me. When I want to deploy to stage in account-b, I just do:
~ $ cd env/account-b/stage/
~/env/account-b/stage $ cicd.sh deploy
cicd.sh: Deploying to account-b/stage ...
The script runs ../../../modules/deploy/main.sh and passes in configs from the current directory ("stage") and the parent directory ("account-b"). Those configs are hard-coded with all the correct variables. It's impossible for me to deploy the wrong thing to the wrong place, as long as I'm in the right directory. I use this model to manage everything (infrastructure, services, builds, etc.). This has saved my bacon a couple of times; I might have my AWS credentials set up for one account (export AWS_PROFILE=prod) while trying to deploy nonprod, and the deploy immediately fails because the configs have hard-coded values that don't match my environment.
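The wrapper can be sketched roughly like this. This is a minimal, hypothetical reconstruction: the real cicd.sh, its flags, and the config file names are assumptions, only the directory-derived-target idea comes from the comment.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of cicd.sh: the deploy target is derived from the
# current directory, never from flags, so configs and destination match.
set -euo pipefail

# Given a directory like .../env/account-b/stage, print "account-b/stage".
target_from_dir() {
  local env_name account
  env_name=$(basename "$1")                # e.g. "stage"
  account=$(basename "$(dirname "$1")")    # e.g. "account-b"
  printf '%s/%s' "$account" "$env_name"
}

deploy() {
  local target
  target=$(target_from_dir "$PWD")
  echo "cicd.sh: Deploying to $target ..."
  # Hand the hard-coded configs for this exact pair to the shared module,
  # e.g.: ../../../modules/deploy/main.sh ../account.conf ./env.conf
}

target_from_dir /home/me/env/account-b/stage   # -> account-b/stage
```

Because the target is computed from the filesystem, running the same command in a different directory deploys to a different place by construction.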
(If I were redoing this all from scratch, I would just have my interactive terminal show some status-information above the command after I typed "kubectl "; the context, etc. That way, you know at a glance, and you don't have to tie yourself to the filesystem. And, this could all be recorded in the history, perhaps with a versioned snapshot of the full configuration, so that when this shows up in your history 6 weeks later, you know exactly what you were doing.)
With that in mind, I do feel like the concept of an "environment" has been neglected by UI designers. I never know if I'm on production, staging, private preview, or what; either for my own software, or for other people's software. (For my own, I use "dark reader" and put staging in dark mode and production in unmodified mode. Sure confuses people when I share my screen or file bug reports, though. And, this only works if you have exactly two environments, which is fewer than I actually have. Sigh!)
Here's the script (along with a bunch of extra utils): https://github.com/pch/dotfiles/blob/master/kubernetes/utils...
For the several dozen clusters that I manage, I have separate kubeconfig files for each and I use the --kubeconfig flag.
It's explicit and I have visual feedback in the command I run for the cluster I'm running against, by short name. No stupidly long contexts.
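A tiny wrapper makes that pattern convenient. This is a hypothetical sketch: the ~/.kube/clusters layout and the kc name are my assumptions, not the commenter's setup.

```shell
# kc: run kubectl against a cluster addressed by short name, using one
# kubeconfig file per cluster. "kc prod get pods" expands to
# "kubectl --kubeconfig $HOME/.kube/clusters/prod.yaml get pods".
kc() {
  local cluster=$1
  shift
  kubectl --kubeconfig "$HOME/.kube/clusters/$cluster.yaml" "$@"
}
```

The cluster name stays visible in every command you type, and there is no hidden current-context state to forget about.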
I use a custom-written script, but I've used this one in the past - it's pretty nice.
https://github.com/jonmosco/kube-ps1/blob/master/kube-ps1.sh
So if we think we're targeting the dev cluster and run `kubectl -n dev-namespace delete deployment service-deployment`, but our current context is actually pointing to prod, we trigger an error because there is no dev-namespace in prod.
Obviously this safety net can be bypassed if the same namespaces exist across contexts, but it can help in some situations.
(This way, the worst you can do is re-apply some yaml that should’ve already been applied in that cluster anyway)
We also have a Makefile in every directory, where the default pseudo-target is the thing you want 99% of the time anyway: kustomize build | kubectl apply -f -
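Such a Makefile might look like this (a sketch; the diff target is my addition for dry runs, not necessarily part of the original setup):

```makefile
# Hypothetical per-directory Makefile; the default target is the thing
# you want 99% of the time.
.PHONY: apply diff
apply:
	kustomize build . | kubectl apply -f -

# Preview what would change before applying:
diff:
	kustomize build . | kubectl diff -f -
```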
This approach allows the convenience of short, context-free commands without compromising safety, because the context info in the shell prompt can be relied on, due to the isolation.
There are some things which don't work well inside a docker container (port-forwarding for example), but it does make it simple to have isolated shell history, specific kubectl versions, etc.
I myself am quite happy with the basics, but I alias k=kubectl and have a set-context helper that, without arguments, displays the current context. Before doing anything I rename or edit contexts in ~/.kube/config so the target takes a minimal number of characters to type ("proj-prod"). Using -l name= is another help in filtering; jsonpath and jq too. As with database CLI prompts years ago, building up muscle memory also gave me the opportunity to grok the concepts at the same time.
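The alias and helper might look like this (a sketch of what the comment describes; the exact behavior of the helper is assumed):

```shell
alias k=kubectl

# set-context: with no argument, show the current context;
# with an argument, switch to it.
set-context() {
  if [ "$#" -eq 0 ]; then
    kubectl config current-context
  else
    kubectl config use-context "$1"
  fi
}

# Renaming a long generated context to something short to type:
# kubectl config rename-context long_generated_context_name proj-prod
```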
After some attempts with different tooling, I came to like kubernetes for what it can do.
PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]\[\033[01;33m\] [`kubectl config current-context| rev | cut -d_ -f1 | rev`] \[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\] $ '
e.g.,
spec:
  containers:
  - command:
    - bash
    - -c
    - tail -f /dev/null
(and comment out any liveness or readiness probes) Very useful to then `exec` a shell in the pod to debug things, test out different configs quickly, check the environment, etc.
Would like to add that my favorite under-appreciated can't-live-without kubectl tool is `kubectl port-forward`. So nice being able to easily open a port on localhost to any port in any container without manipulating ingress and potentially compromising security.
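For example, a small convenience wrapper around it (hypothetical; the service name and ports here are made up):

```shell
# pf: open a local tunnel to a Service port in the current cluster.
# "pf postgres 5433 5432" expands to
# "kubectl port-forward svc/postgres 5433:5432"; the tunnel lives only
# as long as the command runs.
pf() {
  local svc=$1 local_port=$2 remote_port=$3
  kubectl port-forward "svc/$svc" "$local_port:$remote_port"
}
```

Nothing in the cluster changes; the traffic goes through the API server, so no ingress or firewall rules are touched.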
List the fields for supported resources
This command describes the fields associated with each supported API resource. Fields are identified via a simple JSONPath identifier:
<type>.<fieldName>[.<fieldName>]
Add the --recursive flag to display all of the fields at once without descriptions. Information about each field is retrieved from the server in OpenAPI format. Use "kubectl api-resources" for a complete list of supported resources.
Examples:
# Get the documentation of the resource and its fields
kubectl explain pods
# Get the documentation of a specific field of a resource
kubectl explain pods.spec.containers
Options:
--api-version='': Get different explanations for particular API version (API group/version)
--recursive=false: Print the fields of fields (Currently only 1 level deep)
Usage: kubectl explain RESOURCE [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
$
# Lint a Helm chart
# Good to put in pre-merge checks
$ helm template . | kubeval -
Different from / better than "helm lint" (https://helm.sh/docs/helm/helm_lint/)?
I don't think it validates the Kubernetes resources.
Here's an example:
$ helm create foo
$ cd foo
Then change "apiVersion" in deployment.yaml to "apiVersion: nonsense"
In the linting, I got
$ helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed
$ helm template . | kubeval -
ERR - foo/templates/deployment.yaml: Failed initializing schema https://kubernetesjsonschema.dev/master-standalone/deploymen...: Could not read schema from HTTP, response status is 404 Not Found
kubectl rollout undo deployment <deployment-name>
"You should learn how to use these commands, but they shouldn't be a regular part of your prod workflows. That will lead to a flaky system."
It seems like there's some theory vs. practice tension here. In theory, you shouldn't need to use these commands often, but in practice, you should be able to do them quickly.
How often is it the case in reality that a team of Kubernetes superheroes, well versed in these commands, is necessary to make Continuous Integration and/or Continuous Deployment work?
Example:
The Job "export-by-user" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"416d5527-9d9b-4d3c-95d2-5d17c969be19", "job-name": "export-by-user", Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"....
And it just goes on. The actual error? The job is already running and cannot be modified.
I'm pretty sure that if you have the time to make a PR to fix it, it would be welcome. But I'm guessing it's non-trivial or it would have been fixed by now - probably a quirk of the code-generation logic.
Google had a net income of $17.9 billion in just Q1 of 2021.
I believe they have the resources to fix that, and I will not be shamed into "if you have time, please open a PR towards this opensource project".
> But I'm guessing it's non trivial or it would have been fixed by now - probably a quirk of the code generation logic.
I remember the "quirks of generation logic" being used as an excuse for Google's horrendous Java APIs towards their cloud services. "It's just how we generate it from specs and don't have the time to make it pretty".
For the life of me I can't find the GitHub issue that called this out. Somehow their other APIs (.NET, for example) are much better.
Edit: found it
https://github.com/googleapis/google-cloud-java/issues/2331#... and https://github.com/googleapis/google-cloud-java/issues/2331#...
kubectl get svc -l app=<your-app-name>
For the IP address, why do you need that? With k8s DNS you can easily find anything by name.
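The in-cluster DNS convention makes a Service's address predictable (a sketch; the service and namespace names here are made up):

```shell
# In-cluster, a Service resolves as <service>.<namespace>.svc.cluster.local.
svc_dns() {
  printf '%s.%s.svc.cluster.local' "$1" "$2"
}

svc_dns my-app my-namespace   # -> my-app.my-namespace.svc.cluster.local
```

Within the same namespace, the bare service name (just `my-app`) resolves as well, so hard-coding IPs is rarely necessary.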
But now my choice is Intellij (or other IDEs from JetBrains) + Lens, which I find more productive and straightforward (more GUI, fewer commands to memorize). Here's my setup and workflow:
1. For each repository, I put the Kubernetes deployment, service configurations, etc. in the same directory. I open and edit them with Intellij.
2. There's also a centralized repository for Ingress, Certificate, Helm charts, etc., which I also open with Intellij. Spending some time organizing Kubernetes configs is really worth it. I work with multiple projects, and the configs get overwhelming very quickly.
3. I set Intellij shortcuts for applying and deleting Kubernetes resources for the current configs, so I can create, edit, and delete resources in a blink.
4. There's a Kubernetes panel in Intellij for basic monitoring and operations.
5. For more information and operations, I would use Lens instead of Intellij. The operations are very straightforward, I can navigate back and forth, tweak configurations much faster than I could with the shell command only.
kubectl get po --all-namespaces -o wide | grep $NODE_NAME
Of course this becomes unbearably slow the more pods you have.
https://stackoverflow.com/questions/39231880/kubernetes-api-...
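A server-side alternative avoids the grep entirely: kubectl's --field-selector flag lets the API server filter pods by node. (The flag is real; wrapping it in a function here is just for illustration.)

```shell
# Ask the API server to filter pods by node instead of grepping locally.
pods_on_node() {
  kubectl get pods --all-namespaces -o wide \
    --field-selector "spec.nodeName=$1"
}
```

Only the matching pods cross the wire, so this stays fast even on large clusters.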
For instance, there are three families (r, x, z) that optimize RAM in various ways in various combinations and I always forget about the x and z variants.
So I put together this "cheat sheet" for us internally and thought I'd share it for anyone interested.
Pull requests welcome for updates: https://github.com/wrble/public/blob/main/aws-instance-types...
Did I miss anything?
A console tool has a UI, it's the shell. And GUIs can be self-documenting too: tool tips, help bars, interactive prompts, manuals.
If you don't work at Google, you don't need the complexity of Kubernetes at all, so you'd better forget everything you already know about it. The company would be grateful.
Joke aside, trying to sell something to the masses that could potentially benefit only 0.001% of projects is just insincere.
Pure CV pump and dump scheme.
If what you want to deploy is best described as “an application” it’s probably not the right tool for the job. If what you want to deploy is best described as “50 interconnected applications” it’s probably going to save you time.
This is an excellent way of looking at it. I've struggled for many years to come up with a response to hacker news comments saying you don't need kubernetes, but this sums it up about as well as I could imagine.
Learning how to operate Kubernetes well takes a while, and I would say it is only worth the investment for an extremely tiny percentage of companies.
It's very often the wrong tool for deploying our tiny app, but many of us go along with it because it ticks some management boxes for various buzzwords, compliance, hipness, or whatever. Once you bring out this hammer factory, it's a big and complicated one, so you'll probably need a full-time team to understand and manage it. It's also a metric hammer factory, so you'll need to adapt all your other tooling to interoperate. Most of us can get by with lesser hammer factories; even k3s is less management.
If you just need to deploy some containers, think hard if you want to buy the whole tool factory or just a hammer.
Most of us aren't the engineering heads of our departments. So you'll forgive us if we continue pushing the moneymakers we have in our heads and setting up our homelab clusters. I want to be paid, I want to be paid well. It may as well be pushing the technology stack that scales to megacorps because who knows maybe I'll make it there one day.