Remember the first time you saw the AWS console? And the last time?
Besides, personally I find the AWS console much easier to understand. I don't get why people hate it.
Because it is hard to manage the configuration. It's why tools like terraform exist.
Anecdote. I worked for a small company that was later acquired. It turned out one of the long-time employees had set up the company's AWS account using his own Amazon account. Bad on its own. We built out the infra in AWS. A lot of it was "click-ops". There was no configuration management. Not even CloudFormation (which is not all that great in my opinion). Acquiring company realizes mistake after the fact. Asks employee to turn over account. Employee declines. Acquiring company bites the bullet and shells out a five-figure sum to employee to "buy" his account. Could have been avoided with some form of config management.
That is completely the wrong lesson from this anecdote.
1) The acquiring company didn't do proper due diligence. Sorry, this is diligence 101--where are the accounts and who has the keys?
2) Click-Ops is FINE. In a startup, you do what you need now and the future can go to hell because the company may be bankrupt tomorrow. You fix your infra when you need to in a startup.
3) Long-time employee seemed to have exactly the right amount of paranoia regarding his bosses. The fact that the buyout appears to have killed his job and paid so little that he was willing to torch his reputation and risk legal action for merely five figures says something.
+1 for "click-ops", perfectly put.
Sounds like the tiniest acquisition mistake I’ve ever heard of.
Spin up new instances, load data from snapshots, get back to work.
The console is fine as a learning tool for deployment/management, and for occasional experimentation, monitoring, and troubleshooting, but any IaC tool is vastly more manageable for non-toy deployments where you need repeatability and consistency and/or the ability to manage more than a very small number of resources.
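For a sense of what "repeatability" means here: a resource defined in an IaC tool like Terraform lives in reviewable, diffable code instead of console clicks. A minimal illustrative sketch (the region, AMI ID, and names are placeholders, not from this thread):

```hcl
# Illustrative only: region, AMI ID, and names are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  tags = {
    Name = "web-1"
  }
}
```

Running `terraform plan` then shows a diff between this code and live state, which is exactly the visibility click-ops can't give you.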
Do you need to manage keys when ssh'n into a VM?
Do you know what the purpose of all the products are? If you don't know one, are you able to at least have an idea what it's for without going to documentation?
They have also directly opposed many efforts around Kubernetes, even to their own customers, until they realized they couldn't win. Only then did they cave, and they are really doing the bare minimum. The most significant contribution to OSS they have made was a big middle finger to Elasticsearch...
> How do you view all the VMs in a project across the globe at the same time?
I'm not sure what that's got to do with k8s. I can't see jobs that belong to different k8s clusters at the same time, either.
> Do you need to manage keys when ssh'n into a VM?
Well, in k8s everybody who has access to the cluster can "ssh" into each pod as root and do whatever they want, or at least that's how I've seen it, but I'm not sure it's an improvement.
> Do you know what the purpose of all the products are? If you don't know one, are you able to at least have an idea what it's for without going to documentation?
Man, if I got a dime every time someone asked "Does anyone know who owns this kubernetes job?", I'd have... hmm, maybe a dollar or two...
Of course k8s can be properly managed, but IMHO, whether it is properly managed is orthogonal to whether it's k8s or vanilla AWS.
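For what it's worth, the "who owns this job?" problem is usually attacked with labels and annotations on the manifest; a minimal illustrative Job (the team, contact, and image values are placeholders) might look like:

```yaml
# Illustrative only: owner/team/image values are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report
  labels:
    team: data-eng                       # placeholder owner label
  annotations:
    owner: "data-eng@example.com"        # placeholder contact
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: report
          image: example/report:1.0      # placeholder image
```

Then `kubectl get jobs -L team` answers the ownership question without archaeology, assuming people actually set the labels.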
In most situations where I have a direct comparison, k8s takes less ops. Often thanks to Helm.
The AWS console is designed for lock-in, and I could use configuration management for AWS too, but the time required to go through their way of doing X is just not worth it, unless I want to become an AWS solutions architect consultant.
There was a time in between for me: that was RightScale.
For me, the real thing k8s brings is not hardware infra, but reliable ops automation.
RightScale was the first place where I encountered scripted ops steps, and my current view of k8s is that it is a massively superior operational automation framework.
The SRE teams which used RightScale at my last job used to have "buttons to press for things", which roughly translated to "If the primary node fails, first promote the secondary, then get a new EC2 box, format it, install software, set up certificates, assign an elastic IP, configure it to be exactly like the previous secondary, then tie together replication and notify the consistent hashing."
The value was in the automation of the steps in about 4 domains - monitoring, node allocation, package installation and configuration realignment.
The Nagios, Puppet and ZooKeeper combo for this was a complete pain, and the complexity of k8s is that it is a "second system" from that problem space. The complexity was always there, but now it lives in the reactive ops code, which is its final resting place (unless you make your arch simpler).
If I understand this correctly, all of these things could have been automated in AWS fairly easily.
"If the primary node fails": health checks from EC2 or an ELB.
"get a new EC2 box": an ASG will replace the host if it fails its health check.
"format it": the AMI should handle that.
"install software, setup certificates": user data, or cloud-init.
"assign an elastic IP, configure it to be exactly like the previous secondary, then tie together replication and notify the consistent hashing": this could be orchestrated by some kind of SWF workflow if it takes a long time, or just a Lambda function if it finishes within a few minutes.
By the way, what does ansible do to help with scaling applications?
Honestly, the last time I looked at k8s was about 5 years ago, but back then it looked like a pretty big PITA to admin.
Containers are a standard abstraction over the operating system, not over the hardware (or the VM, even). This has its use cases, but making it “the standard” for deployment of all apps and workloads is just bananas, in my view.