Does that mean that Kubernetes should now be used for all hobbyist projects? No. If I'm thinking of playing around with a Raspberry Pi or other SBC, do I need to install Kubernetes on the SBC first? If I'm thinking of playing around with IoT or serverless, should I dump AWS- or GCE-proprietary tools because nobody will ever run anything that can't run on Kubernetes ever again? If I'm going to play around with React or React Native, should I write up a backend just so I have something to run in a Kubernetes cluster, because all hobbyist projects must run Kubernetes now that it's cheap enough for hobbyist projects? If I'm going to play around with machine learning at home, should I buy a machine with a heavy GPU and figure out how to get Kubernetes to schedule my machine learning workload correctly, instead of just running it directly on that machine, because uhhh maybe someday I'll have three such machines with powerful GPUs plus other home servers for all my other hobbyist projects?
No, no, no, no, no. Clearly.
But maybe I envision my side project turning into a full-time startup some day. Maybe I see all the news about Kubernetes and think it would be cool to be more familiar with it. Nah, probably too expensive. Oh wait, I can get something running for $5? Hey, that's pretty neat!
Different people will use different solutions for different project requirements.
The state of the art for cluster management will probably be something completely different by then. Better to build a good product now, and if you really want to turn it into a startup, productionize it then.
> Maybe I see all the news about Kubernetes and think it would be cool to be more familiar with it.
If learning Kubernetes _is_ your side project, then perfect, go do that. Otherwise it's just a distraction, taking time away from actually building your side project and putting it into building infrastructure around your side project.
If what you really wanted to build is infrastructure, then great, you're doing swell, but if you were really trying to build some other fun side app, Kubernetes is just a time/money sink in almost all cases IMO.
I generally dislike this way of thinking. Infrastructure is a core component of whatever it is you're building, not an afterthought. Maybe you can defer things until a little bit later, but if you can build with infrastructure in mind you'll be saving yourself so many headaches down the road.
You don't need to build with the entire future of your project's infrastructure in mind, but deploying your project shouldn't be "ok now what?" when you're ready, like it was a big surprise.
Hasn't it only changed twice in the prior two decades, first to VMs and now to containers? I don't think this is something you have to worry about long term.
Well, you gotta deploy with something. Why not k8s?
Obviously if you're developing locally, k8s would not help in the least.
For your average developer who just wants to get something running on a port, Kubernetes introduces two barriers: containerization and Kubernetes itself. These are non-trivial things to learn, especially if you don't have an ops background, and both of them add substantial debugging overhead. And again for that developer, they provide very, very small gains.
I think the calculus changes if that developer starts to run multiple services on multiple servers, wants to keep doing that for years, and needs high uptime. I have a bunch of personal services I run in VMs with Chef, and I'm excited to convert that over to Kubernetes, as it will make future OS upgrades and other maintenance easier. But my old setup ran for something like 6 years and it was just fine. For hobbyists whose hobbies don't include playing with cluster-scale ops tooling, I think it's perfectly fine to ignore Kubernetes. It's the new hotness, but it doesn't provide much value for them yet. They can wait a few years; by then the tooling will surely have improved for low-end installs.
From the application-developer side, I'd dispute this. I was told to use Docker + Kubernetes for a relatively small work project recently, and I was able to go from "very little knowledge of either piece of tech" to "I can design, build, and deploy my app without assistance" in about 1 week of actual study time, spread out over the course of the project. And although I have several years' experience, I'm not some super-developer or anything.
What surprised me most is how well-documented (for the most part) everything is. The Kubernetes and Docker sites have a ton of great information, and the CLIs are rich and provide a consistent interface for all the details about your environment. (To tell the truth, that alone makes the time investment worth it.)
After this, there's no way in hell I'm going back to Heroku or similar and trying to piece together their crappy, one-quarter-documented "buildpack" system. I'd take a Kubernetes-and-Docker-first PaaS at a reasonable markup any day of the week.
I don't know if I'm totally convinced by that argument alone, but it would be nice if every critical response didn't seem to assume that every hobbyist is born understanding systemd, ansible, packer, qemu, etc.
Right, but again I think the point being made is that, if those are skills you do want to learn, or plan on using later down the road, it's worth knowing that you can use K8s even at a small scale.
Obviously, in most cases it could be premature optimization, but for some people (including you), it can be fun to learn.
I don't think that is wrong. I do think it is probably overkill, and IMO it does introduce operational burden and complexity. That doesn't mean you shouldn't do it, though, if you're interested in exploring the technology, for example.
It is not like I haven't done it the "old" way. I spent many years doing hand deploys, making deployers, running Ansible/Chef. It is just that we always found we could never confidently update servers running many apps, as it would step on other applications. So we'd just make new ones, test, and switch. This was not an easy process either. Plus we'd encounter issues like someone not writing a startup script, or filling up /var with logs, or having something eat up all the memory. All of these operational problems are gone with K8s. I know what you are thinking: "well, you did it wrong". Yes, sometimes developers do things wrong. But in container/K8s land that wrong stuff is contained, and if you don't do things "right" you can't even run.
So we had operational issues there. Now we have a universal platform that someone can ship their app anywhere and have it run the same. That is a huge win. All for no extra work.
The question isn't really whether you need dozens of machines, it's whether you can foresee eventually maybe needing dozens of machines.
Remember the bad old days when people said that relational databases were worthless because they "don't scale", that using Mongo and other NoSQL databases was practically a necessity for doing anything modern and "web-scale", because otherwise, after you got your big break and got popular, you would need to keep up with all the new traffic and not crash? A lot of engineers have this tendency to worry about scalability long before it's ever a problem. Something about the delusions of grandeur incurred by people who got into engineering because they were inspired by great people building big things.
Starting out by running Kubernetes on a three-node cluster is actually the correct call for a small project if you can reasonably foresee needing to elastically scale your cluster in the future, and don't want to waste days or weeks porting to Kubernetes down the line to deal with your scalability problems that you foresaw having in the first place.
Again, that doesn't mean that Kubernetes is right for every hobbyist project. But there is definitely a (small) subset of hobbyist projects for which it is not overkill.
The difference is that if you build it all by hand, as the author suggests, and it ever scales, you're going to have double the job to make it scale.
It's all a question of: do I think my software will succeed?
If it's a hobby project that will never get big, it's not worth the hassle. If it actually has a chance of succeeding, the small added complexity of Kubernetes will pay dividends extremely quickly when the system needs to scale.
Even with as little as two machines, I'd argue k8s is already adding more value than managing those by hand. People can say otherwise because they're used to it, but being used to it is not the point of the discussion.
The author also talks about Ansible, which is another piece of complexity comparable with doing it in k8s. I'd argue you have less YAML with k8s than with Ansible for a small project.
The only argument I see for doing anything by hand today is if it's a play thing.
IMHO it makes sense for most setups that have multiple microservices that need to interact with each other. A single-node cluster running a single container is kind of pointless; I agree. And you are not going to run much more than that on a micro instance. So I agree with the main point of the article that this probably is not an appropriate setup for any kind of home setup, unless of course you really want to have Kubernetes (which would be a valid reason for attempting this).
If you run multiple microservices, you have most of the problems that Kubernetes solves out of the box, and attempting to solve those by manually cobbling together bits of infrastructure outweighs the financial overhead of running Kubernetes. So for any moderately small setup where you are in any case going to have 2 or 3 machines running multiple containers, you probably should be looking at Kubernetes.
So, if you are on Google or Amazon, hosted Kubernetes is definitely worth considering. You probably want a load balancer as well. So at that point you are looking at ~$50-100+ per month anyway for a couple of instances, an LB, and whatever else you need (e.g. RDS, S3, etc).
For anything running commercially, that's entirely defensible. Yes, you can run cheaper on bare metal, but people tend to forget that all the hours doing devops stuff are also a cost. A day of a competent dev will easily pay for running Kubernetes for quite some time. Unless your devs are super bored, make them spend their hours on more valuable stuff than reinventing wheels.
Well, it shouldn't be used for any but very, very few (if any) hobbyist projects.
Which is closer to this post, than the original article.
>But maybe I envision my side project turning into full-time startup some day.
Facebook managed this just fine as a simple PHP project on some guy's laptop.
The original article concludes with "It's my contention that Kubernetes also makes sense for small deployments and is both easy-to-use and inexpensive today". This article contends that it's not at all easy to use and price-wise he hasn't compared like-with-like.
Exactly. Or maybe "I would love to advance my career and work for a larger company that is using Kubernetes, and I can get some hands on experience without breaking the bank."
"Do you want to do all of this because you think is fun? Or because you want to learn the technology? or just because? Please, be my guest! But really, would I do all of this just to run a personal project? No thanks."
I don't know what term to use, but "full stack" apparently just means front end (HTML/CSS/JS in the browser) and backend (server software). "Full stack" is missing backup and restore, deployment, seamless upgrades (pushing new versions without taking down the service), scaling, testing infrastructure for both front and backend, and staging.
As such, I have a hacked manual backup for my site. Anytime I want to update my site I have to take it offline for at least a moment. If any users were on it they lose their work. If it ever gets too much traffic I'll have to manually figure out how to scale it. It also took at least a week to get it where it is, as in a week doing stuff not my actual site but just learning how to deploy at all.
I can see no reason all of that can't be 100% provided out of the box and if Kubernetes is the path there I'm all for it.
Ideally I want to do something like

    git clone complete-stack
    install deps
    edit server-main.js (or other lang)
    edit client-main.js (or other lang)
    deploy --init

Then I want

    edit server-main.js (or other lang)
    edit client-main.js (or other lang)
    stage
    test
    edit server-main.js (or other lang)
    edit client-main.js (or other lang)
    stage
    test
    deploy

And

    scale --num-servers=2
I would use it for any web-based projects that aren't static servers. I'm told by people who provide tech support for people using Kubernetes that my dream of all of this being provided out of the box is about 10 years out.
Stateful workloads (the database servers) are not quite there yet and remain one of the most challenging parts of K8s. We are just starting to see Operators written for specific datastores (MongoDB, PostgreSQL, Redis, etc.)
I don't know about 10 years ... but it is not that turn-key right now, yet.
He says it's fine to do it if you want to learn the technology, but points out (rightly, I think) that if your concern is that you might need it at some point down the road, worry about it at some point down the road and not when you're trying to get started.
No! Although you could deploy it as a docker container easily.
Simple heuristic: how many containers and machines am I dealing with? More than one machine and one container? Consider moving to Docker Swarm or K8s. A single container? What is there to orchestrate?
I recently set up a DigitalOcean droplet and set up my blog there to actually understand how it works. It was great because I learned a ton and feel in control. Pretty simple setup - a single droplet, Rails with Postgres, Capistrano to automate deploys, and a very simple NGINX config. It took me multiple days to set up everything, compared to the 5 minutes Heroku would have required - and it's not as nice as what Heroku offers.
Still, I'd wait as long as I can to get out of something so simple as Heroku for _anything_. I understand it gets expensive quickly, but I really want to see the cost difference of Heroku vs the time spent for the engineering team to manage all the complexities of devops, automated deploys, scaling, and I'm not even mentioning all the data/logging/monitoring things that Heroku allows to add with 1 click.
Well, if you use a k8s cluster on GKE for example, you will have literally all those things by default. Not even a click needed.
IMO running your own Kubernetes cluster for a company is insanity unless you have a very good reason to do so.
Kubernetes really looks like it was designed by software developers for software developers: dump all the configs of your services in one place, and imagine that they run on the network. The uninteresting parts of the job (like managing nodes and ingress, fixing the internal overlay network and DNS, adding services for centralized logging) aren't mixed with the actual services. Obviously, package management is solved by using containers (essentially OS images) as the package format.
But for now we are in this weird mode where the Kubernetes momentum is eclipsing even Docker, even though raw K8s reminds me of Linux in the Slackware days. There is so much FOMO that people don't consider Heroku or anything off the Kubernetes wagon, except maybe AWS Lambda.
I know this is a dirty thought, here on Hacker News.
Professional mechanics use high grade tools that can cost thousands of dollars each. We have laser alignment rigs, plasma cutters, computer controlled balancing and timing hardware, and high performance benchmarking hardware that can cost as much as the car you're working on. We have a "Kubernetes" like setup because we service hundreds of cars a month.
The shade-tree mechanic wrenching on her fox-body Mustang, on the other hand? Her hand-me-down toolbox and a good set of sockets will get her by for about 90% of what she wants to do on that car. She doesn't need to perform a manifold remap, so she doesn't need a gas fluid analyzer any better than her own two ears.
I should also clarify that these two models are NOT mutually exclusive. If I take home an old Chevy from work, I can absolutely work on it with my own set of tools. And if the shade-tree mechanic wants to turn her Mustang into a professional race car, she can send it to a professional "Kubernetes"-type shop that will scale that car up proper.
With any car it's about making that single car more reliable or performant.
K8s doesn't care about reliability of any single instance, just the uptime of the whole service
It's more like you building your own car versus a Toyota manufacturing plant. You may think procuring and programming the robots to be an overkill for a single car, but it makes sense for a factory.
What do the pros use, then? I hear of things like DC/OS, Openstack, I know that Google's got "Borg", which is like professional k8s.
In other words, I think there are two answers to your question of "what do the pros use?" The first answer is "Kubernetes, because that's the right tool for the job." The second answer is "My product division has an internal team the size of a growth stage startup, and it's specifically dedicated to solving server scaling problems, and that's just my product division."
Another analogy would be the question "how would an F1 team solve this problem?" One answer is "you don't need an F1 team for that", and the other is "first, hire an F1 team, then have them build all of the custom tooling the F1 car needs."
That is to say, if scaling is your primary concern, you have a dozen other things more important to fix than your choice to use shell scripts vs. Kubernetes.
And, fwiw, Linux has run professional services somewhere around 10x longer on those "beginner tools" than containers have even existed.
My current setup uses a couple of Hetzner dedicated machines, and services are deployed with Ansible playbooks. The playbooks install and configure nginx, install the right version of ruby/java/php/postgres, and configure and start systemd services. These playbooks end up copied and slightly modified for each project, and sometimes they interfere with one another in subtle ways (different ruby versions, conflicting ports, etc.)
With my future Kubernetes setup I would just package each project into its own self-contained container image, write a kubernetes deployment/pod spec, update the ingress and I'm done.
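For anyone who hasn't seen one, a per-project spec along those lines can be pretty small. A minimal sketch might look like this (the app name, image, and hostname are made up, and the Ingress apiVersion varies with your cluster version):

```yaml
# Hypothetical per-project manifest: one Deployment, a Service in front
# of it, and an Ingress rule mapping a hostname to that Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myblog
spec:
  replicas: 1
  selector:
    matchLabels: { app: myblog }
  template:
    metadata:
      labels: { app: myblog }
    spec:
      containers:
        - name: myblog
          # Self-contained image with the right ruby/java/php baked in
          image: registry.example.com/myblog:1.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: myblog
spec:
  selector: { app: myblog }
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myblog
spec:
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myblog
                port: { number: 80 }
```

A new project is another file like this with a different name, image, and host; no port conflicts or shared ruby versions to worry about.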
I actually have a weirdly similar setup to you (I run on Hetzner and used and still use ansible), and I've written about it, most recently when I switched my single node cluster to Ubuntu 18.04 [0]. In the past I've also run a single node kubernetes clusters on CoreOS Container Linux, Arch, and back to CoreOS Container Linux in that order, from versions 1.7~1.11.
[0]: https://vadosware.io/post/hetzner-fresh-ubuntu-install-to-si...
I have quite some experience working with Kubernetes clusters for my larger clients. Usually for clients that are big enough to have their own AWS account.
The thing I am still on the fence about is whether I should go for a DIY Kubernetes setup on one or more Hetzner dedicated machines (cheap, more work, less scalable) or if I should just shell out for AWS and run an easily scalable cluster with Kops (which is what I use for some clients) and take advantage of all the AWS goodies like load balancing and RDS.
But I would recommend looking at the Hashicorp stack as a possible alternative, which might be entirely suitable for your use-case without the complexity of Kubernetes. This involves running Nomad and Consul to provide cluster scheduling and service discovery respectively - these are both single binaries with minimal configuration. Then you'd need some kind of front-end load-balancer like nginx or traefik which uses Consul to decide where to route requests.
It doesn't cover all the use-cases and features that Kubernetes does, but it does have the benefit of being much more straightforward to work with, so definitely worth considering!
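To give a flavour of the difference, a minimal Nomad job file is roughly this (names and image are made up, and I'm assuming a recent Nomad version's syntax):

```hcl
# Hypothetical minimal Nomad job: one Docker task, two copies,
# registered in Consul for service discovery.
job "myapp" {
  datacenters = ["dc1"]

  group "web" {
    count = 2

    network {
      port "http" { to = 8080 }   # container listens on 8080
    }

    # Registers the app in Consul, e.g. resolvable as myapp.service.consul,
    # which is what the front-end load balancer uses for routing.
    service {
      name = "myapp"
      port = "http"
    }

    task "server" {
      driver = "docker"

      config {
        image = "registry.example.com/myapp:1.0"
        ports = ["http"]
      }
    }
  }
}
```

It's a single file and a single `nomad job run` to deploy, which is a big part of the "more straightforward to work with" argument.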
As someone who can set up and run a Kubernetes cluster in my sleep, I can tell you that it is a superb production-ready platform that solves many real-world problems.
That in mind, Kubernetes has constraints too: running networked Elixir containers is possible, for example, but not ideal from Elixir's perspective. Dealing with big data takes extra consideration. Etc., etc.
All said, if you have an interest in DevOps/Ops/SysAdmin type technologies, learning Kubernetes is a fine way to spend your time. Once you have a few patterns under your belt, you are going to run way faster at moving your stack to production for real users to start using, and that has value.
I think the initial author (not this article, the other one) was just pointing out that you can indeed run Kubernetes pretty cheap, and that is useful information and a good introduction. This article is clickbait designed to mooch off of the other's success.
I think the point is... do you actually have those problems? A lot of people jump immediately to worrying about having thousands of requests per second when it doesn't make any sense.
Deploying without downtime? Yep, it's nice to have, because your favorite customer will have been testing your site in the exact 2 minutes of downtime in which you deploy it... believe me, Murphy's law rules here.
Staging and Production environments that are the same, so I don't have surprises from local development to production release? Yep, another real problem that will slow momentum of development.
I suppose if you are developing a personal project of garbage that no one will ever see, then these problems don't exist. But if you are actually developing a product, these problems exist.
Kubernetes is likely here to stay. If you're interested in running a cluster to understand what the hype is all about and to learn something new, you should do it. Also, ignore everybody telling you that this platform wasn't meant for that.
Complexity is a weak argument. Once your cluster is running you just write a couple of manifests to deploy a project, versus: ssh into machine; useradd; mkdir; git clone; add virtual host to nginx; remember how certbot works; apt install whatever; systemctl enable whatever; pm2 start whatever.yml; auto-start project on reboot; configure logrotate; etc. Can this be automated? Sure, but I'd rather automate my cluster provisioning.
About complexity, what you're saying is true, but I think "once your cluster is running" is making a lot of assumptions about what is actually running in the cluster in terms of infra and what workloads you can run there.
For Kubernetes, I found the docs a bit bad, the starting concepts are very easy to grok but the docs obfuscate them. I wrote a very short article on the basics [0], for anyone who might be interested in learning. After reading the article, reading the docs should be much easier, as you'll know the terms much more intuitively.
Already answered in a way:
> Can this be automated? Sure, but I'd rather automate my cluster provisioning.
If I need more computational power or a specific 3rd-party service that I don't have available at this point, I simply tear down my current cluster and deploy it elsewhere.
I see you've never tried to upgrade a running kubernetes cluster or been in an on call schedule for one. It's a new technology that is still maturing but it has a lot of moving parts all of which require a fair bit of understanding and which change on a regular basis.
Hell, just a few months ago the ACME agent totally got rewritten and now you have a choice between alpha software or a deprecated project!
Whenever anyone says "just do something" these days, it usually means that it hasn't been thought through properly. Is that only my personal experience?
- pm2 for uptime (pm2 itself is set up as a systemd service; it's really simple to do, and pm2 can install itself as a systemd service)
- I create and tag a release using git
- on the production server, I have a little script that fetches the latest tag, wipes and does a fresh npm install and pm2 restart.
- nginx virtual host with ssl from letsencrypt (setting this stuff was a breeze given the amount of integration and documentation available online)
Ridiculously simple and I only pay for a single micro instance which I can use for multiple things including running my own email server and a git repo!
The only semi-problem that I have is that a release is not automagically deployed, I would have to write a git hook to run my deployment script but in a way I'm happy to do manual deployments as well to keep an eye on how it went :)
I understand why people might not want to invest the time into learning a new technology, but that's not a reason to say it's a bad fit. If you know how to use Kubernetes, writing those few YAML files takes basically the same time as writing those bash scripts, and the end result will be vastly superior on Kubernetes.
[1] Gitea (Github clone), Murmur (voice-chat server), Matrix Synapse (instant messaging), Prosody (XMPP), nginx (for those services and for various static websites)
Run two instances of something if you want to survive a single crash or a node update. Run another copy of your application stack if you want to try out a different version or config.
Without looking at the docs, most of the things in your list are single-instance stateful applications, so unless you plan to run another copy of them for a different purpose, K8S is overkill.
In the end, the steps you take to deploy with rsync and run your systemd service are the same (conceptually) you'd take to run on K8S, but translated to some YAML and a docker push. In one case you need to learn a new paradigm, in the other case you deal with something you already know. Not having to learn something new is an argument, but it doesn't mean your bare-Linux approach is simpler than the K8S approach. You just know it more.
Why separate your code into multiple files? Why write tests? Why use a code linter? Why use virtual environments? Why write a Makefile?
If you're working on a small personal project, or you're a newer developer learning the ropes, or the project is temporary, not important, doesn't need to scale, etc. then it's simply a matter of personal choice. It doesn't make sense to get bogged down learning a lot of tools and processes if there's no compelling business need and you're just trying to get the job done.
If you already know how to use these tools, though, they usually make your life a whole lot easier. I learn how to use complex systems in my career, where they're necessary and helpful. I apply these same tools and practices on my personal projects now, because once you know how to use something like Kubernetes, there's little cost to it and many of the benefits still apply.
Yep, I think you nailed it here.
Unless the personal project is something that you really care about, potential startup or something like that, then obviously you choose something that you are already proficient in because then it's about getting stuff done and moving forward.
So while it may make sense to discuss what technology is good or bad for some kind of companies, I think we won't arrive at any ultimate conclusion like "X is good/bad for personal projects".
As soon as this author mentioned he was happy with using Ansible, systemd, etc. instead (which are all reasonable tools for what they are), he lost me - this is collectively much more work for me as the sole developer than a simple Docker container setup, for virtually all web app projects in my experience. If you understand these relatively complex tools, you can likely learn Docker well enough in about an hour or two; the payoff in future time savings will make this time well spent.
In my experience, "Dockerising" a web app is much, much less time-consuming than trying to script it in Ansible (or Chef, Puppet, <name your automation tool>), and of course much less error-prone too. I've yet to meet an Ansible setup that didn't become brittle or require maintenance eventually. If you are using straightforward technologies (Ruby, Java, Node, whatever), your Dockerfile is often just a handful of lines at most. You can even configure it as a "service" without having to bother with systemd service definitions and the like at all.
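For a typical Node app, for instance, the whole Dockerfile can be something like this (base image, port, and filenames are just an illustration):

```dockerfile
# Example Dockerfile for a small Node web app.
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between code changes
COPY package*.json ./
RUN npm ci --omit=dev
# Then copy the application code itself
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Compare that to an Ansible role that has to manage the Node version, the app user, the service unit, and log rotation on a shared host.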
Then playing with Kubernetes on a private project would only have résumé value for me.
[0] https://docs.ansible.com/ansible/latest/modules/docker_conta... (check other docker modules too)
If I really had to for one of these, I'd probably just do something at the loadbalancer to start routing users to the new container stack then shutdown the old ones, much as you might have in the pre-container days. I can just wait the old fashioned way (by sitting in my chair for a minute) for them to start.
You don't need to run a new cluster for every project. You can deploy multiple projects in a single cluster. I was running close to 5 different projects in a single cluster, backed by about 3-6 machines (machines added/removed on demand).
Kubernetes is basically like your own heroku. You can create namespaces for your projects. No scripts. You can deduce everything (how is a service deployed, whats the config, whats the arch) from the config files (yml)
> Is a single Nginx virtual host more complex or expensive than deploying a Nginx daemon set and virtual host in Kubernetes? I don't think so.
Yes, it is. I wonder if the author has actually tried setting this up themselves. I realise I had similar opinions before I had worked with Kubernetes, but after working with it, I cannot recommend it enough.
> When you do a change in your Kubernetes cluster in 6 months, will you remember all the information you have today?
Yes, why does the author think otherwise? Or, if this is a real argument, why does the author think their "ansible" setup would be at the top of their head? I had one instance where I had to bring a project back up on prod (it was a collection of 4 services + 2 databases, not including the LB) after 6-8 months of keeping it "down". Guess what: I just scaled the instances from 0 to 3; SSL was back, all services were back, everything was up and running.
This is not to say you won't have issues; I had plenty during the time I started trying it out. There is a learning curve, and please do try out the ecosystem multiple times before thinking of using it in production.
It is just my opinion after all. I'm just trying to share my thoughts :)
> Yes it is. I wonder if the author has actually tried setting this themselves.
I've used K8s for months in production, maintaining a few clusters at my previous job.
Avoid using a load balancer as they are quite pricey (although it will allow you to create and use auto-managed SSL certificates for free.)
Of course you will also pay for egress traffic.
The nicest part of Fargate is that:
* you can define your whole cluster using a docker-compose like format.
* you can manage your cluster using the ECS CLI. No extra tool needed.
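For illustration, the file the ECS CLI consumes looks roughly like a plain docker-compose.yml (service name, image, and log group here are made up):

```yaml
# Hypothetical compose-style definition, usable with `ecs-cli compose up`.
version: "3"
services:
  web:
    image: registry.example.com/myapp:1.0
    ports:
      - "80:3000"
    logging:
      driver: awslogs
      options:
        awslogs-group: myapp
        awslogs-region: us-east-1
        awslogs-stream-prefix: web
```

If you already have a compose file for local development, that's most of the cluster definition done.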
In my day-to-day work I am 100% dedicated to automating the Kubernetes cluster lifecycle, maintaining clusters, monitoring them, and creating tools around them. Kubernetes is a production-grade container orchestrator; it solves really difficult problems, but it brings some challenges too. All of its components work distributed across the cluster, including network plugins, firewall policies, the control plane, etc. So be prepared to understand all of it.
Don't get me wrong, I love Kubernetes and if you want to have some fun go for it, but don't use the HA features as an excuse to do it.
But overall, saying "NO" to rsync or Ansible for deploying your small project just because it's not fancy enough sounds to me like "Are you really going to the supermarket by car, when there are helicopters out there?"
Great article!
Containers (and thus Kubernetes) aren't the magical solution to every problem in the world. But they help, and the earlier you can get to an automated, consistent build/deploy process with anything that'll actually serve real customers, the better off you are. Personally, I'd rather design with containers in mind from day one, because it's what I'm comfortable with. There's nothing wrong with deploying code as a serverless-style payload, or even running on a standalone VM, but you need to start planning for how something should work in the real world as early as you can reasonably.
So, back to the point: I'm sure you couldn't deploy your app on Heroku if that's your requirement (because cedar-14 is deprecated and no longer available for new deployments), but if you seriously wanted to try containerizing it onto Kubernetes, and if you don't have other obstacles to 12-factor design that you're not prepared to tackle, then Hephy Workflow v2.19.4 might actually work for you.
https://teamhephy.com and https://docs.teamhephy.com
I'm sure this probably won't work for you, for reasons you may not have explained, but ... maybe you'd like to look?
I'm not doing a great job selling it, the one redeeming quality I've mentioned is that it runs an outdated stack that you need ;)
Every new thing that you add, adds complexity. If that thing interfaces with another, then there is complexity at the interfaces of both.
Modern tools that atomise everything reduce density (and thus complexity), but people aren't paying attention to the amount of abstractions they are adding and their cost.
It needs a certain scale before the overheads are worth it.
Unfortunately, the devops community has always wanted to promote itself as the only option for containers. Even though its tools were based on the LXC project, it did not explain the technical decisions and tradeoffs made, because it did not want users to think there are valid alternatives. This is the source of the fundamental confusion among users about containers.
Why are you using single process containers? This is a huge technical decision that is barely discussed, a non standard OS environment adds significant technical debt at the lowest point of your stack. Why are you using layers to build containers? Why not just use them as runtime? What are the tradeoffs of not using layers? What about storage? Can all users just wish away state and storage? Why are these important technical issues about devops and alternative approaches not discussed? Unless you can answer these questions you cannot make an informed choice.
There is a culture of obfuscation in devops. You are not merely using an Nginx reverse proxy or Haproxy but a 'controller', using networking or mounting a filesystem is now a 'driver'. So most users end up trying Kubernetes or Docker and get stuck in the layers of complexity when they could benefit from something straightforward like LXC.
Using something you are familiar with, even if it's just a 10-line bash script, a simple virtual private server, and adding an nginx config there, is usually faster than having to orchestrate everything. If you want to invest the time in setting up Kubernetes for all your personal projects, it would probably make sense.
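In that spirit, a 10-line deploy script can be sketched like this (the host, paths, and the nginx reload step are all placeholders; for safety the sketch defaults to printing the commands instead of running them):

```shell
#!/usr/bin/env bash
# Minimal rsync-based deploy sketch. HOST, SRC, and DEST are assumptions;
# set DRY_RUN=0 to actually execute instead of printing the commands.
set -euo pipefail

HOST="${HOST:-deploy@example.com}"   # assumed SSH target
SRC="${SRC:-./build/}"               # local build output
DEST="${DEST:-/var/www/myapp}"       # remote docroot

# Print the command in dry-run mode, otherwise execute it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi
}

run rsync -az --delete "$SRC" "$HOST:$DEST"
run ssh "$HOST" "sudo systemctl reload nginx"
```

That's the whole deploy pipeline: sync the files, reload the proxy. No control plane required.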
Basically, is it worth it? https://xkcd.com/1205/
At the office we don't work with Docker, containers, the cloud, etc. We run legacy ASP.NET 2.0 on-premise without any kind of automation (just a couple of us coordinating the releases and copy-pasting onto the customer's Windows Server 2008).
Kubernetes for personal projects? In my case, after 10 years of on-premise deployments, VMware, SQL clusters, web.config, IIS, ARR, and the rest of the related things: YES!
I absolutely want 3 hosts for less than $100, a GitLab account for $4, a free account on Cloudflare; code and deploy.
We are glad to hear that you like using GitLab!
Regarding the documentation, have you checked out the following doc? https://docs.gitlab.com/ee/user/project/clusters/index.html#...
Of course, if you're in a workplace on a project likely to see more than a few hundred simultaneous users in a given application, definitely look at what K8s offers.
Edit: as to deploys, get CI/CD working from your source code repository. GitLab, Azure DevOps (formerly VSTS), CircleCI, Travis and so many others are free to very affordable for this. Take the couple hours to get this working, and when you want to update, just update your source repo in the correct branch.
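For instance, a minimal GitLab CI pipeline along those lines might look like this (the image, build commands, and `$DEPLOY_HOST` variable are placeholders for whatever your stack actually needs):

```yaml
# .gitlab-ci.yml -- minimal build-and-deploy sketch
stages:
  - build
  - deploy

build:
  stage: build
  image: node:10            # substitute your runtime
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - build/

deploy:
  stage: deploy
  script:
    - rsync -az build/ "$DEPLOY_HOST":/var/www/myapp   # assumed target
  only:
    - master
```

Push to `master` and the pipeline rebuilds and redeploys; that's the couple-hours investment paying off on every subsequent update.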
But they're tiny, tiny things that are very personal (i.e. they have 1 user - me)
If you're getting to the point where you need to scale things using a Kubernetes cluster or whatever, it seems to me like that thing has graduated from "personal project" to an actual product that needs the features of Kubernetes, like resilience and so on.
I mean, I'd love the idea of having a Kubernetes cluster to throw some things onto, but I really don't have the patience to set it all up right now; it seems like way too much cost and effort.
Like everything, there are tradeoffs. If there were a fairly easy way to do a one-node Kubernetes setup (say, Minikube), I would probably just go that route. One doesn't have to use the full feature set of Kubernetes to get one or two things that are advantageous.
As it is, I set up Minikube for the dev machines for the team I am on. I might consider Kubernetes for my personal side project if I knew Minikube would do well on machines under 1 GB of memory (it doesn't, really).
The pre-emptible VMs that cost less than $5 are interesting, and I might do something like that.
For anybody who is interested in understanding these basic building blocks, I decided to write https://vpsformakers.com/.
If there was a Jailfile equivalent for FreeBSD and a command-line tool with the same interfaces as docker, namely `docker run --rm -it ...`, I might be staying on FreeBSD.
No Kubernetes involved - just a web interface to run containers and install Dockers as "apps" on your server. And Unraid is Linux; you can, but don't need to, tinker.
Unraid is how I started using Dockers and became happy friends with my home server again. (tm)
The problems one _actually_ has on a personal project are indeed solved with simple tools like rsync.
So I'm calling it quits for now. Just running the cluster requires a small ops team.
In case you are using GKE, you actually need two ingresses to support IPv6 + IPv4
This adds up to something like 10 times the cost of a single droplet. For personal projects this seems kind of wasteful to me.
I'd argue that a lot of the complexity people find in Kubernetes is essential when you consider what it takes to run an application in any kind of robust manner. Take the simplest example -- reverse proxying to an instance of an application, a process (containerized or not) that's bound to a local port on the machine. If you want to edit your nginx config manually to add new upstreams when you deploy another process, and then reload nginx, be my guest. If you find and set up tooling that helps you do this by integrating with nginx directly or with your app runtime, that's even better. Kubernetes solves this problem once and for all, consistently, for a large number of cases, regardless of whether you use haproxy, nginx, traefik, or whatever else for your "Ingress Controller". In Kubernetes, you push the state you want your world to be in to the control plane, and it makes it so or tells you why not.
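To illustrate that declarative flow, a Service plus Ingress replacing hand-edited nginx upstreams might look roughly like this (all names and the hostname are invented; the Ingress `apiVersion` depends on your cluster version):

```yaml
# Instead of editing upstreams by hand, declare the desired routing
# and let the Ingress Controller reconcile it.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                 # matches the labels on the app's pods
  ports:
    - port: 80
      targetPort: 8080         # the port the process binds locally
---
apiVersion: extensions/v1beta1 # networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp
              servicePort: 80
```

A `kubectl apply -f` of that file is the whole "add an upstream and reload" workflow; whichever Ingress Controller you run takes it from there.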
Of course, the cases where Kubernetes might not make sense are many:
- Still learning/into doing very manual server management (i.e. systemd, process management, user management) -- ansible is the better pick here
- Not using containerization (you really kinda should be at this point, if you read past the hype train there's valuable tech/concepts below)
- Not interested in packaged solutions for the issues that Kubernetes solves in a principled way, i.e. you could solve them relatively quickly/well ad hoc.
- Launching/planning on launching a relatively small amount of services
- Are running on a relatively small machine (I have a slightly beefy dedicated server, so I'm interested in efficiently running lots of things).
A lower-risk/simpler solution for personal projects might be something like Dokku[0], or Flynn[1]. In the containerized route, there's Docker Swarm[2] +/- Compose[3].
Here's an example -- I lightly/lazily run https://techjobs.tokyo (which is deployed on my single-node k8s cluster), and this past weekend I put up https://techjobs.osaka. The application itself was generically written so all I had to do for the most part was swap out files (for the front page) and environment variables -- this meant that deploying a completely separate 3-tier application (to be fair the backend is SQLite), only consisted of messing with YAML files. This is possible in other setups, but the number of files and things with inconsistent/different/incoherent APIs you need to navigate is large -- systemd, nginx, certbot, docker (instances of the backend/frontend). Kubernetes simplified deploying this additional almost identical application in a robust manner massively for me. After making the resources, bits of kubernetes got around to making sure things could run right, scale if necessary, retrieve TLS certificates, etc -- all of this is possible to set up manually on a server but I'm also in a weird spot where it's something I probably won't do very often (making a whole new region for an existing webapp), so maybe it wouldn't be a good idea to write a super generic ansible script (assuming I was automating the deployment but not with kubernetes).
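The per-region swap described above might reduce to something like the following Deployment fragment (the image path and environment variable names here are invented for illustration):

```yaml
# The Osaka deployment reuses the same image as Tokyo;
# only the metadata and environment differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: techjobs-osaka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: techjobs-osaka
  template:
    metadata:
      labels:
        app: techjobs-osaka
    spec:
      containers:
        - name: backend
          image: registry.example.com/techjobs-backend:latest  # shared image
          env:
            - name: REGION             # hypothetical variable
              value: osaka
            - name: SITE_URL           # hypothetical variable
              value: https://techjobs.osaka
```

Copy the Tokyo manifests, change a handful of labels and values, `kubectl apply`, and the second 3-tier app is up.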
Of course, Kubernetes is not without its warts -- I have more than once found myself in a corner off the beaten path, thoroughly confused about what was happening, and sometimes it took days to fix. But that's mostly because of my penchant for relatively new/young/burgeoning technology (for example, kube-router recently instead of canal for routing) and the lack of business value in my projects (if my blog goes down for a day, I don't really mind).
[0]: http://dokku.viewdocs.io/dokku
[1]: https://github.com/flynn/flynn/
I assume "any" should be "every"?
Yep! For anything which goes beyond the initial viability test, I make an OS package. SmartOS has SMF, so integrating automatic startup/shutdown is as easy as delivering a single SMF manifest and running svccfg import in the package postinstall. For the configuration, I just make another package which delivers it, edits it dynamically and automatically in postinstall if required, and calls svcadm refresh svc://...
It's easy. It's fast. The OS knows about all my files. I can easily remove it or upgrade it. It's clean. When I'm done, I make another ZFS image for imgadm(1M) to consume and Bob's my uncle.
No, the author of the Kubernetes article so completely and utterly missed the point that it's not even funny: none of those Kubernetes complications are necessary if one runs SmartOS and, optionally, as a bonus, Triton.
Since doing something the harder and more complicated way for the same effect is irrational, which presumably the author of the Kubernetes article isn't, I'm compelled to presume that he either just didn't know about SmartOS and Triton, or is more interested in boosting his resume than in researching the most robust and simplest technology. If resume boosting with Kubernetes is his goal, then his course of action makes sense, but the company where he works won't get the highest reliability and lowest complexity it could. So good for him, suboptimal for his (potential) employer. And that's a concern too, a highly professional one. I'm looking at this through the employer's eyes, but then again, I really like sleeping through entire nights without an incident. A simple and robust architecture is a big part of that. Resume boosting isn't.
[1] https://www.joyent.com/content/11-containerpilot/chart.15076..., https://docs.joyent.com/private-cloud
I haven't heard of Joyent or SmartOS in years! I am super surprised to hear of anyone recommending it today as a competitor to Kubernetes, and I have no facts or deep understanding of that platform so I won't belabor you with an argument about how Kubernetes is better. (I can't say if it is or isn't.) It's just not in the same ballpark. I'm glad it works for you. I'm especially glad to hear about another option (that we could potentially replace our bespoke deployments with), because the more of these things I know about, the louder I can clamor to upper management about the fact that we're not using any of these technologies yet, and we should be (to sleep through the night!)
I learned about Kubernetes through Deis Workflow. It took years to understand Kubernetes from end-to-end, and I was already a container veteran when Deis moved to k8s. I resisted! I caved. I came over, now I have years of experience with Kubernetes, and I can't say I'd recommend anything else. "Those complications" are all very hard to get over, but then ... you get over them! And largely don't have to do that again.
If you are on Kubernetes, then you are not locked in to any cloud provider (unless you have opted into another technology that made you locked in.) I can't say the same for Triton.
For the purposes of disclosure, I am a member of Team Hephy, the open source Deis Workflow fork. (Deis Workflow is EOL and Hephy is the continuation.) Workflow is how I learned Kubernetes, and I would still recommend it highly to anyone else that wants to learn Kubernetes. But I will not kid anyone into thinking it's going to happen overnight. (With Workflow though, you can absolutely start using it productively in about an hour.)[3]
[1]: https://docs.google.com/spreadsheets/d/1LxSqBzjOxfGx3cmtZ4Eb...
[2]: https://www.cncf.io/certification/software-conformance/
[3]: https://web.teamhephy.com or https://blog.teamhephy.info/#install
...but I would be beholden to GNU/Linux and have to do the same thing I do with SmartOS in a far more complex way, built on an operating system substrate which cannot provide the reliability that I need to be able to sleep through my nights without an incident.
Kubernetes, Docker, Linux are a time sink that I can never get back, on things which Solaris solved far better and reliably 20 years ago. I don't want to go from a Pegasus to a donkey.