I'll say that this is a good point, especially because if you don't use containers or a similar solution (even something like shipping VM images, for all I care), you'll end up with environment drift - unless your application is a statically compiled executable with no system dependencies. Otherwise you depend on something like a JDK/.NET/Python/Ruby runtime or, worse yet, an application server like Tomcat, all of which can have different versions. Worse still if you need to install packages on the system whose versions you haven't pinned (e.g. anything installed through apt/yum rather than package.json, Gemfile, requirements.txt and so on).
That said, even when you don't use containers, you can still benefit from some pretty nice suggestions that will help make the software you develop easier to manage and run: https://12factor.net/
I'd also suggest that you have a single mechanism for managing everything that you need to run, so if it's not containers and an orchestrator of some sort, at least write systemd services or an equivalent for every process or group of processes that should be running.
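For what it's worth, a minimal systemd unit along those lines might look something like this (the service name, user, and paths are hypothetical, just to sketch the idea):

```ini
# /etc/systemd/system/myapp.service -- illustrative unit for a Java app
[Unit]
Description=My application
After=network.target

[Service]
User=myapp
ExecStart=/usr/bin/java -jar /opt/myapp/app.jar
# Restart the process automatically if it dies
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` gives you one consistent way to start, stop, and inspect every process on the box.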
Disclaimer: I still think that containers are a good idea, just because of how much of a dumpster fire it is to manage different OSes, their packages, language runtimes, application dependencies, application executables, port mappings, application resource limits, configuration, logging and other aspects. Kubernetes, perhaps a bit less so, although when it works, it gets the job done... passably. Then again, Docker Swarm felt better to me for smaller deployments (a better fit between what you want to do and the resources you have), whereas Nomad was also pretty nice, even if HCL sadly doesn't use the Docker Compose specification.
So IMO it's perfectly possible to run Java applications without containers. You would need to think about network ports, about resource limits, but those are not hard things.
And Tomcat even provides zero-downtime upgrades; it's not that easy to set up, but when it works, it does work.
Now that I've got some experience with Kubernetes, I'd reach for it every time, because it's very simple and easy to use. But that requires going through some learning curve, for sure.
The best and unbeatable thing about containers is that there are plenty of ready-made ones. I have no idea how I would install Postgres without apt. I guess I could download binaries (from where?), put them somewhere, read the docs, craft a config file with the data dir pointing somewhere else, and so on. That's doable, but it takes time. I can docker run it in seconds, and that's time saved. Another example is ingress-nginx + cert-manager. It would take me hours if not days to craft the set of scripts and configs needed to replicate something that's available almost out of the box in k8s, well tested, and just works.
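The "seconds, not hours" claim is roughly this (version tag, password, and volume name here are placeholders, not a recommended production setup):

```shell
# Run Postgres with persistent data and an exposed port.
docker run -d \
  --name pg \
  -e POSTGRES_PASSWORD=changeme \
  -v pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16
```

All of the "where do the binaries go, where does the data dir live" decisions are already baked into the image.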
I've seen something similar in projects previously, it never worked all that well.
While the idea of shipping one archive with everything in it is pretty good, people don't want to include full JDK and Tomcat installs with each software delivery. With containers you get the benefit of layer re-use when those layers haven't changed, while still having the confidence that what you tested is what you'll ship: shipping 100 app versions with the same JDK + Tomcat version means re-used layers instead of 100 copies in the archives. And if you don't ship everything together, but merely suggest that release X should run on JDK version Y, the probability of someone not following those instructions at least once approaches 100% with every next release.
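Layer re-use falls out of Dockerfile ordering: put the stable, heavy parts first and the app last. A sketch (the base image tag and file names are illustrative):

```dockerfile
# The base image (JDK + Tomcat) is one set of layers, shared by every
# app version built on top of it -- pulled and stored once per node.
FROM tomcat:9.0-jdk17

# Only this final layer changes between releases, so shipping 100
# versions re-uses the JDK/Tomcat layers instead of copying them.
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war
```

As long as the `FROM` line stays the same, registries and nodes only transfer the last layer per release.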
Furthermore, Tomcat typically needs custom configuration for the app server, as well as configuration for the actual apps. This means that you'd need to store the configuration in a bunch of separate files and then apply (copy) it on top of the newly delivered version. But you can't really do that blindly, so you'd need to use something like Meld to check whether the newly shipped default configuration includes something that your old custom configuration doesn't (e.g. something new in web.xml or server.xml). The same applies to something like cacerts within your JDK install, if you haven't bothered to set up custom files separately.
Worse yet, if people aren't really disciplined about all of this, you'll end up with configuration drift over time - your dev environment will have configuration A, your test environment will have configuration B (which will be sort of like A), and staging or prod will have something else. You'll be able to ignore some of those differences until everything goes horribly wrong one day, or maybe you'll just get degraded performance without a clear reason for it.
> So IMO it's perfectly possible to run Java applications without containers. You would need to think about network ports, about resource limits, but those are not hard things.
This is only viable/easy/not brittle when you have self-contained .jar files, which admittedly are pretty nice! Though if shipping the JDK with each delivery isn't in the cards (for example, because of space considerations), that's not safe either - I've seen performance degrade 10x because a JDK patch release differed between two environments, all because the JDK was managed through the system packages.
Resource limits are generally doable, though Xms and Xmx lie to you (they only cap the heap, not the JVM's total memory use); you'd need systemd slices or an equivalent for hard resource limits, which I haven't seen anyone seriously bother with - even though without them you're at risk of the entire server/VM becoming unresponsive should a process go rogue for whatever reason (e.g. CPU at 100%, which is arguably worse than an OOM kill caused by a bad memory limit).
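To make the "systemd slices or an equivalent" point concrete, cgroup-backed hard limits can be set in a drop-in for the unit itself (service name is hypothetical; the directives are standard systemd resource-control options):

```ini
# /etc/systemd/system/myapp.service.d/limits.conf -- illustrative drop-in
[Service]
# Hard memory cap enforced by the kernel; unlike -Xmx this covers the
# whole process (heap, metaspace, threads, native allocations).
MemoryMax=2G
# At most 1.5 cores' worth of CPU time, so a runaway process can't
# starve the rest of the machine.
CPUQuota=150%
```

This is essentially what container runtimes do for you under the hood.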
Ports are okay when you are actually in control of the software and nothing is hardcoded. Then again, another aspect is being able to run multiple versions of software at the same time (e.g. different MySQL/MariaDB releases for different services/projects on the same node), which most *nix distributions are pretty bad at.
> And Tomcat even provides zero-downtime upgrades; it's not that easy to set up, but when it works, it does work.
I've seen this attempted, but it never worked properly - the codebases might not have been great, but those redeployments and that integration with Tomcat always led to either memory leaks or odd cases of the app server breaking. That's why I personally prefer the approach of killing the entire thing, app server and app together, and doing a restart (especially nice with embedded Tomcat/Jetty/Undertow), using health checks to route traffic instead.
I think doing these things at the app server level is generally just asking for headaches, though the idea of being able to do so is nice. Then again, I don't see servers like Payara (a GlassFish derivative) in use anymore, so I guess Spring Boot with embedded Tomcat largely won, in combination with other tools.
> Now that I've got some experience with Kubernetes, I'd reach for it every time, because it's very simple and easy to use. But that requires going through some learning curve, for sure.
I wouldn't claim that Kubernetes is simple if you need to run your own clusters, though projects like K3s, K0s and MicroK8s are admittedly pretty close.
> The best and unbeatable thing about containers is that there are plenty of ready-made ones. I have no idea how I would install Postgres without apt. I guess I could download binaries (from where?), put them somewhere, read the docs, craft a config file with the data dir pointing somewhere else, and so on. That's doable, but it takes time. I can docker run it in seconds, and that's time saved. Another example is ingress-nginx + cert-manager. It would take me hours if not days to craft the set of scripts and configs needed to replicate something that's available almost out of the box in k8s, well tested, and just works.
This is definitely a benefit!
Though for my personal needs, I build most of my own containers (funnily enough, excluding databases, but that's mostly because I'm lazy) from a common Ubuntu base. Because of layer re-use, I don't even need tricks like copying files directly, but can use the OS package manager (though I clean up the package cache afterwards) and pretty approachable configuration methods: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...
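The apt-plus-cleanup pattern being described might look roughly like this (the package names are just examples, not taken from the linked post):

```dockerfile
FROM ubuntu:24.04

# Install through the OS package manager, then clean the apt cache in
# the SAME layer, so the cache never ends up baked into the image.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```

Doing the cleanup in a separate `RUN` wouldn't help, since the earlier layer with the cache would still be part of the image.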
In addition, my ingress is just a containerized instance of Apache running on my nodes, with Docker Swarm instead of Kubernetes: https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-... In my case, the distinction between the web server running inside of a container and outside of a container is minimal, with the exception that Docker takes care of service discovery for me, which is delightfully simple.
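The service discovery being praised here is basically that containers attached to the same network can reach each other by service name, e.g. in a (hypothetical) Compose/Swarm stack:

```yaml
# Illustrative docker-compose.yml fragment: "ingress" can proxy to
# http://app:8080 by name, because Docker's internal DNS resolves "app".
services:
  ingress:
    image: httpd:2.4
    ports:
      - "80:80"
  app:
    image: registry.example.com/myapp:latest
```

No hand-maintained hosts files or port bookkeeping; the names in the stack file are the addresses.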
I won't say that the ingress abstraction in Kubernetes isn't nice, though you can occasionally run into configurations that aren't as easy as they should be: configuring certs for Apache/Nginx/Caddy/Traefik directly has numerous tutorials and examples online, versus trying to feed your wildcard TLS cert into a Traefik ingress with all of the configuration needed for your K3s cluster to use it as the default certificate for the apps you want to expose. Not that other ingresses aren't great (e.g. Nginx), it's just that you're buying into additional complexity, and I've personally also had cases where removing and re-adding an ingress hangs because some resource cleanup in Kubernetes fails to complete.
I guess what I'm saying is that it's nice to use containers for whatever the strong parts are (for example, the bit about being able to run things easily), though ideally without ending up with an abstraction that might eventually become leaky (e.g. using lots of Helm charts that have lots of complexity hiding under the hood). Just this week I had CI deploys starting to randomly fail because some of the cluster's certificates had expired and kubectl connections wouldn't work. A restart of the cluster systemd services helped make everything rotate, but that's another thing to think about, which otherwise wouldn't be a concern.
But it's a pretty objective observation that manually managed single machines don't scale as well as automation.
Containers are a good common denominator because you essentially start with the OS, and then there's a file that automates installing further dependencies and building the artifact, which typically includes the important parts of the runtime environment.
- They're stupidly popular, so it basically nullifies the setup steps.
- Once set up, by combining both the OS layers and the app, they solve more of the problem and are therefore slightly more reliable.
- They're self-documenting as long as you understand bash, docker, and don't do weird shit like build an undocumented intermediary layer.
Infrastructure as Code does the same thing for the underlying infra layers, and Kubernetes is one of the nicer / quicker implementations of this, but it requires that you have Kubernetes available. Together they largely solve the "works on my PC" problem.