What k8s brings to the table is a level of standardization. It's the difference between bringing some level of robotics to manual loading and unloading of classic cargo ships, vs. the fully automated containerized ports.
With k8s, you get structure: you can wrap an individual program's idiosyncrasies into a container that exposes a standard interface. That standard interface then lets you easily drop it onto a server, with topologies, resources, networking, etc. all handled through common interfaces.
I'd been saying that for a long time, but I only recently understood just how much work k8s can "take away", when I foolishly said "eh, it's only one server, I'll run this the classic way." I then spent 5 days on something that could have been handled within an hour on k8s, because k8s abstracts away HTTP reverse proxies, persistent storage, and load balancing in general.
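To make that concrete, here's a rough sketch of what "an hour on k8s" looks like for those three things. The names (`myapp`, `myapp.example.com`, image, ports, storage sizes) are all made up for illustration; the point is that a reverse proxy (Ingress), persistent storage (PersistentVolumeClaim), and load balancing (Service spreading traffic over replicas) are each a few lines of declarative config instead of hand-configured daemons:

```yaml
# Hypothetical app; k8s load-balances across the 3 replicas via the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: example.com/myapp:1.0   # placeholder image
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /var/lib/myapp
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: myapp-data
---
# Persistent storage: the cluster's storage provisioner handles the rest.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Stable internal endpoint that load-balances over the pods above.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: { app: myapp }
  ports:
    - port: 80
      targetPort: 8080
---
# HTTP reverse proxy: routing, virtual hosts, etc. via the ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port: { number: 80 }
```

Doing the equivalent "the classic way" means setting up and wiring together nginx (or similar), mount points, and some load-balancing story by hand, which is exactly where the days go.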
Now I'm thinking of deploying k8s at home, not to learn, but because I know it's easier for me to deploy Nextcloud, or an ebook catalog, or whatever, using k8s than by setting up a more classical configuration management system and dealing with the inevitable drift over time.