Yeah, that's what I used. It comes with some providers out of the box, but they strike me as toys. For example, it gives you support for node-local volumes, but I don't really want to have to rely on my pods being scheduled on specific nodes (the nodes with the data). Even if you're okay with that, you still have to solve data redundancy and/or backup yourself. The Rancher folks have a solution for this in the form of Longhorn, so maybe we can expect that to be integrated into k3s in the future.

There's also no external DNS support at all, and IIRC the default load-balancer provider (Klipper LB, which itself seems to be not very well documented) assigns node IPs to services (at random, as far as I can tell). That makes it difficult to bind a DNS name to a service without something dynamically updating the records whenever k8s changes the service's external IP address, and even then it's not a recipe for "high availability", since DNS caches will be stale for some period of time.

Basically, k8s is still immature for bare metal; distributions will catch up in time, but for now a lot of the hype outpaces reality.
There's MetalLB, which lets you announce routes over BGP to upstream routers. Another option is to run your frontend as a DaemonSet on every node and set up a NodePort. Or just add every frontend node's IP to DNS. Obviously all of these are highly non-standard, since they depend on your specific setup.
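For the MetalLB route, a minimal BGP-mode sketch might look like the following (the ASNs, peer address, and address pool are placeholders for whatever your upstream router actually uses; on k3s you'd also want to disable the bundled ServiceLB, e.g. with `--disable servicelb`, so the two don't fight over services):

```yaml
# Peer with the upstream router (placeholder ASNs/address).
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: upstream-router
  namespace: metallb-system
spec:
  myASN: 64512
  peerASN: 64513
  peerAddress: 10.0.0.1
---
# Pool of IPs MetalLB may assign to LoadBalancer services.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: service-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.0/24
---
# Announce the pool's addresses to the configured BGP peers.
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: advertise-pool
  namespace: metallb-system
spec:
  ipAddressPools:
    - service-pool
```

With that in place, a `Service` of `type: LoadBalancer` gets a stable IP from the pool, which sidesteps the "random node IP" problem upstream: DNS can point at the pool address rather than chasing nodes.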
Yes, to be clear, these problems can be worked around (although many such workarounds have their own tradeoffs that must be considered in the context of the rest of your stack as well as your application requirements); I was observing that the defaults are not what I would consider to be production-ready.