This generally means running a GPU all the time. My preference is to use my usual cloud, GCP.
FWIW, I'm using the Vertex AI API rather than running an LLM all the time. The ToS includes data-privacy terms, so I'm not worried about them training on my data, and it's far cheaper and better than self-hosting a lower-quality model. When I get around to fine-tuning, they have options for that, but you can get pretty far with prompts, RAG, and agents.