I think even stuff like the storage model can change who deems a tool acceptable.
A decade or two ago, I liked Ganglia, which used a round-robin database. You'd lose granular data over time but keep aggregate data: one-minute metrics might be available for a week, but after that you'd have 15-minute metrics stored for a month, and then maybe hourly metrics for a year. Disks were expensive and small, so this model made monitoring feasible for some people. Other tools just dropped old data, so you had to choose between long retention ($$$) and only keeping recent data.
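The idea behind that tiered retention can be sketched in a few lines. This is just an illustration of the rollup step, not how Ganglia or rrdtool actually store data; the function name and tier sizes are made up for the example:

```python
from statistics import mean

def downsample(points, factor):
    """Roll up consecutive groups of `factor` samples into their mean,
    the way an RRD folds fine-grained data into a coarser archive."""
    return [mean(points[i:i + factor]) for i in range(0, len(points), factor)]

# One hour of one-minute samples...
minute = [float(i) for i in range(60)]
# ...becomes four 15-minute averages...
quarter_hour = downsample(minute, 15)
# ...and then a single hourly average. Each tier is smaller,
# so total storage stays bounded no matter how long you retain.
hourly = downsample(quarter_hour, 4)
```

The storage win is that each coarser tier is a fixed, small fraction of the one below it, so the database file never grows.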
Storage is now quite cheap, and we're storing far more metrics, with deep insight into application behavior. With many tools you're back to storing as much data as your budget allows, and the focus has shifted to building queries over the data, turning those into alerts, and keeping it all performant.
Open source often means self-hosting. If you've got an Ops team, sure, go for it. Otherwise, keep your engineers focused on your app and use a managed tool.