The setup took a couple of months to research, build, and make "perfect". Ongoing maintenance, however, takes little time (under an hour a week, averaged out). Every three years I build new servers and place them into service (2 to 3 weeks of work), and I do periodic hardware maintenance roughly every three months (typically a quarter of a day).
Due to the cost savings, we are also able to afford quite a bit of redundancy: dual PSUs, SSDs in RAID 10 on non-SAN servers, RAID-Z2/3 on SAN servers, offsite backups, complete server redundancy, spare servers ready to be slotted in (I live an hour from the colo), spare parts on hand, and even multiple physical colos.
If components are selected carefully (e.g. sharing components across server roles), regular maintenance is performed, and redundancy is ensured at the per-component, per-server, and per-datacenter level, it's not very time-intensive or costly.
I am a software engineer by trade, but love the ins and outs of hardware/ops. As such, everything that can be is automated and scripted. I can raise or move instances in minutes, much like EC2 (I currently use XCP).
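To give a feel for what "raise an instance in minutes" looks like on XCP, here is a minimal sketch using the standard `xe` CLI: clone a golden template, start the VM, and live-migrate it to another host in the pool. The template, VM, and host names are hypothetical placeholders, and the script only echoes the commands when `xe` isn't present, so it doubles as a dry run.

```shell
#!/bin/sh
# Sketch: provisioning and moving a VM on XCP with the xe CLI.
# TEMPLATE, NEWVM, and the host name below are hypothetical; adjust to your pool.
set -e

TEMPLATE="debian-base"   # hypothetical golden template to clone from
NEWVM="web-03"           # hypothetical name for the new instance

run() {
    # Print each xe command; execute it only if xe is installed (dry run otherwise).
    echo "+ $*"
    if command -v xe >/dev/null 2>&1; then
        "$@"
    fi
}

# Clone the template into a new VM, then start it.
run xe vm-clone vm="$TEMPLATE" new-name-label="$NEWVM"
run xe vm-start vm="$NEWVM"

# Live-migrate the running VM to another host in the pool (hypothetical host).
run xe vm-migrate vm="$NEWVM" host="xcp-host-02" live=true
```

In practice this kind of script is what makes a small colo setup feel cloud-like: templates plus a few `xe` one-liners replace the EC2 console.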
Even counting the research time, it still saves roughly $100k every three years.