If you'd like greater accuracy, UTC isn't a great starting point, because UTC's leap seconds complicate getting an accurate estimate of UT1. It's hard to tell whether your local idea of UTC has had the latest leap second applied, has had false leap seconds applied, or (worse) accidentally came from a leap-smeared source, which can't be backed out to recover UT1.
If you're going to produce a more accurate estimate of UT1, either by interpolating IERS Bulletin B predictions or with a model based on the daily USNO ultra-rapid UT1 VLBI measurements, the leap seconds don't do anything to help you, but they can mess up your calculations, because you need to back them out correctly and consistently.
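To make the bookkeeping concrete, here's a rough Python sketch of what "backing them out consistently" looks like. The leap-second table and the Bulletin-B-style DUT1 samples below are placeholders I made up for illustration; a real implementation would load the full, current tables from IERS data files.

    import bisect
    from datetime import datetime, timedelta, timezone

    # Hypothetical excerpt of the leap-second table: (UTC instant, TAI-UTC in seconds).
    # A real implementation would load the complete, up-to-date table from IERS data.
    LEAP_TABLE = [
        (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
        (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
    ]

    # Hypothetical Bulletin-B-style samples: (UTC instant, UT1-UTC in seconds).
    DUT1_SAMPLES = [
        (datetime(2017, 6, 1, tzinfo=timezone.utc), 0.40),
        (datetime(2017, 7, 1, tzinfo=timezone.utc), 0.37),
    ]

    def tai_minus_utc(utc):
        """Accumulated leap-second offset (TAI-UTC) in effect at a UTC instant."""
        # Assumes utc is not earlier than the first table entry.
        i = bisect.bisect_right([t for t, _ in LEAP_TABLE], utc) - 1
        return LEAP_TABLE[i][1]

    def ut1_minus_tai(t, dut1):
        """Convert a published UT1-UTC value into the leap-free quantity UT1-TAI."""
        return dut1 - tai_minus_utc(t)

    def estimate_ut1(utc):
        """Interpolate UT1-TAI (not UT1-UTC) so a leap second falling between the
        two samples can't introduce a spurious 1-second jump, then rebuild UT1."""
        (t0, d0), (t1, d1) = DUT1_SAMPLES
        x0, x1 = ut1_minus_tai(t0, d0), ut1_minus_tai(t1, d1)
        frac = (utc - t0) / (t1 - t0)
        x = x0 + frac * (x1 - x0)
        # UT1 = UTC + (TAI-UTC) + (UT1-TAI)
        return utc + timedelta(seconds=tai_minus_utc(utc) + x)

Note that the leap seconds contribute nothing to the UT1 estimate itself; they only appear so they can be cancelled back out, and getting that cancellation wrong is exactly the kind of bug I mean.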
So essentially the practice of leap seconds helps applications that need UT1 to better than an hour or two (otherwise just using TAI with a fixed offset, or TAI plus a static linear correction, is good for thousands of years), but it hurts applications that need subsecond UT1 accuracy, hurts anyone who needs consistent or accurate durations, and creates a lot of software bugs, including ones that can show up when there hasn't actually been a leap second.
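By "TAI plus a static linear correction" I mean something as simple as the sketch below. The offset and drift constants are illustrative guesses rather than fitted values, and datetime is just being used as a convenient linear time carrier here.

    from datetime import datetime, timedelta, timezone

    # Illustrative constants, not fitted values: UT1-TAI at a chosen epoch plus an
    # assumed long-term average drift of Earth rotation time against atomic time.
    EPOCH = datetime(2020, 1, 1, tzinfo=timezone.utc)
    UT1_MINUS_TAI_AT_EPOCH = -37.2   # seconds; roughly right for early 2020
    DRIFT_SECONDS_PER_YEAR = -0.3    # assumed average slope, for illustration only

    def crude_ut1_from_tai(tai):
        """Estimate UT1 from TAI with a fixed offset plus a linear drift term.

        The real UT1-TAI curve wanders and has a quadratic secular term, so this
        is only suitable when an error of up to an hour or two over very long
        spans is acceptable."""
        years = (tai - EPOCH) / timedelta(days=365.25)
        return tai + timedelta(seconds=UT1_MINUS_TAI_AT_EPOCH
                                       + DRIFT_SECONDS_PER_YEAR * years)

No leap-second table, no jumps, and it degrades gracefully instead of failing abruptly when a table goes stale.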
I would argue that today the number of applications that need UT1 at all is much smaller and less significant than the number that need subsecond-consistent durations or timestamps. And of the applications that do want UT1, most either don't need leap seconds at all (e.g. TAI, or TAI plus the simplest static linear correction, is enough) or would prefer better than 1-second accuracy, where at best leap seconds don't help, and in practice they add a lot of complexity and still sometimes cause failures.