Unix, for example. That's a pretty big example. Look at gettimeofday(). Completely incapable of handling leap seconds in any reasonable way, unless you use smoothing.
Windows, for example. That's another pretty big example. It just ignores the leap second bit and steps the clock backwards at the next synchronization.
I'm not even talking about bugs here; these are straight-up design flaws.
There are actually two reasonable ways of handling leap seconds with gettimeofday(). The first, which is in actual use by a range of people, is to define that the kernel time is a TAI-10 count, not a UTC count; Arthur David Olson's "right" timezone system does this. The second is to allow the microseconds count to go up to 2,000,000 during the inserted second.
I wonder how many clients handle that correctly. How many log files will end up with timestamps like "23:59:59.1500" instead of "23:59:60.500"? If you are going to break an API, you might as well make a new one instead.
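For reference, here is roughly what a client that does get it right has to do: a minimal sketch, assuming the kernel really lets tv_usec run past 999,999 during the inserted second (format_stamp() is a hypothetical helper, not a standard call):

```c
/*
 * Minimal sketch of the second convention: during the inserted second
 * the kernel keeps tv_sec at 23:59:59 and lets tv_usec run from
 * 1,000,000 to 1,999,999.  format_stamp() is a hypothetical helper.
 */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

static void format_stamp(const struct timeval *tv, char *buf, size_t len)
{
    struct timeval t = *tv;
    int leap = 0;

    /* Fold an overflowing microsecond count into a ":60" second. */
    if (t.tv_usec >= 1000000) {
        t.tv_usec -= 1000000;
        leap = 1;
    }

    struct tm tm;
    gmtime_r(&t.tv_sec, &tm);
    if (leap)
        tm.tm_sec = 60;          /* display the inserted second as :60 */

    char hms[16];
    strftime(hms, sizeof hms, "%H:%M:%S", &tm);
    snprintf(buf, len, "%s.%03ld", hms, (long)(t.tv_usec / 1000));
}

int main(void)
{
    /* 2016-12-31 23:59:59 UTC, halfway through the leap second. */
    struct timeval tv = { .tv_sec = 1483228799, .tv_usec = 1500000 };
    char buf[32];

    format_stamp(&tv, buf, sizeof buf);
    puts(buf);                   /* "23:59:60.500", not "23:59:59.1500" */
    return 0;
}
```

Naive code that just prints tv_usec/1000 with a plain %ld is exactly what produces the "23:59:59.1500" lines.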
And if you replace a simple API with one that requires distributing leap-second tables…
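To make that concrete, here is a minimal sketch of converting a TAI-10 count back to UTC; the two-entry table is deliberately incomplete and purely illustrative, whereas real code has to ship and keep updating the full, growing list from tzdata:

```c
/*
 * Minimal sketch of the first convention: tv_sec is a TAI-10 count
 * ("right" zoneinfo style), so turning it back into UTC needs a
 * leap-second table.  The table below is illustrative only.
 */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

struct leap {
    time_t utc_after;   /* POSIX (UTC) time from which the count applies */
    int    count;       /* leap seconds inserted since 1972 up to then */
};

static const struct leap leaps[] = {
    { 1435708800, 26 },  /* 2015-07-01: 26 leap seconds inserted so far */
    { 1483228800, 27 },  /* 2017-01-01: 27 leap seconds inserted so far */
};

/* Convert a TAI-10 second count into a POSIX (UTC) second count.
 * The inserted second itself has no POSIX representation and
 * collapses onto the following second here. */
static time_t right_to_utc(time_t right)
{
    int count = 0;
    for (size_t i = 0; i < sizeof leaps / sizeof leaps[0]; i++)
        if (right - leaps[i].count >= leaps[i].utc_after)
            count = leaps[i].count;
    return right - count;
}

int main(void)
{
    /* 2017-01-01 00:00:30 UTC, expressed as a TAI-10 count. */
    time_t right = 1483228830 + 27;

    time_t utc = right_to_utc(right);
    struct tm tm;
    char buf[32];

    gmtime_r(&utc, &tm);
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &tm);
    puts(buf);                   /* "2017-01-01 00:00:30" */
    return 0;
}
```

Every consumer of that API inherits the table-maintenance problem, which is exactly the objection.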