Having every second lengthened by a non-trivial amount (~0.001%) on some days, and not on others, will produce subtly wrong results in all kinds of fields, from manufacturing to astronomy.
This was a bad, bad choice by Google.
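A back-of-the-envelope check of that "~0.001%" figure, assuming a linear smear of one extra second over a 24-hour window (the window length is the one Google documents; the script itself is just illustrative arithmetic):

```python
# One leap second spread linearly over a 24-hour smear window.
SMEAR_WINDOW_S = 24 * 60 * 60          # 86400 s, noon to noon

freq_error = 1.0 / SMEAR_WINDOW_S      # fractional clock-rate change
print(f"{freq_error * 1e6:.1f} ppm")   # 11.6 ppm
print(f"{freq_error * 100:.4f} %")     # 0.0012 %
```

So each smeared second is about 11.6 microseconds too long, i.e. roughly the 0.001% the comment above cites.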
https://developers.google.com/time/
"We recommend that you don’t configure Google Public NTP together with non-leap-smearing NTP servers."
If you point your NTP clients at time1.google.com through time4.google.com, then don't point them at anything else.
That's all.
If you use Google's time servers, you are using them either to be fully in sync with Google or because you like the time-smearing feature. In both cases, just use them; don't mix. Think of it as a non-standard service that, for convenience, is API-compatible with "standard" NTP.
As magicalist pointed out, there are already other smearing algorithms online:
https://developers.google.com/time/smear#othersmears
and Google plans to switch to the new algorithm soon. If everyone who needs smearing standardizes on a single algorithm, so much the better: it becomes one more standard, under its own name, and it will be even more obvious to everybody what's going on. Obviously both approaches are needed, depending on the usage scenario.
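The linear "noon-to-noon" smear described at that link can be sketched as follows; the function name and structure are illustrative, not Google's actual code:

```python
# Sketch of a 24-hour linear leap smear, centered on the leap second.
WINDOW = 86400.0  # smear window length in seconds (24 hours)

def smeared_offset(elapsed_in_window: float) -> float:
    """Seconds to subtract from the unsmeared clock.

    Ramps linearly from 0 at the start of the window to a full
    second at the end, absorbing the leap second gradually.
    """
    assert 0.0 <= elapsed_in_window <= WINDOW
    return elapsed_in_window / WINDOW

# Halfway through the window -- at the leap second itself, for a
# smear centered on it -- the smeared clock is 0.5 s behind UTC:
print(smeared_offset(43200.0))  # 0.5
```

By the end of the window the smeared clock has given up exactly one second and agrees with post-leap UTC again, with no 61-second minute ever appearing.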
Network Time Protocol Best Current Practices
draft-ietf-ntp-bcp-02
1. Introduction

   NTP Version 4 (NTPv4) has been widely used since its publication as
   RFC 5905 [RFC5905]. This document is a collection of Best
   Practices from across the NTP community.

Holy leaping second, Batman! Unilaterally being off by up to half a second from the rest of the world's clocks is a pretty aggressive step. I think I would have preferred to see a resolution made by an independent body on something this drastic.
https://developers.google.com/time/smear#othersmears
> preferred to see a resolution made by an independent body
Independent bodies have spent the last decade debating if leap seconds should even exist. Agreeing on how to treat them if we keep them is way down the priority list.
Hilarious. Was this intentional?
[1] for good reasons
Google doesn't think so: "No commonly used operating system is able to handle a minute with 61 seconds"
> All Google services, including all APIs, will be synchronized on smeared time, as described above. You’ll also get smeared time for virtual machines on Compute Engine if you follow our recommended settings.
Smear is a workaround for those who care about phase alignment but don't care about frequency error. ... and who don't need to exchange times with anyone else. This last point reduces the set to no one, since it can't extend to everyone (some parties care a lot more about frequency error than phase error!).
This circus is enhanced by NTP's inability to tell you what timebase it's using (or, god forbid, the offsets between what it's giving you and other timebases...)
It's going to be especially awesome when NTP daemons with both smear and non-smear peers get both the smear frequency error AND a leap second.
I for one welcome this great opportunity for an enhanced trash fire to help convince the world that we need to stop issuing leap seconds. (It's absurd: leap seconds easily cause tens of millions of dollars in disruption, and it would take 4000 years to drift even an hour off solar time, at which point time zones could be rotated if anyone really cared.)
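The mixed-peer scenario above can be sketched numerically. Assume one peer serves a noon-to-noon linear smear and another serves strict UTC; the client behavior here (a naive average) is a simplification of real NTP clock selection:

```python
# Illustrative only: mid-smear disagreement between a smearing
# peer and a non-smearing peer, as seen by a client mixing both.
true_utc = 0.0                  # arbitrary reference instant, mid-smear
smeared_peer = true_utc - 0.5   # linear smear: 0.5 s behind at the leap
strict_peer = true_utc          # tracks UTC, will step 1 s at the leap

# A naive client averaging both sources ends up 0.25 s away from each:
estimate = (smeared_peer + strict_peer) / 2
print(estimate)  # -0.25
```

A real NTP daemon would more likely flag one peer as a falseticker than average them, but either way the client is reconciling two sources that legitimately disagree by up to half a second.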
I don't quite understand that point. E.g. the typical web server doesn't have much of a need to exchange precise time with others. HTTP, TLS, and the like require timestamps, and timestamps are shown to users occasionally, but as long as they are roughly right, that is enough. As long as all internal systems work off the same standard, it is fine. That seems to be the reasoning under which Google chose to use it, even though one might argue that with their cloud offerings they are not as insular.
http://arstechnica.com/science/2016/04/the-leap-second-becau...
Edit: the worst thing is that it would make those calculations harder to do correctly, and it would be too seductive to just not care about leap seconds. After all, what's a few seconds of error, really? This leaves you in a state where, guaranteed, 99% of time software will not be correct, and the error will compound over time.
Or... recognize that super rare bugs are inevitable and create a higher level way to avoid them entirely. I vote for option 2.
Now if I want to write software that uses precise TAI, I can't, because the time servers hand me broken (smeared) UTC, and on my side TAI is derived as UTC + tai_offset.
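That failure mode can be sketched directly. The 37-second offset is the real TAI-UTC value after the 2016-12-31 leap second; the epoch values are illustrative:

```python
# If TAI is derived as UTC + tai_offset, smeared UTC poisons TAI too.
TAI_OFFSET = 37.0  # TAI - UTC after the 2016-12-31 leap second

def tai_from_utc(utc_seconds: float) -> float:
    return utc_seconds + TAI_OFFSET

true_utc = 1000.0             # illustrative instant, mid-smear
smeared_utc = true_utc - 0.5  # server clock is 0.5 s behind true UTC

error = tai_from_utc(true_utc) - tai_from_utc(smeared_utc)
print(error)  # the derived TAI inherits the full 0.5 s error
```

TAI's whole point is being a uniform timescale with no leaps, so inheriting a smear's half-second wobble defeats it entirely.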
Here are two Red Hat articles on how to deal with the leap second, from 2016 and 2015:
https://access.redhat.com/articles/15145
http://developers.redhat.com/blog/2015/06/01/five-different-...