We'd be giving ourselves a Y2K-equivalent to deal with every century. Maybe that's not the biggest deal in the world, but I still remember the last one viscerally, and we're already a fifth of the way to the next one (not to mention the timestamp bug coming up in a decade).
I disagree with the other comment's suggestion that leap hours can be handled like DST. I see no reason to believe that DST code (which is generally implemented as an extra timezone) could be straightforwardly adapted to handle a single permanent shift of time.
The point is that we don't have to add the additional code to handle leap hours, as it can be implemented as a large swath of time zone changes.
We can imagine that there would be two versions of UTC, one before the leap hour (hereafter UTC) and one after the leap hour (hereafter UTC'). After the leap hour UTC and UTC' will coexist, where UTC' = UTC-1. Since time zone definitions themselves have UTC offsets, the trick is that instead of replacing all occurrences of UTC with UTC' (so that they would have UTC' offsets) we replace those offsets to have the same net effect. For example EST is UTC-5 before the leap hour and UTC'-5 = UTC-6 after the leap hour, so we pretend that the definition of EST changed to UTC-6.
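A toy sketch of that bookkeeping (everything here is invented for illustration: the leap-hour timestamp, the offset tables, and the function name):

```python
# Hypothetical illustration of the parent's trick: UTC itself stays frozen,
# and the leap hour is absorbed by rewriting every zone's UTC offset.
LEAP_HOUR_TS = 32503680000  # pretend the leap hour lands at 3000-01-01 UTC

OFFSETS_BEFORE = {"EST": -5}  # EST = UTC-5 before the leap hour
OFFSETS_AFTER = {"EST": -6}   # EST = UTC'-5 = UTC-6 after it

def utc_offset_hours(zone: str, ts: int) -> int:
    """Offset from the original (frozen) UTC at Unix time ts."""
    table = OFFSETS_BEFORE if ts < LEAP_HOUR_TS else OFFSETS_AFTER
    return table[zone]

print(utc_offset_hours("EST", 0))             # -5
print(utc_offset_hours("EST", LEAP_HOUR_TS))  # -6
```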
Note that this approach essentially fixes the UTC-TAI offset (as opposed to the UTC'-TAI offset) and makes the UNIX timestamp (time_t) deviate from the current version of UTC (well, UTC'). I believe this is just a matter of definition and thus much more doable than changing UTC every few years. After all, leap hours would be much rarer; the first one probably wouldn't occur for at least a few centuries.
tzdata updates happen all the time (in a relative sense). So code already needs to handle those, and I don't see why potential leap hours couldn't be handled the same way. It just means that Europe/London will eventually be +01:00 (or is it -01:00? anyway) instead of the +00:00 it is now.
Examples would be great. Especially examples of things relying on civil time to track rotation.
However, for civil time keeping and for some technical purposes, UTC as currently defined is exactly what we need. Don't fuck with it.
Programmers have to deal with the outside world. The outside world has timezones defined as offsets from UTC. So when UTC changes, the whole world changes their clocks, and programmers have to cope. If your program doesn't agree with that world about what time it is, your program will get blamed.
If it was a choice that I could individually make, I would tell astronomers to go pound sand. But it isn't. By historical happenstance, they have been put in charge of time, and it is truly a question of the tail wagging the dog when they care about a 1 second deviation between UTC and the position of the sun at a particular longitude.
You are asserting that without any justification. I see no benefits and lots of downsides in having leap seconds in civil time specifically.
Why not just use TAI for almost everything and leave UTC (including regular leap seconds) for applications that need it?
More and more I use Epoch time for most things just because of how convenient it is.
I rarely need sub-second granularity, and it's just one number, so it's sortable, math-able, etc.
Then whoever is doing the display can convert it any way they want to.
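For instance, in Python the conversion at the display edge might look like this (the timestamp and the fixed -5:00 offset are arbitrary choices for illustration):

```python
from datetime import datetime, timedelta, timezone

ts = 1700000000  # plain integer seconds since the Unix epoch: sortable, math-able

# Only the display layer picks a zone; storage and arithmetic stay on ts.
utc_view = datetime.fromtimestamp(ts, tz=timezone.utc)
est_view = datetime.fromtimestamp(ts, tz=timezone(timedelta(hours=-5)))

print(utc_view.isoformat())  # 2023-11-14T22:13:20+00:00
print(est_view.isoformat())  # 2023-11-14T17:13:20-05:00
```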
https://en.wikipedia.org/wiki/Leap_second
Edit: Actually I was slightly wrong; in the event of a leap second, we would repeat the same second.
GPS isn't an example of that because 1 second is already a quarter mile off.
It also contains UTC offset information so that you can derive the current UTC time from GPS if you wanted to.
Generally, you should only be dealing with UTC for things that have to display/accept a time to/from human users; there's no reason for computers to reason about the interval between events using UTC.
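In Python, for example, interval measurement would use the monotonic clock, which NTP steps and leap-second handling cannot move backwards:

```python
import time

start = time.monotonic()
time.sleep(0.05)  # stand-in for the work being timed
elapsed = time.monotonic() - start

# elapsed is guaranteed non-negative even if the wall clock is stepped
# backwards mid-measurement; time.time() offers no such guarantee.
print(f"elapsed: {elapsed:.3f} s")
```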
Neither UTC nor local times are times in the physical sense. At most you can say that they are elapsed times measured from a non-constant origin, one that changes frequently and in a way that is unpredictable for the future.
People always forget what UTC and local times really are and they very frequently make mistakes in their handling, especially for future events.
The foolproof way, as advocated by Bernstein and others, is to use only TAI for internal timekeeping and computation, and to convert to UTC or local times only the values that will be seen by human eyes.
I have worked this way for many years, and it has always been simpler for me. It would have been simpler still if this were general practice, as that would have eliminated the useless conversions required for communication with other systems that use UTC, e.g. when using NTP.
* Many apps can get away with only using TAI, so they don't necessarily have to track the difference between UTC and TAI
* If your system uses TAI in its distributed coordination mechanisms, you don't need to worry about leap seconds and an error in the UTC-TAI difference won't cause your system to fail
edit: Oh, I see that it's available in Linux https://www.man7.org/linux/man-pages/man3/clock_gettime.3.ht... . Can't find similar calls for Windows or OSX
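On Linux that clock is also reachable from Python (3.9+) as `time.CLOCK_TAI`; a quick sketch, noting that the kernel reports TAI == UTC until NTP/chrony has actually set the TAI-UTC offset:

```python
import time

if hasattr(time, "CLOCK_TAI"):  # Linux-only (kernel >= 3.10)
    tai = time.clock_gettime(time.CLOCK_TAI)
    utc = time.clock_gettime(time.CLOCK_REALTIME)
    # 37 s on a properly configured system (as of the 2017 leap second);
    # 0 if the offset was never set.
    print(f"kernel's TAI-UTC offset: {round(tai - utc)} s")
else:
    print("CLOCK_TAI not available on this platform")
```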
Good one. Wikipedia[0] is more helpful and states it comes from "temps atomique international", which is French.
[0]: https://en.wikipedia.org/wiki/International_Atomic_Time
Seems sensible to me actually. I don't really see why I would personally want to ever use UTC over TAI or local time. Sadly, a lot of time libraries don't seem to expose TAI as easily as they expose UTC, which I guess is why I continue to use UTC.
There's really nothing to be afraid of; a few services would implement some skew, the rest of us would get NTP updates and life continues.
If you read around the topic, there was plenty of non-conformance for the relatively recent leap seconds.
As you say, clock skewing happens continuously in NTP so that part looks proven.
I have no clue.
[1] If we ignore friction due to tidal forces from moon and sun (which is valid on these time-scales), the "earth system" is not affected by any non-central forces and so the angular momentum is constant.
But tidal forces are not completely irrelevant to leap seconds.
The length of the atomic SI second was calibrated to match the earlier ephemeris second, which was based on the length of a year. In practical terms, the ephemeris second was derived from Newcomb's Tables of the Sun, a model of the solar system developed at the end of the 1800s from historical astronomical observations: https://en.wikipedia.org/wiki/Newcomb%27s_Tables_of_the_Sun
So the length of the second is based on the length of the year 1900, and derived from older astronomical measurements of the rate of movement of bodies in the solar system.
Leap seconds exist because by the 1960s the length of day no longer matched Newcomb’s calculations for 1900: it was longer by 1 or 2 milliseconds.
...and some liquid (or at least semi-liquid) stuff beneath the crust?
This would assume that mined materials stay at a higher elevation than they were. And that the materials above those items didn't collapse down and/or aren't heavier than the mined mass.
P.S.: not serious
https://www.kinetica.co.uk/2014/03/27/chinese-dam-slows-down...
There are only two candidates I can think of:
1. Movements in magma below the Earth's crust.
2. Global warming. Less ice at the poles means that water has moved away from the Earth's axis, which will slow the rotation down. I haven't run the maths to see if that adds up though.
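A rough order-of-magnitude check of point 2 (all numbers here are ballpark assumptions, not measurements): treat a year's melted polar ice as a thin shell of water spread over the whole globe and conserve angular momentum.

```python
I_EARTH = 8.0e37   # kg m^2, Earth's moment of inertia
R_EARTH = 6.371e6  # m
LOD = 86400.0      # s, length of day

m_ice = 3e14  # kg, roughly 300 Gt of ice loss per year (ballpark figure)

# Thin spherical shell approximation for the redistributed meltwater;
# ice sitting near the rotation axis contributes ~0 before melting.
dI = (2.0 / 3.0) * m_ice * R_EARTH**2

# Angular momentum conserved: I * omega = const  =>  dT/T = dI/I
d_lod = LOD * dI / I_EARTH
print(f"day lengthens by about {d_lod * 1e6:.0f} microseconds per year of melt")
```

Under these assumptions the effect is on the order of microseconds per year of slowdown: real, but far too small to explain millisecond-scale changes, which points back toward candidate 1.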
Time related stuff sure is messy.
Someone ended up asking on one of the FreeBSD mailing lists, and it seems that they test for it:
* https://lists.freebsd.org/pipermail/freebsd-stable/2020-Nove...
So at least when it comes to the FreeBSD kernel and ntpd, there's nothing to worry about there. Userland code may be a different matter of course.
I’m curious what systems would catastrophically fall apart if time went back one second?
Going back one second is what generally happens now when a Linux box does the "inserting leap second" thing. It goes from 23:59:59.999999 to 23:59:59.000000, then runs that whole second again. You get to 23:59:59.999999 again, and then you finally roll over to 00:00:00.000000.
From the perspective of the typical time_t rendering of Unix time, there is no way to uniquely represent that 61st second. It just "disappears".
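You can see the missing slot with nothing more than the standard library: 2016 ended with a real leap second (23:59:60), yet the Unix timestamps on either side of it differ by only one.

```python
import calendar

# calendar.timegm maps a UTC struct_time to a Unix timestamp, which by
# definition pretends every day has exactly 86400 seconds.
before = calendar.timegm((2016, 12, 31, 23, 59, 59, 0, 0, 0))
after = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))

print(after - before)  # 1, even though 2 SI seconds elapsed between them
```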
Having lived through systems going backwards some 17 seconds due to a botched NTP-GPS appliance config, I can tell you that what died at that particular site was all of the locking code that used wall time clocks instead of monotonic clocks. They all CHECKed and died when their preconditions were no longer valid.
This didn't have to happen. They could have used monotonic from the get-go, and then they wouldn't have died when the clocks got yanked backwards 17 seconds to the proper time base.
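A minimal sketch of that fix (names invented): put the lease deadline on the monotonic clock, so yanking the wall clock backwards 17 seconds can neither invalidate nor re-validate the precondition.

```python
import time

LEASE_SECONDS = 5.0

# Monotonic deadline: unaffected by NTP steps, leap seconds, or an admin
# setting the clock. A time.time()-based deadline is the version that dies.
deadline = time.monotonic() + LEASE_SECONDS

def lease_still_valid() -> bool:
    return time.monotonic() < deadline

print(lease_still_valid())  # True while inside the lease window
```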
This is a great and really intuitive summary of the problem. Is there a certain class of problem related to disappearing second? Like are these more likely to be filesystem issues or things that rely on timestamps? Or are there second order problems as well?
>"I can tell you that what died at that particular site was all of the locking code that used wall time clocks instead of monotonic clocks. They all CHECKed and died when their preconditions were no longer valid."
Sorry if this is a silly question, but was that check simply that "time t1 is greater than time t0"? Also, did the duration of that outage (17 seconds) matter, or would this have been equally catastrophic at a single second?
Well then, it really was botched. I was running an off-line hodgepodge of 50 or so RH7.3 + Win2K systems back in 2002, where I had to manually fix the few seconds of drift every couple of months, and the RH machines all slewed faster/slower to adapt; none ever went backward or jumped forward. (I don't remember how the Win2K machines handled it; they weren't mission critical and I didn't care much.)
Updating the clock by one second means taking a whole cluster offline until you are sure the clocks are synchronized exactly before bringing them online again. That means an outage and potential headaches.
If the clocks are not synchronized you break the consensus, might cause transactions to commit partially or cause inconsistent reads and writes.
Any distributed system that breaks in the presence of leap seconds is necessarily poorly designed.
The fact that any non-UI code uses any time standard except TAI is a very unfortunate historical mistake that will hopefully be cleaned up over the next few years.
Thankfully, since 2013 the Linux kernel has supported TAI.
Does anyone know why Earth's rotation has sped up?
The Earth is built in layers. The solid bit we see is called the crust. Beneath it is a layer of hot rock called the mantle; it is mostly solid but flows slowly over geologic time. At the center of the Earth is the core, with a liquid outer part and, where the pressure is highest, a solid inner part.
All of the things affecting the Earth's rotation in the long term pull on the crust. The two biggest are tides (slowing us by 2.3 ms/century) and glacial rebound (estimated to speed us up by 0.7 ms/century). The result is that the crust and the core wind up turning at different rates and interact with each other through the mantle, which transfers angular momentum back and forth between them.
The article that first showed this is https://www.nature.com/articles/333353a0 if you want to look further.
Learned something new today!
Uniquely specifying the four times in the first sequence as UNIX timestamps isn't strictly possible, but it is possible with negative leap seconds.