Our units of temporal measurement, from seconds on up to months, are so complicated, asymmetrical and disjunctive so as to make coherent mental reckoning in time all but impossible. Indeed, had some tyrannical god contrived to enslave our minds to time, to make it all but impossible for us to escape subjection to sodden routines and unpleasant surprises, he could hardly have done better than handing down our present system. It is like a set of trapezoidal building blocks, with no vertical or horizontal surfaces, like a language in which the simplest thought demands ornate constructions, useless particles and lengthy circumlocutions. Unlike the more successful patterns of language and science, which enable us to face experience boldly or at least level-headedly, our system of temporal calculation silently and persistently encourages our terror of time.
… It is as though architects had to measure length in feet, width in meters and height in ells; as though basic instruction manuals demanded a knowledge of five different languages. It is no wonder then that we often look into our own immediate past or future, last Tuesday or a week from Sunday, with feelings of helpless confusion. …
—Robert Grudin, Time and the Art of Living.
[1] https://www.gnu.org/software/coreutils/manual/html_node/Date...
https://ciechanow.ski/gps/#time
> "When it comes to the flow of time on those satellites, there are two important aspects related to Einstein’s theories of relativity. Special relativity states that a fast moving object experiences time dilation – its clocks slow down relative to a stationary observer. The lower the altitude the faster the satellite’s velocity and the bigger the time slowdown due to this effect. On the flip side, general relativity states that clocks run faster in lower gravitational field, so the higher the altitude, the bigger the speedup is."
> "Those effects are not even and depending on altitude one or the other dominates. In the demonstration below you can witness how the altitude of a satellite affects the dilation of time relative to Earth..."
There's also a nice if complex explanation of why your GPS receiver needs four satellite emitters to calculate the time bias of its clock.
For more fun: A less known fact is that SR also states that from the POV of the satellite, the satellite is a stationary observer and the guy on earth is fast moving, therefore that observer's clock slows down relative to the satellite.
Yes, that's right: Each of them sees the other one's clock as slower.
IIRC it takes the fact that the satellite moves in an orbit, not in a straight line, to resolve which one is actually "slower", because up until then the word "slower" isn't really meaningful at all.
Unix time should be a steady heartbeat counting up the number of seconds since midnight, January 1 1970. Nice, clean, simple. How you might convert this number into a human-readable date and time is out of scope/implementation-defined/an exercise for the reader/whichever variation of "not my problem" you prefer.
This is nice and clean, so long as you have exactly one computer. The second there's more than one, and they're talking to each other, their clocks can go out of sync. And they will, because they are physical systems that are imperfect and in general much less precise than you'd expect them to be.
This means that there has to be a way to correct for errors. The best method, which almost everyone who manages a lot of computers converges on, is to "smear out" any errors: never discretely change the time on any machine, but shorten or lengthen seconds slightly to bring any outliers back to the correct values. And once you have this system, using it to deal with a leap second is the easiest, simplest and least error-prone method.
I do think that there are purposes where local "machine time", which is just a monotonic clock counting upwards from bootup, would make sense. Especially when subsecond accuracy is important. But it should always be clear that there is no way to reliably convert between that and wallclock or calendar time. There are *no* intervals of calendar/wallclock time that reliably convert to any interval of machine time. It is not guaranteed that any wallclock minute contains exactly 60 machine seconds.
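The distinction between "machine time" and wall-clock time is visible directly in Python's stdlib; a minimal sketch (`time.monotonic()` counts from an arbitrary origin, so there is no meaningful conversion to calendar time):

```python
import time

# Wall-clock time: seconds since the Unix epoch, but it can jump
# backwards or forwards if NTP or an admin adjusts the clock.
wall = time.time()

# Machine time: a monotonic counter from an arbitrary origin (often
# boot).  It never goes backwards, but its zero point is meaningless,
# so there is no reliable conversion to calendar or wall-clock time.
mono = time.monotonic()

# Measure an elapsed interval with the monotonic clock only.
start = time.monotonic()
time.sleep(0.05)
elapsed = time.monotonic() - start
print(f"slept for about {elapsed:.3f} machine seconds")
```

Subtracting two `time.time()` readings, by contrast, can yield a negative "duration" if the clock was stepped in between.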
I fielded tech support questions from an app that had grown to send email. Every week we got a couple of emails from some time on January 1, 1970, each from a new person, and a whole slew of people whose batteries were on the edge of failing and so their machine was days or months off from reality.
The HTTP 1.0 spec already had a solution for two machines with different ideas of the current date. It’s one of my favorite parts of the spec and I’ve used it a few times in order to avoid having to implement my own time negotiation protocols (or in fact to stop others from doing it).
I don’t think that battery chemistry has changed all that dramatically since that time. It was still a 2032 cell, for machines that have a discrete battery. Instead it’s the clock chips and network time protocols that have gotten more efficient, and we use the machines more consistently. Or at least the machines where timekeeping matters most are on all the time.
The cost is that conversion to/from civil time is far more complicated, and worse, cannot be computed for future dates for which leap seconds have not yet been determined.
I think that 86,400-second days with a 24-hour leap smear hits a sweet spot of utility and usability: https://developers.google.com/time/smear
There are very few applications that will know or care that seconds get 0.001% longer for 24 hours.
If you need better conversion, you have to know how the system chose to handle leap seconds. If it smears, you have to know and implement the smear algorithm, if it doesn't then some unix timestamps will map to two different points in civil time (second 59 and 60 in any hour with a leap second).
For 99.9% of applications, it's not a bad tradeoff. But if we're willing to fudge time by up to a second, recording time in TAI (just count a second every second) and living with future dates sometimes being off by a couple seconds when converted to UTC isn't that different in terms of utility/usability tradeoff, and conceptually simpler.
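The linear smear itself is trivial arithmetic; here's a sketch in the spirit of the 24-hour scheme linked above (illustrative only, not Google's implementation: a positive leap second smeared over a noon-to-noon window, which stretches each second by 1/86400, about 0.0012%):

```python
SMEAR_SECONDS = 86_400  # 24-hour window, noon UTC to noon UTC

def smeared_offset(elapsed_in_window: float) -> float:
    """Offset (in seconds) by which the smeared clock lags a
    non-smeared reference, growing linearly from 0 to 1 across
    the window and clamped outside it."""
    frac = min(max(elapsed_in_window / SMEAR_SECONDS, 0.0), 1.0)
    return frac  # reaches a full second at the end of the window

# Halfway through the window (at the actual leap second) the smeared
# clock is half a second behind a non-smeared one.
print(smeared_offset(43_200))  # 0.5
```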
> Vell, I'm just zis guy, you know?
(A quotation from Gag Halfrunt, Zaphod Beeblebrox's "personal brain care specialist".)
And another... (Also, given Adams wrote a few Dr. Who episodes, one of which turned into Life, the Universe and Everything, the "Time Lords" reference could be another.) The "Oh, and one more thing" is a slight misquote of, "And another thing."
As DNA is my favorite author bar none, I like this post a lot.
I was surprised to see that very few systems other than computers (servers and suchlike) use Unix timestamps. GNSS systems have their own representation of time, and the protocol most network equipment uses is called PTP. Nowadays most are trying to use the White Rabbit protocol.
I was at a conference on this and there are a lot of crazy applications of these timing and sync technologies. The main one is in 5G networks, but the high-frequency trading companies do all of their trading in FPGAs now to reduce the time to make a decision, so accurate timing is essential. Even the power companies need high-accuracy time to detect surges in power lines. I spoke to someone from Switzerland who said they had serious security concerns, as the timing fibres are poorly secured and open to easy disruption.
It was a very interesting domain to work in even though I only did the app part of the thing. Didn't pay enough though and I was promised a promotion that never came.
It's not so much that (Ethernet?) network equipment uses PTP, but rather to get the accuracies desired (±nanoseconds) there needs to be hardware involved, and that makes baking it into chips necessary. It's an IEEE standard so gets rolled into Ethernet.
Applications for PTP are things like electrical grid and cell network timings. Most day-to-day PC and server applications don't need that much accuracy.
Most office and DC servers generally configure NTP, which gives millisecond (10^-3) or tens/hundreds-of-microseconds (10^-6) accuracy. Logging onto most switches and routers you'll probably see NTP configured.
To get the most out of PTP (10^-9) you generally need to run a specialized-hardware master clock.
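NTP, mentioned above, doesn't even share the Unix epoch: its timestamps count seconds since 1900-01-01, split on the wire into 32 bits of seconds and 32 bits of fraction. Converting to Unix time is a fixed shift (a sketch; the 2,208,988,800-second constant is 70 years including 17 leap days):

```python
# NTP timestamps count seconds since 1900-01-01; Unix time counts
# from 1970-01-01.  The fixed difference is 70 years including 17
# leap days: (70 * 365 + 17) * 86,400 = 2,208,988,800 seconds.
NTP_DELTA = 2_208_988_800

def ntp_to_unix(ntp_seconds: int, ntp_fraction: int) -> float:
    """Convert a 64-bit NTP timestamp (32-bit seconds plus 32-bit
    binary fraction) to a Unix timestamp."""
    return ntp_seconds - NTP_DELTA + ntp_fraction / 2**32

# 2,208,988,800 NTP seconds is exactly the Unix epoch.
print(ntp_to_unix(2_208_988_800, 0))  # 0.0
```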
Well, seconds have not been defined as "a convenient fraction of the time it takes a particular large rock to complete a full rotation around its own axis" for quite some time, and the origin is set to an abstract event in the past, which is not (as far as I know) subject to retroactive revision as a consequence of the vagaries of planetary or celestial motion (if it is, I would be fascinated to know more.)
That is true.
> origin is set to an abstract event in the past
That is also true.
> which is not (as far as I know) subject to retroactive revision as a consequence of the vagaries of planetary or celestial motion
I'm afraid you are wrong on that. Unix time is synced with UTC. UTC has so-called "leap seconds" scheduled at irregular intervals by the International Earth Rotation and Reference Systems Service to keep it in sync with the Earth's actual movements. So in effect the Unix timestamp is wrangled to sync with the Earth's motion.
> if it is, I would be fascinated to know more
Examples: https://en.wikipedia.org/wiki/Unix_time#Leap_seconds
I also learned that 00:00:00 UTC on 1 January 1970 is a proleptic time, as UTC was not in effect then, though I am not sure that makes it subject to subsequent planetary motions.
No, it isn't. The very article you linked explains that Unix time is monotonically increasing and ignores leap seconds.
Seconds have not ever been defined that way, because the time it takes for the earth to complete a full rotation around its own axis (the "sidereal day") was never a question of much interest. It's mostly relevant to astronomers.
Seconds were always defined in terms of the synodic day: the time it takes for a point on the earth that is aimed directly at the sun to rotate with the earth until it is once again aimed directly at the sun.
They still are defined that way, in the sense that the only purpose of other definitions is to match the traditional definition as exactly as possible. If cesium started vibrating faster or slower, we'd change the "official definition" of a second until it matched the traditional definition. Given that fact, which definition is real and which isn't?
I think we need both a way to reckon our everyday experience of passing time and a constant measure, and one cannot be just a special case of the other. Personally, I feel that introducing leap seconds into Unix time just muddied the waters.
As to the sidereal day being of little interest outside of astronomy, James Watt's patron/partner Matthew Boulton built a sidereal clock and tried hard to find a purchaser (even having it shipped to Catherine the Great in hopes of sparking some interest) without success. It was on display in his Birmingham home when we visited a few years ago.
https://www.thefreelibrary.com/Boulton+clocks+on+to+the+past...
A second being 1/86400th of a day (24 hrs * 60 minutes * 60 seconds per minute) is still essentially true, and still essentially based on the seasons and our movement around the sun (or the relative movements between the various bodies in the solar system, depending).
Being a chaotic natural system, we of course need to add fudge factors here and there (like leap seconds and all) to simplify the math day to day while keeping it aligned with observed reality, at least where it intersects with reality in a material way.
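The 86,400 factor is also exactly how Unix timestamps decompose into civil days, precisely because leap seconds are ignored; a small illustration:

```python
# 86,400 = 24 * 60 * 60 seconds per civil day.  Splitting a Unix
# timestamp this way yields days-since-epoch and seconds-into-day,
# ignoring leap seconds (which Unix time does by design).
SECONDS_PER_DAY = 24 * 60 * 60

def split_timestamp(ts: int) -> tuple[int, int]:
    days, seconds = divmod(ts, SECONDS_PER_DAY)
    return days, seconds

print(split_timestamp(86_401))  # (1, 1): one day and one second in
```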
No, but it is a phone number in the Raleigh area.
Shamelessly I'll here remind people to never use gettimeofday() to measure time. E.g. setting the time backwards used to cause any currently running "ping" command to hang. (they fixed it about 10 years after I sent them a patch)
More fun examples of bugs like that at https://blog.habets.se/2010/09/gettimeofday-should-never-be-...
It comes from a natural inclination of I want something to expire some period from now.
The natural way is to say: what time is it now? Figure out what time you want to expire with a date add. Then busy wait until that time using some form of gettime. The very big assumption you make is that the gettime methods always move forward. They don't.
This bug is easy to make by thinking you are treating a wait item as a calendar event. It's not. You need to find something to wait on that always counts up. Do not use the system clock. Also pick something that counts at a known fixed rate. Not all counters are equal. Some can skew by large margins after an hour, triggering when you do not expect. Which makes people want to reach for the system clock again. If you somehow decide "I will use the clock", be prepared for the pain of the dozens of different ways that fails.
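A sketch of the deadline pattern described above, computed against the monotonic clock rather than wall time (the function name and polling interface are illustrative, not from any particular library):

```python
import time

def wait_until_deadline(timeout_s: float,
                        poll=lambda: False,
                        interval: float = 0.01) -> bool:
    """Wait up to timeout_s for poll() to return True.  The deadline
    is computed against the monotonic clock, so stepping the system
    clock backwards cannot make this loop hang forever."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if poll():
            return True
        time.sleep(interval)
    return False

print(wait_until_deadline(0.05))  # False: nothing to poll, times out
```

Had `deadline` been computed from `time.time()`, an adjustment of the system clock during the wait would shift the expiry by the same amount, which is exactly the ping-style hang.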
Sometimes we used, for dev and testing, dedicated machines running with clocks set 20-25 years in the future. (They were a measurable investment of capital, power, and cooling back then.) This was smart: the remnants of DEC were able to sidestep the whole Y2K cluster**k.
Are our key tech vendors doing the same now? It's about a quarter century until the 2038 fin-de-siecle. I sure would like some assurance that the OSs, DBMSs, file systems, and other stuff we all adopt today will have some chance of surviving past 2038-01-19.
I know redefining time_t with 64 bits "solves" the problem. But only if it gets done everywhere. Anybody who's been writing programs for more than about 604 800 seconds has either thought through this problem or hasn't.
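The rollover moment itself is easy to compute from the signed 32-bit limit:

```python
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold, and the moment
# it corresponds to.
last_second = 2**31 - 1  # 2,147,483,647
print(datetime.fromtimestamp(last_second, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- one second later, a 32-bit counter
# wraps to -2**31, which lands back on 1901-12-13.
```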
That's an inconvenient property to have.
https://news.mit.edu/2022/quantum-time-reversal-physics-0714
https://www.youtube.com/watch?v=eOti3kKWX-c
I like to pretend I can fix computers, this guy can actually fix computers, which is probably why I find his show so fascinating.
Putting on my pedantic hat though, I see that East and West were switched in the discussion of Japan's unique 50/60Hz AC frequency split, and I can't get my mind off it. Hope you can make the edit.
If you still have such a machine (preferably without any valuable data), try setting the date right before the overflow with this command
date -s @$[2**31 - 10]
and see if you can recover the system without reboot.
I have seen different daemons stop responding and just consume CPU, NTP clients flooding public servers, and other interesting things. I suspect many of these systems will still be running in Y2038, and that day the Internet might break.
A digression: The documented bash syntax for arithmetic expansion is $(( EXPRESSION )). I see that $[ EXPRESSION ] also works; it's the older, now-deprecated form of arithmetic expansion, which is why it's barely documented anymore. (Both syntaxes also work in zsh.)
I wouldn't be surprised if a lot of companies will handle it by just setting the clocks back 50 years or so on industrial equipment. But God have mercy on those that forget some systems.
https://en.wikipedia.org/wiki/Doomsday_rule
At a glance it looks like 1971 and 1993 might be good alternatives for 2038. The command line cal program agrees with where dates start each month for all three years.
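That equivalence is easy to check programmatically; two years share a calendar when they start on the same weekday and agree on being leap years (a small sketch; 1971, 1993 and 2038 are all non-leap years starting on a Friday):

```python
import calendar
from datetime import date

def same_calendar(a: int, b: int) -> bool:
    """True when years a and b have identical month layouts:
    same starting weekday and same leap-year status."""
    return (date(a, 1, 1).weekday() == date(b, 1, 1).weekday()
            and calendar.isleap(a) == calendar.isleap(b))

print(same_calendar(1971, 2038), same_calendar(1993, 2038))  # True True
```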
- small, limited, and/or fixed data size
- compatibility across systems
- range of expressible times
- monotonicity (either locally or distributed)
- express absolute datetimes accurately
- express relative offsets accurately
- accurate over very short or very long timescales (or both)
- synchrony to solar noon
- synchrony to sidereal time
- timezones
- relativistic effects
Pick any, uhhh, well pick as many as you can and try to shoehorn them together and you get your typical time implementation.
Earth time: UT (UT1).
UTC is a vague approximation of UT1 using TAI and a leap seconds data source similar to tzdata.
On most modern CPUs, there is an accurate monotonic counter. The problem though is it doesn't know anything about when it was started or when it went to sleep in any time system.
Oh and bugs. Some systems interpret edge cases in one way, and others in others. Academic purity, interoperability: pick 1.
And then calendars are discontinuous through the years: days were skipped around October 15, 1582, and again in September 1752 in Britain and its colonies. Which calendar system was in use when also varies by country; the Julian and Gregorian calendars were in dual use until 1923.
Daylight savings time rules change. Timezones change. And then you get tzdata.
Excerpt From A Deepness in the Sky by Vernor Vinge
Unix time is UTC based, so it ignores leap seconds and deviates from how many seconds actually passed since the start of the epoch.
The actual number of seconds passed corresponds to TAI, but you can't convert future timestamps from TAI to UTC since you can't predict leap seconds, so you can't display future TAI timestamps using typical date notation.
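For timestamps in the current era, the TAI-to-UTC shift is a fixed constant; a sketch (the 37-second value is the published TAI−UTC offset in effect since 2017, and reusing the UTC tzinfo as a label for a TAI reading is a simplification, since datetime has no TAI zone):

```python
from datetime import datetime, timedelta, timezone

# TAI - UTC has been exactly 37 seconds since 2017-01-01, and stays
# that way until the next leap second is announced.  For far-future
# dates the offset cannot be known in advance.
TAI_MINUS_UTC = timedelta(seconds=37)

def tai_to_utc(tai_reading: datetime) -> datetime:
    """Shift a TAI clock reading to UTC; valid only while the
    37-second offset holds."""
    return tai_reading - TAI_MINUS_UTC

tai = datetime(2024, 1, 1, 0, 0, 37, tzinfo=timezone.utc)  # a TAI reading
print(tai_to_utc(tai))  # 2024-01-01 00:00:00+00:00
```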
> For recording and calculations, it doesn't.
Depends on what you're recording and calculating. Storing a UTC timestamp generally works when recording an event that already happened.
But it doesn't work for scheduling events like meetings, since there the authoritative time is often local time. If you simply store such future events in UTC you'll run into problems when the definition of the relevant timezone changes.
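A sketch of the safer pattern for future events: store the civil time plus the zone name as agreed, and resolve to UTC only at display time with whatever tz rules are then current (assumes a tzdata database is available to `zoneinfo`; the meeting values are made up):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Store the meeting the way it was agreed: local civil time plus an
# IANA zone name, NOT a precomputed UTC instant.
meeting_local = "2030-05-03 12:00"
meeting_zone = "Europe/Berlin"

# Resolve to UTC only when needed, using the tz rules in effect at
# resolution time.  If Berlin's rules change before 2030, re-resolving
# gives the corrected instant; a stored UTC value would silently drift.
dt = datetime.strptime(meeting_local, "%Y-%m-%d %H:%M").replace(
    tzinfo=ZoneInfo(meeting_zone))
print(dt.astimezone(ZoneInfo("UTC")))
```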
Both past timestamps and future timestamps are not exact numbers, but approximate numbers, affected by errors.
The only difference between past timestamps and future timestamps that are converted from TAI to UTC is in the magnitude of the errors.
If the past timestamps have been acquired on a computer whose clock had been synchronized by NTP, then the errors are likely to be less than 10 milliseconds.
If the past timestamps have been acquired on a computer whose clock had not been synchronized with a precise source, then the errors are likely to be greater than a second, and they may be as large as several minutes.
For future timestamps, the uncertainty in the difference between TAI and UTC makes the likely errors several seconds, increasing the further the timestamp lies in the future.
In conclusion, one must be aware of the probable magnitude of the errors affecting a UTC timestamp, but that does not prevent conversions between TAI and UTC whenever that is desired.
One of the greatest mistakes when dealing with time values is handling them like they were exact values, which they are not.
To handle future timestamps with minimal errors, it is necessary to have 2 distinct data types: a TAI time expressed as a number of some time unit, e.g. nanoseconds, and a UTC time that is a structure containing the date and time of day. A UTC time must never be expressed as a number of seconds from some past date. That is the huge mistake made by "UNIX time".
Future timestamps must be stored using the appropriate data type, e.g. UTC time for a future scheduled meeting.
The hardware clocks and the low-level clock synchronization and handling programs should always use TAI time.
When the current time approaches a future UTC timestamp, the error in the estimate of the corresponding TAI time diminishes. Leap seconds are announced many months in advance, so the correct TAI time for a future UTC timestamp will also be known at least a few months in advance. There is no risk that your computer will fail to notify you at the right time for the meeting, except when a naive programmer handles the timestamps wrongly, which is unfortunately quite frequent.
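The two-type discipline argued for above can be sketched in a few lines (hypothetical type names, purely illustrative):

```python
from dataclasses import dataclass

# TAI time: a plain count of SI time units from a fixed epoch.
@dataclass(frozen=True)
class TaiTime:
    nanoseconds: int  # SI nanoseconds since some fixed TAI epoch

# UTC civil time: a broken-down date and time of day, never a
# second count.  Note the second field ranges 0..60, so a leap
# second (:60) is representable -- something a seconds-since-epoch
# encoding like Unix time cannot express.
@dataclass(frozen=True)
class UtcCivilTime:
    year: int
    month: int
    day: int
    hour: int
    minute: int
    second: int  # 0..60 inclusive

meeting = UtcCivilTime(2030, 5, 3, 12, 0, 0)
leap = UtcCivilTime(2016, 12, 31, 23, 59, 60)  # the most recent leap second
print(meeting, leap)
```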
Edit: I’m not entirely correct here, as aside from UTC the time zone, there is also UTC the time standard. See below.
You cannot, by definition, tell how many Unix seconds later "third of may, 2025, at noon, local time" is - there is no way to convert future, local times to Unix times, because that "conversion code" is not fixed. Sure, we can reasonably expect that definition of time flow relationship to local time will not change for past dates, but one must expect these changes for times in the future.
Local time requires a time zone, time zones are defined in relation to UTC, and there is an unknown number of leap seconds between now and 3 May, 2025.
Time. Time. What is time? Swiss manufacture it. French hoard it. Italians squander it. Americans say it is money. Hindus say it does not exist. Do you know what I say? I say time is a crook.
"there has also been an astronomical tradition of retaining observations in just the form in which they were made, so that others can later correct the reductions to standard if that proves desirable, as has sometimes occurred." [https://en.wikipedia.org/wiki/Epoch_(astronomy)]
UNIX time, like all times, is a very good one, if we but know what to do with it. -Ralph Waldo Emerson