18

I've been asking myself a question after reading papers saying that around the year 2035, we (and our computers, by the way) might have to set our time back by one second to stay in line with astronomical time.

When I read that, my first thought was that this would put computers in great trouble: a clock that goes back and replays a second that has already happened wouldn't be supported by many programs, starting with operating systems.

But then I realized that we already do something similar every year: we go back one hour when switching to winter time. I don't know if it's the same in every country, but at 3 o'clock in the morning we decide that it's 2 o'clock.


How does a Linux operating system manage timestamped operations when this hour change happens and the timestamp goes from 2024-10-27 02:59:59.999999 to 2024-10-27 02:00:00.000000? Anything ordered by time, a message log for example, would be thrown off.

But maybe that isn't what actually happens, and going from 2024-10-27 02:59:59.999999 to 2024-10-27 02:00:00.000000 is still, in terms of the system timestamp, going from 1729990799 to 1729990800 (+1 second)?

And in that case, would the problem of removing one second from our time in 2035 be different from going back one hour like we do each year? For example, it could lead us to assign:

    2051222399 = 2034-12-31 23:59:59
    2051222400 = 2035-01-01 00:00:00
    2051222401 = 2035-01-01 00:00:00
    2051222402 = 2035-01-01 00:00:01

But the problem here is that:

  1. Computers with a corrected OS will meet computers without the correction (an OS too old or not updated), which believe that 2051222401 = 2035-01-01 00:00:01 and 2051222402 = 2035-01-01 00:00:02 when that's not the case. But this is the same problem as with any OS that still exists and doesn't know about summer and winter time changes.

  2. I guess that there aren't many examples of code like this around the world:

    timestamp = System.currentTimeMillis()   // milliseconds since 1970-01-01 00:00:00 UTC
    millis = timestamp % 1000
    secondsSince1970 = timestamp / 1000
    seconds = secondsSince1970 % 60
    minutesSince1970 = ...
    

    because here, the seconds variable becomes wrong in 2035 if the second removal is applied, since 2051222402 should yield 01 and not 02.


6 Answers

45

In Linux, the operating system maintains a clock that runs fundamentally in UTC time, which does not have Daylight Saving time shifts.

The (usually one-hour) Daylight Saving Time shift is handled not by changing the clock, but by changing the UTC offset applied to it when displaying local time.

As a result, in a Central European timezone for example, the timestamp

2024-10-27 02:59:59.999999 UTC+2  =  2024-10-27 00:59:59.999999 UTC = 1729990799

will be followed by timestamp

2024-10-27 02:00:00.000000 UTC+1  =  2024-10-27 01:00:00.000000 UTC = 1729990800

(Note that I'm not using POSIX timezone specifiers here: since the systems that the POSIX specifications were based on were developed mostly in America, and their developers reserved the positive-integer timezone specifiers for themselves, the sign of a POSIX timezone offset is inverted from what you might expect based on the general understanding of UTC offsets.)
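
You can check this from a shell with GNU date (a quick sketch, assuming the C locale, a reasonably current tzdata, and Europe/Paris as the example Central European zone):

$ TZ=Europe/Paris date --date='@1729990799'
Sun Oct 27 02:59:59 CEST 2024
$ TZ=Europe/Paris date --date='@1729990800'
Sun Oct 27 02:00:00 CET 2024

The epoch count just keeps increasing by one second; only the UTC offset used for display (CEST, UTC+2, then CET, UTC+1) changes.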

This is why, if a program must ever store timestamps in local-time format, it should always include some UTC offset identifier together with the timestamp. If a program needs to avoid human-scale ambiguities at the end of Daylight Saving Time, it should always store timestamps internally in some UTC-equivalent format.


"Moving UTC time back one second" is equivalent to inserting one extra second in the timescale. This is not a new thing, and the UTC time standard already has a standard way to do it: leap seconds. At the end of every June and December (UTC), there is an opportunity to insert a leap second, so that e.g.

xxxx-12-31 23:59:59 UTC

will be followed by

xxxx-12-31 23:59:60 UTC

and then by

(xxxx+1)-01-01 00:00:00 UTC

Whether or not this is actually done depends on measurable imperfections in Earth's rotation, as decided by the international organization IERS.

The last time a leap second was inserted was at the end of year 2016:

https://hpiers.obspm.fr/eoppc/bul/bulc/UTC-TAI.history

Currently, there is no leap second insertion planned for the end of June 2024. The authoritative source for the next leap second insertion is IERS Bulletin C:

https://datacenter.iers.org/data/latestVersion/bulletinC.txt

The NTP time synchronization protocol has a leap second announcement feature to cover this. The Linux date command will dutifully display a 23:59:60 UTC timestamp at the appropriate time to demonstrate that the OS is aware of what's happening, but obviously this means that not all Unix-timestamp seconds are equal in length: at leap second insertion, most OSes consider one Unix-time second to be stretched to the length of two seconds.
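
You can also see that the POSIX timestamp itself has no slot for the inserted second: around the 2016 leap second, consecutive epoch values go straight from 23:59:59 to 00:00:00 (GNU date again, output shown for the C locale):

$ date -u --date='@1483228799'
Sat Dec 31 23:59:59 UTC 2016
$ date -u --date='@1483228800'
Sun Jan  1 00:00:00 UTC 2017

The 23:59:60 second that was broadcast to the world has no epoch number of its own.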

(NTP also has a facility for a "negative leap second", but so far, this has never been required in practice, and is not expected to be needed in foreseeable future.)

A newer, pragmatic alternative is leap smearing: the extra second is handled by slowing the system clock slightly, so that the extra second is accounted for over a day or so. This is based on the idea that the uniformity of the length of each second is more important than the absolute accuracy of the timestamp in the +/- 1 second range. It can be a valid solution for most "general purpose" uses of time.
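
As a rough illustration of what a linear smear means for a client, here is a toy calculation (not any vendor's exact implementation): a hypothetical 24-hour, noon-to-noon smear around the 2016-12-31 leap second (epoch 1483228800), evaluated at midnight UTC:

$ awk -v now=1483228800 -v start=1483185600 'BEGIN {
    # start = 2016-12-31 12:00:00 UTC; the extra second is spread over 86400 s
    frac = (now - start) / 86400
    if (frac < 0) frac = 0; if (frac > 1) frac = 1
    printf "fraction of the leap second absorbed so far: %.3f\n", frac
}'
fraction of the leap second absorbed so far: 0.500

Halfway through the window, half of the extra second has been absorbed; by the end of the window a smeared clock agrees with UTC again.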

The leap second is obviously an issue for those that need sub-second timing accuracy at all times, or an exactly accurate count of seconds over all timespans longer than half a year or so. However, it turns out that people who need this kind of accuracy are mostly already aware of the fact and are already dealing with it.

If you need such high-precision timekeeping in a Linux/Unix system, you could set up a local time synchronization facility (e.g. a modified NTP server) that distributes TAI (International Atomic Time) instead of UTC. Then you could have your system clocks run in TAI instead of UTC, and use the right/ variants of the timezones in the IANA/Olson timezone database (e.g. right/Europe/Paris instead of just Europe/Paris): these take leap seconds into account.
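
As an illustration of the difference, GNU date can render a leap-second-counting time_t value with the right/ zones, if they are installed on your system (this sketch assumes the usual tzdata leap table, with 26 leap seconds inserted before the 2016 one):

$ TZ=right/UTC date --date='@1483228826'
Sat Dec 31 23:59:60 UTC 2016
$ TZ=UTC date --date='@1483228826'
Sun Jan  1 00:00:26 UTC 2017

The same number gets two different labels, because the right/ zones expect time_t to have counted every leap second since 1972, while the ordinary zones expect plain POSIX time.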

  • 3
    @ilkkachu I assume this means what's prioritised is the uniformity of the length of each "adjacent" second - in the end there will still be two instances of seconds differing in "duration" (at the start and end of the smearing period), but that difference (1:1+1/86400) will be much smaller than the 1:2 ratio around the leap second (where a specific second lasts twice as long).
    – N1ark
    Commented Mar 31 at 20:48
  • 9
    @ilkkachu eg. if you set a 5 second timer, you wouldn't want it to last 6 seconds, whereas it lasting ~5.00005787 seconds is probably acceptable
    – N1ark
    Commented Mar 31 at 20:50
  • 6
    "NTP also has a facility for a "negative leap second", but so far, this has never been required in practice, and is not expected to be needed in foreseeable future." This is actually what OP was referring to. It is expected to be needed in about five years; see: nature.com/articles/…
    – Oliphaunt
    Commented Apr 1 at 18:24
  • 1
    With a negative leap second, a timestamp that would have received value N without it will instead have value (N+1), so a negative leap second actually adjusts the clock forward, not backward. For "general purpose" computing, shifting the clock forward is usually easier than shifting it backward, so this should not be a big deal. For users of high-precision timekeeping, it is obviously just as much of an issue as regular leap seconds, and the same tools (like the use of TAI instead of UTC) should work to deal with it.
    – telcoM
    Commented Apr 2 at 2:59
  • 1
    @Thomas Guyot-Sionnest That is unfortunately only true if you are running NTP version 4.2.4 or older, or version 4.2.8p3-RC1 or newer, and are running it with the -x option. See the discussion in NTP bug 2745. If you are not using the -x option on any version, or if you are using NTP version newer than 4.2.4 but older than 4.2.8p3-RC1, a leap second will cause a step change.
    – telcoM
    Commented May 10 at 7:59
13

Lots of good answers here already, but I didn't see anyone mention a far more prosaic fact: this kind of second-shifting happens all the time already on your computer. Sometimes the shift is even more than just one second - and it's OK.

You see, the hardware clocks in your regular PC/server are notoriously imprecise. Without frequent clock synchronization they will drift by entire minutes per year. Source: I've seen/had to deal with it on more than one occasion, both on home PCs and servers.

I don't know if it's too difficult/expensive to make a clock that would be stable all year round, but my guess is that NTP is such a simple and efficient solution that there's just no demand for more precise hardware clocks. At least not under normal circumstances.

And - as you might have noticed - all of our computers are completely fine with this. In fact, it's no different than when you change the clock on your computer manually. By the way - this is another operation which can mess with the clock wildly, yet never causes any problems.

The reality is that most programs just don't care about the clock moving around. If they need a stable monotonic clock (like a computer game calculating what to draw in a frame) there are separate functions in the OS for that. They don't return a timestamp but rather "nanoseconds since the computer booted" or something like that. Perfect for measuring elapsed time between two events.
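
There is no direct shell command for clock_gettime(CLOCK_MONOTONIC), but /proc/uptime gives a rough shell-level feel for the difference (a sketch; run the commands before and after stepping the clock if you want to see it):

$ date +%s                      # wall clock: seconds since the epoch, jumps if the clock is set
$ cut -d' ' -f1 /proc/uptime    # seconds since boot: keeps counting smoothly regardless

Setting the system clock changes what the first command prints, but not the second; programs that measure durations use the monotonic clock for the same reason.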

Timestamps are usually used for something like logging or other places where precision isn't that big of a deal. It's very rare that you need an actual millisecond-precise timestamp all the time.

Added: Just remembered another scenario with wild clock jumps: when your computer goes into sleep or hibernate mode and wakes up afterwards. Now this actually is an operation that some applications are unable to cope with, but even then it's fairly rare. And obviously the OS is fine.

Added 2: TL;DR - the support for leap seconds at OS/application level is irrelevant. It's support for clock synchronization that matters. As long as the computer can successfully synchronize its clock to some external time source, it will pick up the leap seconds as a matter of course and won't even notice anything out of the ordinary. And clock synchronization today is bog standard on all devices and enabled by default.

Added 3: I'm not saying that this will NEVER cause any problems. Obviously this can be an issue under the right circumstances. It's just that in practice it's very rare.

  • 2
    @Jasen - True. Though it wouldn't be any different from the OS/app point of view - just a periodic sync to an external clock.
    – Vilx-
    Commented Mar 31 at 23:20
  • 2
    The premise of this answer is false. Modern systems do not jump the clock to correct for clock drift. They adjust the clock rate to compensate for drift so that the adjustment to the actual time is smooth and monotone. The only way jumps in either direction occur is as a result of leap seconds, if whoever set up the system opted to handle them that way, and these do break a lot of things. As such, a number of people prefer to have the clock drift handling smooth out leap seconds over a matter of minutes or hours or even a whole day, rather than jumping the clock. Commented Apr 1 at 0:34
  • 1
    @R..GitHubSTOPHELPINGICE - I'm pretty sure that whenever I've synchronized the clock manually (that is, running the appropriate command-line command in Linux or clicking the "Sync now" button on Windows) the jump has been instantaneous - and without the system crashing, burning or even hiccupping. But perhaps the background processes that keep the clock continuously synchronized do things differently. That I do not know.
    – Vilx-
    Commented Apr 1 at 11:37
  • 2
    @JohnBollinger presumably you haven't dealt with transactional databases Commented Apr 1 at 15:22
  • 2
    @aviro There's no reason to expect the monotonic clock to be more accurate generally (although it's sometimes more precise, for example if it's driven by a CPU cycle counter), but there is reason to expect it to be monotonic (so you can reliably assume that if you ask for the time twice, it will be no lower the second time), and for it to be unaffected by adjustments to the system's real-time clock (which may make it more accurate when used to measure lengths of time whilst the real-time clock is being adjusted).
    – James_pic
    Commented Apr 3 at 11:14
6

I'm not adding anything new to the other answers, but I'll try to be more concise and clear.

Timezone

The computer's clock holds the time (in seconds) since the Epoch (1970-01-01 00:00:00 UTC). The system itself, through glibc functions such as strftime(3), knows how to convert it to a human-readable time in a specific timezone.

For instance, in the US/Pacific timezone, you can check when the offset changes in year 2024 using the zdump command:

$ zdump -V -c 2024,2025 US/Pacific 
US/Pacific  Sun Mar 10 09:59:59 2024 UT = Sun Mar 10 01:59:59 2024 PST isdst=0 gmtoff=-28800
US/Pacific  Sun Mar 10 10:00:00 2024 UT = Sun Mar 10 03:00:00 2024 PDT isdst=1 gmtoff=-25200
US/Pacific  Sun Nov  3 08:59:59 2024 UT = Sun Nov  3 01:59:59 2024 PDT isdst=1 gmtoff=-25200
US/Pacific  Sun Nov  3 09:00:00 2024 UT = Sun Nov  3 01:00:00 2024 PST isdst=0 gmtoff=-28800

So, if you want to convert this to seconds since the Epoch, you can use the following command:

$ date --date='Sun Mar 10 01:59:59 PST 2024' +%s
1710064799

So at this specific time in PST, 1710064799 seconds have passed since the Epoch (1970-01-01 00:00:00 UTC).

Now, if you check the US/Pacific time at this exact second, you'll see:

$ TZ=US/Pacific date --date='@1710064799'
Sun Mar 10 01:59:59 PST 2024

This is still PST (Pacific Standard Time). But if you add just one second:

$ TZ=US/Pacific date --date='@1710064800'
Sun Mar 10 03:00:00 PDT 2024

You can see it "jumped" one hour, from 02:00 to 03:00, and the time zone switched from PST (Pacific Standard Time) to PDT (Pacific Daylight Time). The seconds in the hardware clock are still running the same way, only the human representation (that depends on your specific time zone) changed.

Leap second

How does your system get the right time in the first place? It uses NTP (the Network Time Protocol) to poll the correct time from time servers (often your router). It also uses different algorithms to sync the time when there's a difference between the local clock and the time polled from the time server. It's then the job of the NTP server to add or remove the leap second, and there are different approaches for that.
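
If you are curious whether your own machine currently has a leap second announcement pending, you can usually ask the local time daemon. Which command applies depends on which daemon is installed, and the exact output wording varies between versions:

$ chronyc tracking | grep -i leap     # chrony: "Leap status" line
$ ntpq -c rv | grep -o 'leap=..'      # classic ntpd: leap indicator bits
$ timedatectl timesync-status         # systemd-timesyncd (newer systemd)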

On the server side, Cisco routers, for example, add the leap second to, or delete it from, the last second of the month:

vl-7500-6#show clock
23:59:59.123 UTC Sun Dec 31 2006
vl-7500-6#show clock
23:59:59.627 UTC Sun Dec 31 2006
<< 59th second occurring twice
vl-7500-6#show clock
23:59:59.131 UTC Sun Dec 31 2006    
vl-7500-6#show clock
23:59:59.627 UTC Sun Dec 31 2006

Google uses a leap smear approach, where during the day around the leap second every second runs a bit slower or faster, until the added/removed leap second has been completely absorbed by the end of those 24 hours.

In this example, we will suppose there is a leap second at the end of December 2022, although the actual schedule has not yet been announced.

The smear period starts at 2022-12-31 12:00:00 UTC and continues through 2023-01-01 12:00:00 UTC. Before and after this period, smeared clocks and time service agree with clocks that apply leap seconds.

During the smear, clocks run slightly slower than usual. Each second of time in the smeared timescale is about 11.6 μs longer than an SI second as realized in Terrestrial Time.

[...]

Over the 86,401 SI seconds of the smear, the stretch in the 86,400 indicated seconds adds up to the one additional SI second required by the leap.
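
The 11.6 μs figure is just the one extra second spread evenly over the day's 86,400 indicated seconds, which is easy to check from a shell:

$ awk 'BEGIN { printf "%.1f microseconds per smeared second\n", 1e6 / 86400 }'
11.6 microseconds per smeared second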

5

Time zones, daylight saving time, and leap seconds are only a human representation of time. Computers generally track time as elapsed time since a fixed reference such as the Unix epoch; this allows monotonic clocks to be made available for processes which don’t want to deal with such variations.

Changing to and from daylight saving time implies a change in the current timezone, so even human-readable timestamps are fine (as long as the timezone is represented in the output).

Things get more complex with human-generated timestamps, e.g. for job scheduling. Administrators generally schedule jobs based on local time, not a fixed timezone. There are a few general rules, for administrators, which help avoid problems:

  • avoid scheduling jobs during times that may be repeated or skipped (in Europe, 2-3am)
  • make jobs idempotent, and/or aware of their previous execution
  • if a job runs too soon after its previous run, skip it (e.g. a daily job which ran less than 23h ago; see the sketch after this list)
  • make jobs take care of all the work accumulated since the previous run (if an hourly job runs at 1:15, and the 2:15 run is skipped because 2:15 doesn’t happen, then the 3:15 job must use 1:15 in the previous timezone as its reference)
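
As a sketch of the "skip if it ran too recently" rule, a cron job can keep its own timestamp file and bail out early; the path and threshold below are hypothetical, shown for a daily job:

#!/bin/sh
# guard for a "daily" job: skip if the previous run was less than 23 hours ago
stamp=/var/tmp/dailyjob.stamp             # hypothetical state file
now=$(date +%s)                           # epoch seconds, unaffected by DST changes
last=$(cat "$stamp" 2>/dev/null || echo 0)
if [ $(( now - last )) -lt $(( 23 * 3600 )) ]; then
    exit 0                                # previous run too recent (e.g. repeated DST hour)
fi
echo "$now" > "$stamp"
# ... the actual work of the job goes here ...

Because the comparison is done in epoch seconds, it does not matter how the local wall clock jumped around.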

Another approach is to skew time: starting far enough in advance, time is slowed down or accelerated, in such a way that any given instant (in the desired frame of reference) occurs once and never repeats, but at some point the skewed time is aligned with the external reference again. Google does this for leap seconds (see “leap smearing” in telcoM’s answer), but I’ve seen it done more generally, if only conceptually. When skipping ahead (moving to daylight savings time), if a scheduler handles jobs defined in local time, and sees that from one minute to the next, an hour was lost, it can run all the jobs that would have been started during that hour.

  • And I'm not sure I understand the last part about the skewed time, time being slowed down or accelerated and being aligned with some "external reference". What kind of reference? How is it being aligned? I mean, I can understand if we're talking about leap month, for instance, but how can you skew it for a second? Or for an hour (especially if the hour is "compensated" every 6 months anyway)?
    – aviro
    Commented Mar 31 at 8:17
  • The only practical example I could think of is a leap year, where if, for instance, you want to run a job once a year, you'll run it every 365 days + 24/4 hours (to add the extra day every 4 years). Is that what you mean?
    – aviro
    Commented Mar 31 at 8:34
  • 2
    @MarcLeBihan, if you're talking about seconds since the epoch as POSIX defines it, then no, see unix.stackexchange.com/a/758951/170373 and esp. the sources linked there.
    – ilkkachu
    Commented Mar 31 at 10:19
  • 1
    Thanks! But if the formula tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 + (tm_year-70)*31536000 + ((tm_year-69)/4)*86400 - ((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400 mentioned in the post cannot become false, how will you lose a second? Commented Mar 31 at 10:25
  • 1
    @ilkkachu then the problem would only be to switch computers from normal working mode to a mode where they'll have to consider one specific minute as being longer in "physical time" than the other ones? What's the Linux command to provoke this behavior (making a minute last 61 seconds)? I have a watch, I would enjoy controlling this. Commented Mar 31 at 18:53
4

Local time is purely a matter of the presentation layer. It is not part of the data model of Unix time, or of time on any remotely modern system (even modern Windows supports keeping the hardware clock in UTC). This means that events are ordered globally with respect to each other (except for certain modes of handling leap seconds, which is why they are hard and controversial), and that each user (or even each process) can have its own idea of how it wants time displayed. Times are comparable between systems with different policies for how they show time at the presentation layer, whether the systems are located in different time zones or just have users who prefer to work in different time zones or to work with UTC.

  • 1
    That's the beauty of timestamps and duration. If 10 seconds have passed since the Unix epoch, and I wait 1 second, now 11 seconds have passed since the Unix epoch. No need to worry about offsets, where I am in the world, leap seconds and so on. 11 seconds is 11 seconds! I could show that 11 seconds as "5:00 PM UTC" or I could call it "Banana Time!!", it's all just a display format. *Well except for Special Relativity 😂 Commented Apr 1 at 14:48
0

The system mostly doesn't care, except sometimes applications do.

I work with systems that, due to testing, are often set to the wrong day, month, or year. Mostly it just works, except for obvious cases, for example:

A timer set to calendar instead of monotonic time can fire sooner or later or never depending on the time travel in exactly the way expected.

TLS fails because the server cert reads as expired or not-yet-valid.

Mostly things that check the date and time fail when date and time are wrong, as expected.
