Re: [LEAPSECS] Stupid question from programmer: Why not just eliminate ALL UTC corrections ?

From: Markus Kuhn <Markus.Kuhn_at_cl.cam.ac.uk>
Date: Fri, 05 Aug 2005 10:52:15 +0100

Mark Calabretta wrote on 2005-08-05 00:03 UTC:
> On Thu 2005/08/04 08:18:55 +0200, Poul-Henning Kamp wrote
> in a message to: LEAPSECS_at_ROM.USNO.NAVY.MIL
>
> >Most if not all modern UNIXes already have the table of leapseconds.
>
> Without wishing to imply that the solution is so simple, your answer
> begs the question: what do you see as the principal difficulty
> having systems use TAI internally and maintain a table of leap seconds
> for conversion to/from civil time?

[... Rewind mailing list archive by four years ...]

If leap-second tables get out of sync, then the human-visible civil-time
representation of system-clock values will get out of sync. At present,
most designers who think carefully about the issue do not consider this
a worthwhile and acceptable risk, given that most users care far more
that the system knows local civil time accurately than they care about
all seconds being exactly equally long, without exception.
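
To make that failure mode concrete, here is a minimal C sketch. The
TAI-based clock reading is illustrative; 32 s and 33 s are the actual
TAI-UTC offsets just before and after 2006-01-01:

    #include <stdio.h>

    /* Hypothetical scenario: two hosts share a TAI-based clock value,
     * but host B missed the latest leap-second announcement. */
    #define LEAPS_HOST_A 33   /* TAI-UTC from 2006-01-01 onwards */
    #define LEAPS_HOST_B 32   /* stale table, still the pre-2006 value */

    int main(void)
    {
        long long tai = 1136073633LL;  /* some TAI-based clock reading */

        /* Each host derives civil time by subtracting its own idea of
         * the accumulated leap seconds. */
        long long utc_a = tai - LEAPS_HOST_A;
        long long utc_b = tai - LEAPS_HOST_B;

        printf("host A: %lld  host B: %lld  disagreement: %lld s\n",
               utc_a, utc_b, utc_b - utc_a);
        return 0;
    }

Both hosts are looking at the same instant, yet they show their users
civil times one second apart.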

The ultimate question is whether you want the mapping between your
system timescale and UTC (and, via it, local time) to be deterministic
and predictable or not. If your choice is the former, then you should
use UTC instead of TAI.

Some people think exclusively about the deterministic predictability of
the clock itself, and forget about the practical importance of the
deterministic predictability of the mappings between different
representations of time.

Example: We both schedule an event for 2006 January 1 00:00:00 UTC. If
I convert this schedule from UTC to my system-internal TAI before the
announcement of the next leap second at the end of this year, but you
convert it after you receive this announcement, then we will schedule
this event 1 second apart, even though we thought we had an
unambiguous agreement.
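
In code, the disagreement looks like this (a sketch; tai_from_utc() is
a hypothetical conversion helper, and 32/33 are the actual TAI-UTC
offsets before and after the 2005-12-31 leap second):

    #include <stdio.h>

    /* Hypothetical helper: TAI = UTC + accumulated leap seconds, where
     * 'leaps' is whatever the converting host's table says at the
     * moment the conversion is performed. */
    static long long tai_from_utc(long long utc, int leaps)
    {
        return utc + leaps;
    }

    int main(void)
    {
        long long event_utc = 1136073600LL; /* 2006-01-01 00:00:00 UTC */

        /* I convert before the leap-second bulletin, you after it. */
        long long mine  = tai_from_utc(event_utc, 32);
        long long yours = tai_from_utc(event_utc, 33);

        printf("our schedules differ by %lld s\n", yours - mine); /* 1 */
        return 0;
    }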

By using UTC as the system time scale, we avoid the above problem.

In many applications (other than geophysical monitoring and
navigation), it is far more important that things happen simultaneously
than whether they happen exactly 123456789 or 123456788 SI seconds from
*now*.

There is also the problem of interoperability with systems that have no
good access to TAI.

For >99% of all applications, using TAI as the time_t basis is the
wrong trade-off in distributed computing today. In addition, it goes
against the POSIX standard, which treats time_t as an integer encoding
of a UTC clock value rather than as a simple count of seconds (although
the unfortunate historic term "Seconds Since the Epoch" suggests
otherwise to anyone who hasn't actually read its exact definition).
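
POSIX even spells that encoding out as a pure arithmetic formula, with
no leap-second term anywhere. Rendered as a self-contained C function
(the expression is the one given in the "Seconds Since the Epoch"
section of POSIX.1-2001):

    #include <stdio.h>
    #include <time.h>

    /* The POSIX "Seconds Since the Epoch" formula: an arithmetic
     * encoding of a broken-down UTC time.  tm_year counts from 1900,
     * tm_yday from 0.  Note that leap seconds appear nowhere. */
    static long long posix_seconds(const struct tm *t)
    {
        return t->tm_sec + t->tm_min * 60LL + t->tm_hour * 3600LL
             + t->tm_yday * 86400LL
             + (t->tm_year - 70) * 31536000LL
             + ((t->tm_year - 69) / 4) * 86400LL
             - ((t->tm_year - 1) / 100) * 86400LL
             + ((t->tm_year + 299) / 400) * 86400LL;
    }

    int main(void)
    {
        /* 2006-01-01 00:00:00 UTC: tm_year = 106, tm_yday = 0 */
        struct tm t = { .tm_year = 106 };
        printf("%lld\n", posix_seconds(&t));  /* 1136073600 */
        return 0;
    }

Because the formula is a bijection between calendar fields and the
integer, every host that knows UTC agrees on the encoding, with no
table lookup involved.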

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain