Re: [LEAPSECS] Precise time over time

From: Markus Kuhn <Markus.Kuhn_at_cl.cam.ac.uk>
Date: Wed, 10 Aug 2005 14:14:36 +0100

Poul-Henning Kamp wrote on 2005-08-09 09:57 UTC:
> And that is where normal people will get screwed over, when the
> systems they depend on but don't understand fails to work as
> advertised because the engineers who implemented them didn't
> get the interfaces right for details like leapseconds.
>
> For some reason, the acronym POSIX comes to mind :-)

You keep badmouthing POSIX and UTS, but you have failed completely so
far to give us even a single convincing and practical example where and
how things *really* break if one just feeds some form of smoothed UTC
(e.g., UTS) into a POSIX system. And I mean a commercial real world
example, and not some obscure improvised specialist equipment that was
obviously only set up to break unless the clock is 100% perfect 100% of
the time (and could therefore never survive out there in the real world,
with or without leap seconds).
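
For readers unfamiliar with the idea: a minimal sketch of what "feeding
smoothed UTC into a POSIX system" can mean. This is my own illustration,
not the normative UTS text; I assume a positive leap second at the end
of a UTC day, smeared linearly over the last 1000 seconds of that day,
so a POSIX-style clock stays monotonic and never sees second 23:59:60.
All names and the 1000 s interval are illustrative parameters.

```python
SMEAR_START = 85400.0   # smoothing begins 1000 UTS seconds before midnight
DAY_UTS     = 86400.0   # length of the day on the smoothed scale

def uts_elapsed(utc_elapsed: float, leap: int = 1) -> float:
    """Map true elapsed SI seconds since 00:00 UTC (0 .. 86400 + leap)
    to smoothed seconds (0 .. 86400), slewing linearly at day's end."""
    if utc_elapsed <= SMEAR_START:
        return utc_elapsed
    # The remaining (1000 + leap) real seconds are compressed into the
    # final 1000 smoothed seconds: the clock runs slightly slow, but it
    # remains monotonic and lands exactly on midnight.
    span = (DAY_UTS - SMEAR_START) + leap
    return SMEAR_START + (utc_elapsed - SMEAR_START) * (DAY_UTS - SMEAR_START) / span

print(uts_elapsed(85400.0))  # 85400.0 (smear begins, no offset yet)
print(uts_elapsed(86401.0))  # 86400.0 (leap second fully absorbed)
```

The point of such a mapping is precisely that the consumer on the POSIX
side needs no special leap-second interface: it only ever observes a
clock that is momentarily off frequency by about 0.1%, which is well
within the ordinary adjustment noise discussed below.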

Simply quoting your decades of experience in implementing real-time
control systems doesn't cut it. Many of us here have exactly that.

The most striking fault in your line of reasoning is that you live in a
dreamworld where clocks are never wrong. In the real world of embedded
systems, leap seconds are a negligible hazard compared to the everyday
occurrence of independent clocks temporarily losing synchronization. In
practice, such offsets easily pile up to several seconds per week.
Clocks have to be resynchronized somehow all the time. If you ever had
any involvement
with real-world real-time systems, then surely you are perfectly aware
that any of these must be designed to permit readjusting clocks without
taking the entire system offline.

What would your customers think if, in a system you designed, the
tiniest of problems with the clock setting required a full reboot, if
not a reinstallation from scratch? You discuss leap seconds as if there
are many systems like that out there, or will be soon. I don't believe
that.

The fundamental robustness requirement of being able to deal with
slightly out-of-sync clocks will not go away. It will not disappear when
the US delegates to the ITU win their "war against sundials". Today,
that requirement already ensures that leap seconds are, in practice, no
problem. They are just yet another tiny clock adjustment, like all the
others that we need to make routinely because of glitches, human error,
power cuts, battery failures, network failures, Microsoft's insistence
on still running the Windows PC CMOS clock in local time (!), etc.

More than a decade since the wide deployment of GPS and NTP synchronized
systems started, we can still comfortably count on the fingers of a
single hand the published incidents where a leap second was alleged to
have played a causal role (and none of these sound very convincing). I
take that as very good evidence for the fact that there is an important
independent robustness requirement that covers leap seconds pretty well,
too.

Until you manage to compile a convincing list of practical and realistic
failure patterns and reports that are *only* caused by the choice that
the POSIX world made with regard to handling leap seconds, I will
continue to consider your opinion to be quite a bit at the far end of
the scale.

I believe the vast majority of POSIX users are very happy with how it
encodes time. The only thing we slightly worry about is that existing
ABIs may keep time_t restricted to 32 bits until 2038.
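
The arithmetic behind that 2038 worry is standard and easy to check: a
signed 32-bit time_t overflows 2**31 - 1 seconds after the POSIX epoch.
A quick sketch (Python used here purely for illustration):

```python
from datetime import datetime, timezone

# Largest value representable in a signed 32-bit time_t.
MAX_TIME_T_32 = 2**31 - 1  # 2147483647 seconds since 1970-01-01T00:00:00Z

overflow = datetime.fromtimestamp(MAX_TIME_T_32, tz=timezone.utc)
print(overflow.isoformat())  # 2038-01-19T03:14:07+00:00
```

One second later, a 32-bit time_t wraps to a large negative value,
which naive code will interpret as a date in 1901.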

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain
Received on Wed Aug 10 2005 - 06:14:52 PDT
