Re: [LEAPSECS] Introduction of long term scheduling

From: Zefram <zefram_at_fysh.org>
Date: Tue, 9 Jan 2007 09:56:09 +0000

Steve Allen wrote:
> But it is probably safer to come up
>with a name for "the timescale my system clock keeps that I wish were
>TAI but I know it really is not".

True. I can record timestamps in TAI(bowl.fysh.org), and by logging
all its NTP activity I could retrospectively do a more precise
TAI(bowl.fysh.org)<->TAI conversion than was possible in real time.
To be rigorous we need to reify an awful lot more timescales than we
do currently.

Another aspect of rigour that I'd like to see is uncertainty bounds
on timestamps. With NTP, as things stand now, the system clock does
carry an error bound, which can be extracted using ntp_adjtime().
(Btw, another nastiness of the ntp_*() interface is that ntp_adjtime()
doesn't return the current clock reading on all systems. On affected
OSes it is impossible to atomically acquire a clock reading together
with error bounds.) If I want a one-off TAI reading in real time, I can
take the TAI(bowl.fysh.org) reading along with the error bound, and then
instead of claiming an exact TAI instant I merely claim that the true
TAI time is within the identified range. In that sense it *is* possible
to get true TAI in real time, just not with the highest precision.
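The interval claim above can be sketched roughly as follows. This is a
hypothetical illustration, not a real API: the names TaiReading and
tai_interval are made up, and the numbers are invented; maxerror stands
in for the bound that ntp_adjtime() reports.

```python
# Hypothetical sketch: a local clock reading paired with its error bound
# yields an interval claim about true TAI, not an exact instant.
from dataclasses import dataclass

@dataclass(frozen=True)
class TaiReading:
    """A reading of a local clock such as TAI(bowl.fysh.org), in seconds."""
    reading: float    # what the local clock said
    maxerror: float   # bound on |local reading - true TAI|, in seconds

    def tai_interval(self):
        """The range within which true TAI is claimed to lie."""
        return (self.reading - self.maxerror, self.reading + self.maxerror)

# A one-off real-time reading: we don't claim an exact TAI instant,
# only that true TAI lies somewhere in this range.
r = TaiReading(reading=1_168_336_569.000, maxerror=0.005)
lo, hi = r.tai_interval()
print(lo, hi)
```

So the claim is weaker but honest: true TAI is somewhere in [lo, hi].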

If I have a series of timestamps from the same machine, then for comparing
them I don't want individual error bounds on them: the ranges would
overlap and I'd be unable to sort them properly. This is another reason
to reify TAI(bowl.fysh.org): the errors in the TAI readings are highly
correlated, and to know that I can sort the timestamps naively I need
to know about that correlation, namely that they all came from the same clock.
Even in retrospect, when I can do more precise conversions to true TAI, I
need to maintain the correlation, because the intervals between timestamps
may still be smaller than the uncertainty with which I convert to TAI.
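To make the correlation point concrete, here is a made-up sketch (none of
these names are a real API): two readings from the same clock can be
ordered by their raw values even when their TAI uncertainty intervals
overlap, whereas across different clocks the order is only decidable when
the intervals are disjoint.

```python
# Hypothetical illustration of correlated vs uncorrelated timestamp errors.

def same_clock_before(a, b):
    # Errors are fully correlated, so the raw readings sort reliably.
    return a["reading"] < b["reading"]

def cross_clock_before(a, b):
    # Errors are uncorrelated: decidable only if the intervals are disjoint.
    if a["reading"] + a["maxerror"] < b["reading"] - b["maxerror"]:
        return True
    if b["reading"] + b["maxerror"] < a["reading"] - a["maxerror"]:
        return False
    return None  # order unknowable from these readings alone

t1 = {"reading": 100.000, "maxerror": 0.010}
t2 = {"reading": 100.002, "maxerror": 0.010}  # 2 ms later, same clock

print(same_clock_before(t1, t2))   # True: the 2 ms gap is sortable
print(cross_clock_before(t1, t2))  # None: 10 ms bounds swamp the 2 ms gap
```

The same 2 ms interval that sorts trivially within one clock is
unresolvable between two clocks whose individual bounds are 10 ms.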

>(or at least it is if you are one of Tom Van Baak's kids. See
>http://www.leapsecond.com/great2005/ )

Cool. I'd have loved such toys when I was that age. My equivalent was
that I got to experiment with a HeNe laser, as my father is a physicist.
Now I carry a diode laser in my pocket. When TVB's children grow up,
they'll probably carry atomic watches.

>There seems little point in claiming to use a uniform time scale for a
>reference frame whose rate of proper time is notably variable from
>your own.

Hmm. Seems to me there's use in it if you do a lot of work relating to
that reference frame or if you exchange timestamps with other parties
who use that reference frame. Just need to keep it in its conceptual
place: don't assume that it's a suitable timescale for measuring local
interval time. Another reason to reify a local timescale.

> what happens when the operations of distributed systems demand
>an even tighter level of sync than NTP can provide?

Putting on my futurist hat, I predict the migration of time
synchronisation into the network hardware. Routers at each end of a
fibre-optic cable could do pretty damn tight synchronisation at the
data-link layer, aided by the strong knowledge that the link is the
same length in both directions. Do this hop by hop to achieve networked
Einstein synchronisation. (And here come another few thousand timescales
for us to process.)
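The per-hop exchange would presumably rest on the same two-way transfer
arithmetic that NTP and PTP already use; the symmetric-link assumption is
exactly what lets the offset fall out cleanly. A sketch with invented
numbers (the timestamp roles t1..t4 follow the usual two-way convention):

```python
# Two-way time transfer, the basis of networked Einstein synchronisation:
# t1 = request sent (local clock), t2 = request received (remote clock),
# t3 = reply sent (remote clock),  t4 = reply received (local clock).
# Assumption: the link delay is the same in both directions.

def offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2  # remote clock minus local clock
    delay = (t4 - t1) - (t3 - t2)         # round-trip time on the wire
    return offset, delay

# Synthetic exchange: remote clock runs 5 units ahead of local,
# one-way link delay is 2 units each way, 1 unit of remote processing.
t1 = 0
t2 = t1 + 2 + 5  # +delay, +offset
t3 = t2 + 1
t4 = t3 - 5 + 2  # -offset, +delay

offset, delay = offset_and_delay(t1, t2, t3, t4)
print(offset, delay)  # 5.0 4
```

With symmetric delays the recovered offset is exact; any asymmetry the
hardware can't rule out goes straight into the error bound instead.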

>What if general purpose systems do not have a means of acknowledging
>and dealing with the fact that their system chronometer has deviated
>from the agreeable external time,

This has long been the case. Pre-NTP Unix APIs have no way to admit
that the clock reading is bogus, and systems like Windows still have no
concept of clock accuracy. What happens is that we get duff timestamps,
and some applications go wrong. The number of visible faults that result
from this is surprisingly small, so far.

-zefram
Received on Tue Jan 09 2007 - 01:57:00 PST

This archive was generated by hypermail 2.3.0 : Sat Sep 04 2010 - 09:44:55 PDT