Re: [LEAPSECS] Introduction of long term scheduling

From: Steve Allen <sla_at_UCOLICK.ORG>
Date: Mon, 8 Jan 2007 22:57:23 -0800

On Mon 2007-01-08T01:54:56 +0000, Zefram hath writ:
> Possibly TT could also be used in some form, for interval calculations
> in the pre-caesium age.

Please do not consider the use of TT as a driver for the development
of any sort of commonplace API. No records in the far past were made
using TT for their timestamps, and nobody will ever use TT except when
comparing with ancient eclipse records.

I agree that system time should increment in as uniform a fashion as
possible, but, amplifying reasons recently listed here, I disagree
that anyone should specify that the operating system uses TAI. TAI is
TAI, and nothing else is TAI. Note that even within the history of
TAI itself there have been serious discussions of, and changes to, the
scale unit of TAI in order to incorporate better notions of the
underlying physics.

GPS is not (TAI - 19), and UTC is not (TAI - 33). Millions of
computers claiming to run TAI as their system time, even if they have
rice-grain-sized cesium resonators as their motherboard clocks, will
not make those statements true. It will simply obscure the concept of
TAI even more than it is misunderstood now.
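
For concreteness, here is a Python sketch of the nominal, on-paper
relationships as of early 2007. The constants are the published paper
offsets; the point above is precisely that physical clocks only
approximate these identities (real GPS time is steered, and differs
from TAI - 19 at the tens-of-nanoseconds level).

    TAI_MINUS_GPS = 19   # seconds, fixed since the GPS epoch (1980-01-06)
    TAI_MINUS_UTC = 33   # seconds, in effect since 2006-01-01

    def nominal_gps_to_utc(gps_seconds):
        # Paper conversion only: shift GPS time to TAI, then TAI to UTC.
        return gps_seconds + TAI_MINUS_GPS - TAI_MINUS_UTC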

For simplicity, sure, let earthbound systems try to track TAI. For
simple systems, let the algorithm assume that the tolerances are loose
enough that it is safe to make time conversions as if the timestamps
were TAI. But it is probably safer to come up with a name for "the
timescale my system clock keeps that I wish were TAI but I know it
really is not".
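
As a minimal sketch of what such an honestly-named timescale might
look like in code (every name here is invented for illustration, not
any existing API):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class HonestTimestamp:
        seconds: float    # local clock reading since some agreed epoch
        scale: str        # e.g. "TAI-like(this-host)", never just "TAI"
        max_error: float  # bound, in seconds, on |local clock - real TAI|

        def tai_interval(self):
            # The only honest answer: a range of possible TAI readings.
            return (self.seconds - self.max_error,
                    self.seconds + self.max_error)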

Don't forget that UTC and TAI are coordinate times which are difficult
to define off the surface of the earth. For chronometers outside of
geostationary orbit the nonlinear deviations between the rate of a
local oscillator and an earthbound clock climb into the realm of
perceptibility. Demonstrating that the proper time of a chronometer
is notably different from the coordinate time of TAI is now child's
play (or at least it is if you are one of Tom Van Baak's kids; see
http://www.leapsecond.com/great2005/ ). There seems little point in
claiming to use a uniform time scale tied to a reference frame whose
rate of proper time differs notably from your own.
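
A back-of-the-envelope sketch, keeping only the leading 1/c^2 terms of
the standard weak-field formula (the constants are the usual published
values):

    GM = 3.986004418e14  # Earth's gravitational parameter, m^3 s^-2
    C = 299792458.0      # speed of light, m s^-1
    W0 = 6.2636856e7     # potential (gravity plus spin) on the geoid, m^2 s^-2

    def rate_vs_geoid(r):
        # Fractional rate of an ideal clock in a circular orbit of
        # radius r relative to an ideal clock on the rotating geoid.
        grav = (W0 - GM / r) / C**2   # higher in the potential: runs fast
        vel = -(GM / r) / (2 * C**2)  # circular-orbit speed: v^2 = GM/r
        return grav + vel

    for name, r in (("GPS orbit", 26561750.0), ("geostationary", 42164000.0)):
        print(name, rate_vs_geoid(r) * 86400 * 1e6, "microseconds/day")
    # roughly +38.6 and +46.6 microseconds per day, respectively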

Right now most general purpose computing systems with clocks are on
the surface of the earth, so counting UTC as subdivisions of days
makes sense. Off the surface of the earth it isn't clear why it's
relevant to demand that the operating system time scale should result
in formatted output that resembles how things were done with the
diurnal rhythm of that rock over there.

Right now NTP can keep systems synchronized to within a few
microseconds, but
no two clocks ever agree. Even if we stick to discussing systems on
earth, what happens when the operations of distributed systems demand
an even tighter level of sync than NTP can provide?
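
For reference, the standard NTP on-wire arithmetic is just this; the
irreducible error comes from asymmetry between the outbound and return
network paths, which these four timestamps cannot distinguish:

    def ntp_offset_delay(t1, t2, t3, t4):
        # t1: client send, t2: server receive, t3: server send,
        # t4: client receive, all in seconds since a common epoch.
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated clock offset
        delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
        return offset, delay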

It is relatively easy to calculate when the lack of sync between clock
and sun will become a problem if leap seconds are abandoned: around
600 years.
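
A rough check of that figure, under assumptions of my own choosing:
the conventional quadratic model in which Delta T grows as roughly
31 s/century^2 from an epoch near 1820, and a threshold of half an
hour as the point where clock and sun have visibly parted company:

    def years_until_problem(start_year=2007, threshold=1800.0):
        def delta_t(year):
            # cumulative divergence of solar time from uniform time, s
            centuries = (year - 1820) / 100.0
            return 31.0 * centuries ** 2
        base = delta_t(start_year)
        year = start_year
        while delta_t(year) - base < threshold:
            year += 1
        return year - start_year

    print(years_until_problem())  # about 600 years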

What if general purpose systems do not have a means of acknowledging
and dealing with the fact that their system chronometer has deviated
from the agreeable external time, or if there is no agreeable external
time?

I don't think that handling leap seconds is the biggest issue that the
evolution of general purpose computer timekeeping is going to face,
and I think that not facing the other issues soon will result in
problems well before 600 years have elapsed.

--
Steve Allen                 <sla_at_ucolick.org>                WGS-84 (GPS)
UCO/Lick Observatory        Natural Sciences II, Room 165    Lat  +36.99858
University of California    Voice: +1 831 459 3046           Lng -122.06014
Santa Cruz, CA 95064        http://www.ucolick.org/~sla/     Hgt +250 m
Received on Mon Jan 08 2007 - 22:57:59 PST
