Re: [LEAPSECS] Longer leap second notice, was: Where the responsibility lies

From: Rob Seaman <seaman_at_NOAO.EDU>
Date: Tue, 3 Jan 2006 17:17:24 -0700

On Jan 3, 2006, at 4:22 PM, Poul-Henning Kamp wrote:

> In message <43BAFFEA.9010006_at_edavies.nildram.co.uk>, Ed Davies writes:
>> Poul-Henning Kamp wrote:
>>>> If we can increase the tolerance to 10 sec, IERS can give us the
>>>> leap seconds with 20 years' notice and only the minority of computers
>>>> that survive longer than that would need to update the
>>>> factory-installed table of leap seconds.
>>
>> PHK can reply for himself here but, for the record, I think RS's
>> reading of what he said is different from mine. My assumption is
>> that PHK is discussing the idea that leaps should be scheduled many
>> years in advance. They should continue to be single second leaps -
>> just many more would be in the schedule pipeline at any given
>> point.
>>
>> Obviously, the leap seconds would be scheduled on the best available
>> estimates but as we don't know the future rotation of the Earth this
>> would necessarily increase the tolerance. In theory DUT1 would be
>> unbounded (as it sort of is already) but PHK is assuming that there'd
>> be some practical likely upper bound such as 10 seconds.
>>
>> Am I right in this reading?
>
> yes.

I'm willing to entertain any suggestion that preserves mean solar
time as the basis of civil time. One could view this notion as a
specific scheduling algorithm for leap seconds. My own ancient
proposal (http://iraf.noao.edu/~seaman/leap) was for a tweak to the
current algorithm that would minimize the excursions between UTC and
UT1. This suggestion is more than a tweak, of course, since it would
require increasing the 0.9 s limit. One could imagine variations,
however, with sliding predictive windows to balance the maximum
excursion against the look-ahead time. I am skeptical, though, that
any advantage would be realized over the current simple leap second
policy.
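
Just to make the scheduling notion concrete, here is a rough sketch in
Python (purely illustrative - the constant excess length of day, the
0.5 s threshold, and the 20 year horizon are all assumptions of mine,
not IERS numbers): walk a predicted UT1-UTC curve across the look-ahead
horizon and pencil in single-second leaps wherever the predicted
excursion would cross the threshold. The realized DUT1 would then
exceed the threshold by whatever the prediction error turns out to be,
which is presumably where a practical bound like 10 seconds comes from.

def predicted_dut1(years_ahead, excess_lod_ms=1.5):
    # Toy prediction of UT1 - UTC (seconds) before any scheduled leaps,
    # assuming the day runs a constant 1.5 ms long; a real schedule would
    # be driven by IERS Earth-orientation predictions.
    days = years_ahead * 365.25
    return -(excess_lod_ms / 1000.0) * days   # UT1 drifts behind a uniform scale

def schedule_leaps(horizon_years=20.0, threshold_s=0.5):
    # Walk candidate insertion points (month ends) across the whole horizon
    # and pencil in a single-second leap whenever the *predicted* excursion
    # would exceed the threshold.  Leaps stay single seconds; only the
    # schedule is published far in advance.
    schedule = []
    inserted = 0                       # cumulative leap seconds already scheduled
    for m in range(1, int(horizon_years * 12) + 1):
        t = m / 12.0
        excursion = predicted_dut1(t) + inserted
        if excursion < -threshold_s:
            inserted += 1              # positive leap raises UT1 - UTC by one second
            schedule.append((t, +1))
        elif excursion > threshold_s:
            inserted -= 1              # negative leap (never yet needed in practice)
            schedule.append((t, -1))
    return schedule

if __name__ == "__main__":
    for when, sign in schedule_leaps():
        print(f"{sign:+d} s leap pre-scheduled about {when:.1f} years out")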

I continue to find the focus on general-purpose computing
infrastructure to be unpersuasive. If we can convince hardware and
software vendors to pay enough attention to timing requirements to
implement such a strategy, we can convince them to implement a more
complete time handling infrastructure. This seems like the real goal
- one worthy of a concerted effort. Instead of trying to escape from
the entanglements of this particular system requirement, why don't we
focus on satisfying it in a forthright fashion?

There is also the - slight - issue that we aren't worried only about
"computers". There is a heck of a lot of interesting infrastructure
that should be included in the decision-making envelope.

In general, the strategy you describe could also be regarded as an
elaboration of the waveform we are attempting to model with our
clocks: not a constant cadence like tick-tick-tick-tick, but rather
tick-tick-tock-tick. I do think there might be some interesting
hay to be made by generalizing our definition of a clock to include
quasi-periodic phenomena more complicated than a once-per-second
delta function. If nothing else, it would give us some reason to
explore the Fourier domain.
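
As a toy illustration of that last point - and nothing more; the
sampling rate, insertion interval, and offset below are arbitrary
choices of mine - one can compare the spectrum of a strictly periodic
once-per-second tick train with one in which an extra tick is slipped
in every so often. The slipped ticks show up as a new spectral line at
the insertion frequency:

import numpy as np

fs = 100.0                  # samples per second for the toy waveform
duration = 600.0            # ten minutes of "clock"
n = int(fs * duration)

def tick_train(extra_every=None, offset=30.0):
    # Unit impulses once per second; optionally slip in an extra impulse
    # every `extra_every` seconds as a crude stand-in for a scheduled leap.
    x = np.zeros(n)
    ticks = list(np.arange(0.0, duration, 1.0))
    if extra_every is not None:
        ticks += list(np.arange(offset, duration, extra_every))
    for tk in ticks:
        x[int(round(tk * fs))] += 1.0
    return x

for label, x in [("uniform", tick_train()),
                 ("extra tick each minute", tick_train(60.0))]:
    spectrum = np.abs(np.fft.rfft(x))
    one_hz = int(round(1.0 * n / fs))            # DFT bin at exactly 1 Hz
    per_minute = int(round(n / (60.0 * fs)))     # DFT bin at exactly 1/60 Hz
    print(f"{label}: |X(1 Hz)| = {spectrum[one_hz]:.0f}, "
          f"|X(1/60 Hz)| = {spectrum[per_minute]:.0f}")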

Rob Seaman
National Optical Astronomy Observatory
Received on Tue Jan 03 2006 - 16:18:43 PST
