Re: USNO leap seconds - a minimum-change approach

From: Rob Seaman <seaman_at_noao.edu>
Date: Tue, 22 Aug 2000 10:05:43 -0700 (MST)

John Cowan says:

> Do you mean "ephemeral" in the usual sense of "lasting for a short
> time only", or in the technical sense "pertaining to an ephemeris"?
> The context suggests the former.

Thanks - I hadn't noticed the pun. Just another example of the
pervasive presence of time in our language and patterns of thought.

As you say, I intended the first meaning of the word.

> Who, in all this wide universe, cares about *anything* except humans,
> as far as we know?

As far as we know, nobody. My point (if anybody could have missed it in
my hamfisted posts) was that this whole discussion is being phrased in
terms appropriate for the "precision time community" (the word community
seems to come up a lot in general).

Technical considerations appropriate for projects tossing around machine-
parsable information at prodigious rates may well lead the folks making
this decision (well, the folks who seem convinced that this is solely
their decision to make) in directions that are inappropriate for direct
human use.

> I meant of course "used only by people who only care about what time it
> is to an accuracy of a few minutes or so" as distinct from systems that
> will malfunction if they do not know civil time to the nearest second
> or better. Most of us don't expect accurate time from our microwaves.

Well - it would also be an incomplete inventory of human uses of time
to suggest that people only care about not missing the first couple of
minutes of "Survivor".

There is a very big difference between the random errors of the world's
ensemble of clocks and a bias of 2+ minutes per century. Those couple
of minutes are small for some uses, but immense for others.
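The size of that bias is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming an illustrative leap-second rate of about 1.2 seconds per year (roughly the historical rate around 2000):

```python
# Rough arithmetic relating the leap-second rate to clock drift.
# Assumed figure: UTC needed roughly 1.2 leap seconds per year circa 2000.
leap_rate_per_year = 1.2   # seconds of UT1-atomic drift per year (assumed)
days_per_year = 365.25

drift_per_century = leap_rate_per_year * 100                # seconds
excess_lod_ms = leap_rate_per_year / days_per_year * 1000   # ms per day

print(f"Drift per century: {drift_per_century:.0f} s "
      f"(~{drift_per_century / 60:.0f} minutes)")
print(f"Implied excess length-of-day: {excess_lod_ms:.1f} ms")
```

At that assumed rate the drift works out to two minutes per century, matching the "2+ minutes" figure above; a faster rate only makes the bias larger.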

> All technologies whatever are "interim", I think.

That's simply the point I was aiming for. The M&K paper makes much of
avoiding future technical problems, but the only two examples the authors
could come up with are pretty unpersuasive. I may be very ignorant about
GLONASS - perhaps there are parts of the world where every street corner
has folks using GLONASS receivers...but I don't think so.

If GLONASS and spread-spectrum applications are broken, they should be
fixed by the people responsible, rather than having the whole world work
around the difficulties with a gigantic kludge.

> There are no *fundamental* timing complications. We *could* adapt to
> a system in which the length of the hour varies with the date, as the
> whole of Western culture did until mechanical clocks were invented.
> All is a matter of convention.
>
> However, the notion that the number of seconds in a year can't be
> predicted a year in advance makes for *practical* timing complications.

Yup.

> As a practical matter, most timepieces (other than sundials) are set
> using local civil time, which is slaved to UTC, but then they tick
> minutes with exactly 60 seconds in them each, until the next time
> someone has a practical reason to reset the clock, like power failure
> or (if the device is primitive enough) DST transition.

Yes - "most timepieces". Almost by definition the precise timing
community doesn't concern itself with "most timepieces". The engineers
and project managers who need to worry about fundamental timing issues
have access to sophisticated timing resources beyond the microwaves in
their laboratory break rooms.

Engineers in most of the world select metric standards by default.
In the U.S., we have a choice between metric and SAE. The choice is
governed by many things, but it isn't one the engineers ignore lightly.
If an unsegmented time standard is required - why would an engineer
choose UTC?

Some subset of spread-spectrum applications is said to be having
difficulty handling leap seconds. The engineers and project managers
responsible for the design and implementation of those applications are
also responsible for selecting UTC as a time standard. Having done so,
it remains their responsibility to have built their products to match
specifications that presumably did not include going offline with each
leap second. They could have chosen some unsegmented time scale based
on TAI. They did not. They could have tested the behavior of their
systems upon encountering a leap second. They apparently did not test
well enough. This is where the responsibility lies.
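The unsegmented alternative can be sketched concretely. This is an illustrative fragment, not any system's actual code: time is kept internally on a TAI-like scale, and the leap-second table (a small hypothetical excerpt; real tables come from the IERS) is consulted only when converting for display as UTC.

```python
# Sketch: keep time internally on an unsegmented TAI-like scale and apply
# the TAI-UTC offset only at the display edge. The table below is a small
# excerpt for illustration; a real system would load the full IERS table.
LEAP_TABLE = [
    # (UTC date from which the offset applies, TAI - UTC in seconds)
    ("1997-07-01", 31),
    ("1999-01-01", 32),
]

def tai_minus_utc(utc_date: str) -> int:
    """Return TAI-UTC in seconds for dates covered by the excerpt above."""
    offset = 0
    for start, delta in LEAP_TABLE:
        if utc_date >= start:   # ISO dates compare correctly as strings
            offset = delta
    return offset

# Interval arithmetic done on the TAI-like scale never sees a leap second;
# only this conversion step needs the table.
print(tai_minus_utc("2000-08-22"))  # -> 32
```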

> A more sophisticated device could be set up to reckon 61 seconds
> in certain minutes, provided those minutes were predictable in advance.
> Even a wall clock or a watch could be programmed to do that at small cost.
> But do you really foresee a future in which every wall clock and watch is
> wired so that it can download the leap seconds from the Internet? That
> is just too techno-optimistic for me to swallow.
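The device the quote describes takes very little machinery. A hypothetical sketch (the schedule set and function names are illustrative, not any product's API), assuming the leap minutes were published in advance:

```python
# Hypothetical sketch of a clock that reckons 61 seconds in scheduled
# minutes. The schedule is a set of (year, month, day, hour, minute)
# tuples announced in advance, here seeded with the minute that carried
# the leap second of 1998-12-31 23:59:60 UTC.
LEAP_MINUTES = {(1998, 12, 31, 23, 59)}

def seconds_in_minute(year, month, day, hour, minute):
    """Return 61 for a minute scheduled to carry a leap second, else 60."""
    return 61 if (year, month, day, hour, minute) in LEAP_MINUTES else 60

print(seconds_in_minute(1998, 12, 31, 23, 59))  # -> 61
print(seconds_in_minute(2000, 8, 22, 10, 5))    # -> 60
```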

I don't know if it's optimistic - or perhaps just realistic - but
certainly all sorts of consumer goods now include vast capabilities.

A clock is one device that places requirements on its environment.
Springs need to be wound; batteries need to be replaced; the time needs
to be reset on occasion or the rate adjusted. Whatever one's opinion of
Apple Computer - an iMac is certainly a consumer item containing a clock.
The MacOS date and time control panel includes NTP. Buy a Mac - buy a
clock - and buy a clock that is designed to maintain good time with
minimal effort.

Certainly not every clock needs to keep good time. I wouldn't think the
time and frequency community would center its policies on this particular
statement of fact, however.

Rob Seaman
National Optical Astronomy Observatory
Received on Tue Aug 22 2000 - 10:05:57 PDT
