Upgrade, don't degrade

From: Rob Seaman <seaman_at_noao.edu>
Date: Mon, 9 Apr 2001 13:12:05 -0700 (MST)

I found the transcript from the PTTI meeting both enlightening and
disquieting. The only voices heard alternate between suggestions for
degrading our current civil time and, at best, reasons for maintaining
the status quo. This is most definitely NOT just a
discussion of extremely technical issues. We are talking about changing
the definition of time for entire planetfuls of humans yet unborn.

Dramatic? Yes. There is drama hidden in these dry academic debates.

I'm resisting the impulse to critique the PTTI discussion point-by-point.
For instance, I object when our worthy moderator says:

    "When you look at why they were opposed to it, most of the people
    were opposed to it for reasons that were not related to money or
    anything practical."

Yes, we can and should try to quantify the economic impact of any
changes (or non-changes) to UTC. But the practical issues are not all
economic - nor are they all dependent on the precision timing community.

The reality is that precision timing projects are required to properly
use whatever facilities and standards are provided - now and in the future.
If some highly technical projects are failing to make appropriate use of
the current time infrastructure, there is no reason to expect that those
projects will make any better use of a changed infrastructure. Other
projects don't appear to be having the same trouble.

Our real customers are explicitly the non-technical, non-professional
users - that is - "people". It isn't enough to hide the technical details
behind ever fancier clocks - nor is it true that civilians have no need to
worry about those details. Some do and always will. More to the point,
the design and implementation of the world's civil time scale is simply
one of those things that we ought to try to get right.

Here is one voice for improving what we already have.

Rob Seaman

The 11/99 "GPS World" article by McCarthy and Klepczynski offers five
options for near-to-mid future UTC policies:

    - Continue Current Procedure
    - Discontinue Leap Seconds
    - Change the Tolerance for UT1-UTC
    - Redefine the Second
    - Periodic Insertion of Leap Seconds

Virtually all of the discussion to date has centered on the single choice
of discontinuing leap seconds.  Naively this appears to "solve" the largest
number of "emerging problems", while intrinsically discomfiting only odd
ducks such as astronomers and traditional sextant navigators.

There is a significant price to pay in any change to civil time, but the
argument goes that a similar price tag is attached to doing nothing - and
that we simply *can't* continue our current procedures for much longer.
(Where "much longer" is some hazy period of time from 5 to 500 years.)

In the past I've argued in various ways that we should not so easily
dismiss the many subtle requirements our society places on civil time -
on UTC, that is - to continue to track the rotation of the Earth.

Let's invert the process and look at the current leap second scheduling
algorithm itself.  Perhaps in one of M&K's other options lies the cure.

What is it that we are really trying to do?  A lot of technical jargon is
disguising a very simple need - the need to keep two clocks synchronized.
In general, to "synchronize our watches" there are four possibilities:

    - reset watch A (add or subtract a delta-t discontinuity)
    - reset watch B
    - adjust watch A's rate
    - adjust watch B's rate

Or some combination of the above - and perhaps introduce higher order
rate terms.  In addition, if the rates continue to differ, any delta-t
adjustments must be repeated on some regular or irregular schedule to
match specified tolerances.
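
The watch arithmetic above can be sketched in a few lines of purely
illustrative code - the rates, tolerance, and time span below are invented
for the example, not taken from any standard:

```python
def synchronize(rate_a_ms, rate_b_ms, tolerance_ms, days):
    """Step watch B by whole seconds whenever it drifts past the
    tolerance from watch A - the "reset watch B" option.
    Returns the number of discontinuities required."""
    a = b = 0
    resets = 0
    for _ in range(days):
        a += rate_a_ms
        b += rate_b_ms
        if abs(a - b) > tolerance_ms:
            # a whole-second "leap" discontinuity applied to watch B
            b += 1000 * round((a - b) / 1000)
            resets += 1
    return resets

# Watch B runs 2 ms/day slow; a 900 ms tolerance forces a one-second
# step roughly every 450-500 days, so 7 steps in a decade.
print(synchronize(86_400_000, 86_399_998, 900, 3650))
```

Working in integer milliseconds sidesteps floating-point drift in the
accumulated totals.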

In our case watch A is Atomic time (TAI) and watch B is the Earth (call
it watch "E") approximated by UT1.  Our current procedure to synchronize
TAI and UT1 - watches A and E, that is - is to reset watch E periodically
using leap seconds.  We measure UT1, but we distribute and use UTC.

We can immediately reject a couple of the other options.  The whole point
of TAI is to remain the "best" time our species is capable of keeping.
Resetting watch A is not an option.  Similarly, adjusting watch A's rate,
which is equivalent to M&K's "Redefine the Second", is obviously a proposal
that would be denounced by every physical scientist and engineer on the
planet.  So our choice becomes:

    - reset watch E (what we're doing now) or
    - adjust watch E's rate

Let's also dispense with the latter choice, but not before acknowledging
our species' impact on the Earth.  It may be absurd to suggest synchronizing
Universal Time with Atomic Time by speeding up the Earth's rotation - but
humanity's activities can indeed affect the Earth's rotation - for instance
through the high latitude impoundment of water in reservoirs or the polar
ice caps.  (Or adversely, by allowing global warming to melt those ice caps.)

Returning to M&K's options, we can categorize them using the watch method:

    - Continue Current Procedure              reset watch E
    - Discontinue Leap Seconds                forget the whole thing
    - Change the Tolerance for UT1-UTC        reset watch E
    - Redefine the Second                     adjust watch A's rate
    - Periodic Insertion of Leap Seconds      reset watch E

All of M&K's realistic options amount to exactly the same mechanism -
except for the one that has received all the notice.  Should we really
just be throwing up our hands in dismay that we are incapable of doing
this job correctly?

So, let's acknowledge that the folks who designed and implemented our
current leap second system did indeed create an excellent mechanism.
Let's assume that we will continue to use this excellent (if not perfect)
mechanism in the future.  Should we be considering only options to
degrade the system?  Or should we rather take a detailed look at ways
of improving what is already good and serviceable into something even
better?

In short - we (humanity, that is) should continue to synchronize civil time
to UT1, and we should do it right.

The precision timing community has been doing an excellent job and cannot
be faulted for failing to provide detailed information beyond the call of
duty.  For instance, the USNO maintains a file of daily predictions and
standard results for the Earth Orientation Service at:

Some interesting plots can be constructed from these data.  Here is a
graph of UT1-UTC for the last quarter century:

The first thing to note is a bias toward positive values.  A leap second
is issued as soon as possible (presumably to provide "slack" later).
The average DUT1 shows a bias of about +0.1s (~+0.14s over the last decade).

The other thing to note is how many days fall outside of the magic
-0.5s to +0.5s window.  Any values larger than 0.5s (absolute) represent
dates that could have been better served by a different scheduling of
leap seconds.  About 15% of the days are thus poorly served.
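
Statistics like these are easy to recompute from any DUT1 series.  A
hedged sketch - the series below is a synthetic sawtooth standing in for
the real IERS data, which is not reproduced in this note:

```python
def dut1_stats(dut1_values):
    """Mean bias and fraction of days with |UT1-UTC| beyond 0.5 s."""
    n = len(dut1_values)
    mean = sum(dut1_values) / n
    outliers = sum(1 for v in dut1_values if abs(v) > 0.5)
    return mean, outliers / n

# Synthetic stand-in: DUT1 drifts down 2 ms/day and jumps +1 s with
# each leap second, here one every 500 days.
series, v = [], 0.3
for day in range(9131):            # roughly 25 years of daily values
    series.append(v)
    v -= 0.002
    if day % 500 == 499:
        v += 1.0

mean, frac = dut1_stats(series)
print(f"mean DUT1 = {mean:+.3f} s; {100 * frac:.1f}% of days beyond 0.5 s")
```

The same two-line summary (mean bias, outlier fraction) applied to the
actual Bulletin B series yields the figures quoted above.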

The data also show that half of the leap seconds in the last decade have
been in June and half in December.  When there is talk about a 1 or 1.5
year cadence of leap seconds, what is really meant is a 6 month sampling
rate.  This is the Nyquist rate needed to fully sample the current
leap second "waveform".

(As many readers will know better than the author, the Nyquist theorem
states that the minimum rate needed to properly sample a waveform containing
frequencies up to a value of "f" is 2*f.  Thus the digital sampling rate
of a CD is 44.1 kHz, though our ears only respond to frequencies up to
about 20 kHz.)

Another aspect of pursuing such a Nyquist analysis (or at least, analogy)
is that it should be clear that reliably scheduling leap jumps (of however
many seconds tolerance) every *ten* years, for instance, will still require
the scheduling freedom to issue a leap jump every *five* years.

It is the *guarantee* of "read my lips - no new leap seconds" that the
squeaky wheels among the precision timing clients want - not the mere
reality of it having happened in retrospect.

The CCIR 460-4 standard that governs UTC requires only that:

    "A positive or negative leap-second should be the last second of
     a UTC month"

There is only a preference, not a guarantee, given to December and June.
I presume the authors of the standard were familiar with Nyquist, too.
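
In code, the quoted rule is just a check that a candidate slot ends some
UTC month - December and June have no special status.  A small sketch
using Python's standard `calendar` module (the dates are real leap second
dates, used here only as examples of slot validity):

```python
import calendar

def is_valid_leap_slot(year, month, day, hour, minute):
    """True if a 23:59:60 following this minute would be the last second
    of a UTC month - the only placement CCIR 460-4 requires."""
    last_day = calendar.monthrange(year, month)[1]   # length of the month
    return (day, hour, minute) == (last_day, 23, 59)

print(is_valid_leap_slot(1998, 12, 31, 23, 59))   # traditional December slot
print(is_valid_leap_slot(1997,  6, 30, 23, 59))   # traditional June slot
print(is_valid_leap_slot(1997,  3, 31, 23, 59))   # March: equally permitted
```

All three print True: under the standard, a March slot is exactly as
legal as a December one.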

In past discussions, some mention has been made of March and September as
possible leap second scheduling candidates.  This may suggest that there
are folks "on the inside" thinking along the same lines as this proposal.

It is striking how much room is already available in the standard to
implement the facilities needed to improve the current UTC mechanism.

Note that the standard is explicitly designed to transfer UT (UT1, that
is) accurate to 0.1s.  A tenth of a second is also considered an appropriate
buffer for IERS against "unpredictable changes in the rate of rotation
of the Earth".  On the other hand, virtually all civil use revolves
around the raw uncorrected UTC clock.  Should we be focusing all of the
discussion on the tenth-second effects but none on the whole seconds?

A large part of the document is also concerned with transmitting the
DUT1 signal on top of the old radio system.  The mechanism requires
modifying 16 of the second markers to count up to eight tenth-second
ticks positively and another eight negatively.  If the most obvious
modification were made to the system to loosen the current 0.9s UT1-UTC
tolerance, this would only allow about 3 seconds worth of growth for
DUT1 in any event.

Instead of asking how large we can make the discrepancy between UTC and
UT1 in order to permit longer and longer delays between leap seconds -
we should ask what the best leap second schedule is to minimize the
divergence between UTC and UT1.

The Bulletin B numbers (Bulletin A is really quite a good predictor
these days: ftp://gemini.tuc.noao.edu/pub/seaman/leap/BminusA.pdf)
allow a reconstruction of the overall trend over the last 25 years:

One thing to note is the steep slope resulting from initializing UT to the
old Ephemeris Time definition of so many seconds in 1900.  This slope all
but guarantees that a negative leap second will never occur - not only would
the Earth's current slowing trend have to be combatted, but the accumulated
bias over a coherent 1.5 year timescale would have to be reversed.  We don't
have leap seconds because of current slowing - we have leap seconds because
the Earth has already spun down.

The trend will make the slope even steeper in the future (that's the whole
problem, after all) and a negative leap second will become ever more unlikely.

The change I propose is an explicit monthly scheduling cadence.

Most months would include no leap second, of course.  The frequency of
leap seconds at the current epoch would remain the same - one per year or
per year-and-a-half.  However, the freedom to position a leap second
12 times a year will allow a much closer pragmatic fit of UTC to UT1,
dramatically reducing the outlier dates:

Only a bit more than 2% of the dates now lie outside of the plus-or-minus
half second window.  Other statistical measures are similarly improved, even
in this non-optimized application of a monthly sampling rate.  (Various of
the example leap seconds would have been better scheduled either a month
earlier or later.)
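
One naive way to see the effect is a greedy simulation - nothing like the
fitting the IERS would actually perform, and the steady 60 ms/month drift
below is invented purely for illustration:

```python
def schedule_leaps(monthly_drift_ms, start_dut1_ms=0):
    """Greedy monthly scheduling: issue a positive leap second at the end
    of any month whose DUT1 has fallen below -0.5 s.
    Returns (months with a leap, worst |DUT1| seen, in ms)."""
    dut1 = start_dut1_ms
    leaps, worst = [], 0
    for month, drift in enumerate(monthly_drift_ms):
        dut1 += drift
        worst = max(worst, abs(dut1))
        if dut1 < -500:          # leaving the half-second window
            dut1 += 1000         # a positive leap second raises UT1-UTC by 1 s
            leaps.append(month)
    return leaps, worst

# Ten years of a steady (invented) 60 ms/month slowdown:
leaps, worst = schedule_leaps([-60] * 120)
print(f"{len(leaps)} leap seconds, worst excursion {worst} ms")
```

Even this crude greedy fit keeps the excursions barely past half a
second; an optimized fit against actual Bulletin A predictions would do
considerably better, which is the point of the 2% figure above.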

The Bulletin A predictions can be used as input to a variety of scheduling
algorithms, and various "best fit" criteria can be evaluated.  I don't want
to tell the appropriate agencies how to do their jobs - rather, it's obvious
that they know what they're doing, and therefore that an even better product
should be requested from them in the future.

It is also revealing to consult past bulletins of the IERS:

These have managed six-month advance predictions of leap seconds for at
least the last five years.  The 460-4 standard only requires 8 weeks.

Note the single biggest advantage of this modest suggestion.  It is
already written into the standard.  I'm somewhat embarrassed to be making
even this level of fuss over it, since it is obvious from reading the text of
CCIR 460-4 that a monthly scheduling pace was foreseen from the beginning.
Let's just implement what was always inherent in the standard.

Some of the PTTI discussions focused on the difficulty of making the
case to one's funding agencies for the significant resources necessary
to retrofit large complex projects in the field to support new timing
standards - new timing standards that will produce no benefit to your
project.  Commercial projects may face even larger funding pressures.
Continued leap seconds or a much larger DUT1 range - both have a cost.

There is also the question of a timetable for implementing any of the
various relaxed tolerance options.  It appears that these proposals are
being fast-tracked with the intent of benefiting current systems such as
GLONASS which do not reliably handle leap seconds.  (Whether the same
systems will be able to handle values of DUT1 larger than 0.9s is an
open question.)  In any event, in order to benefit current systems, any
change to UTC must happen during the lifetime of those projects.

Note the artificial crisis that is being created.  Most changes to
widespread standards include specific attention to issues of backwards
compatibility.  Certain systems or usages may be "grandfathered in".
Every care is taken to avoid trampling on current users.  The normal
pattern of adoption of a new standard is to rely on convincing new
users, and old users with new projects, to design to the new rules.
The old rules may be deprecated, but are available for the lifetime
of pre-existing systems.

We, however, don't have that luxury.  If we make a change - we change
the system for everybody, both new and old - and we force the schedule
for that change.  Don't underestimate the expense of publicizing such
a change.  With Y2K we had worldwide media attention shining on our
efforts and still we are hearing of "glitches".  We will have none of
the positive aspects of Y2K, but all of the negative aspects.  How many
new projects will continue to design DUT1 < 0.9s into their code while
operating off of old copies of the standard?  (We could perhaps mitigate
that somewhat by removing the proprietary restrictions on this particular
standard.)

On the other hand, imagine the benefits of restricting our efforts to
only those changes that are supported by the current standard.  No need for
a worldwide publicity campaign to try to reach the full extent of our
users.  No need for any specific schedule at all - we could implement
our new leap second scheduling policies in stages over the next several
decades - or centuries.

And imagine the relative ease of selling your own funding agency on a
modest-sized project to verify that your own code supports the full
letter of a pre-existing standard.  A project that is necessary, not
because the precision timing community has decided to degrade the
quality of civil time - but specifically because we have decided to
support an upgrade to UTC.  And then realize that there is no need to
seek even this modest level of supplemental funding - because we can
simply decide to delay implementation so far into the future that all
current projects - in the field or in planning - will have lived out
their complete life cycles.

Meanwhile, let's consider what happens if we reach agreement along the
lines suggested in this proposal in the next five or ten years.  (And
does anybody really expect the other proposals to advance much more
quickly?)  New projects could begin to code to the new leap second
scheduling guidelines immediately - in fact, projects could begin to
prepare before agreement was reached since this is, after all, the
current standard.  Backwards compatibility with the December/June
scheduling (which will likely persist for quite some time)
is guaranteed.  Our attentions could be focused on developing tools
for testing and verification and on writing documentation and user's
guides.

Public relations would be a breeze.  Not "here are some unpersuasive
technical arguments about why we decided to allow UTC to diverge from UT1",
but rather "here is how we have decided to improve timekeeping for the
world community".  Which do you think will play better in Scientific
American and on Nightline?

Civil time and the UTC standard are such basic concepts that all policy
discussions embrace exceptionally long time scales.  These time scales can
make even pragmatic discussions sound like science fiction.  For example,
the "Report of the URSI Commission J Working Group on the Leap Second":

Includes this passage under Appendix III:

    "2.) I also received many comments about the effects on society
    when UT1 diverges.  Note that we are talking about a minute in the
    next century.  Society routinely handles a one-hour switch with
    every daylight savings time, and a half-hour offset if they live at
    the edge of a time zone.  By the time leap seconds add up to an
    hour, the world will be very different.  If we have settled the
    solar system, a whole new scheme will probably have evolved.  Even
    if we have not changed our system, society has enough slop in its
    timekeeping that people will slowly shift without even knowing it.
    More people will start showing up to work at 9:00 AM, and less at
    8:30 AM, etc."

Besides the confusion here between periodic and secular effects (not to
mention a strange tolerance for "slop" coming from the precision
timing community), we start speculating about what "scheme" our descendants
will use while exploring the solar system.  All right then, let's speculate.

The current standard has lasted for 25 years and could reasonably serve
us well for ten or a hundred times longer than that - so presumably any
replacement must be expected to last equally far into the future.  As such,
it is not just an idle whim to attempt to predict what scheme our species
will be using - it is our obligation in undertaking such a revision.

I'll be bold enough to provide an answer to the speculation right now.
What scheme will our species ultimately use to synchronize Atomic time
and Earth time?

We will redefine the second.  What else is available?

Eventually the leap second pace will indeed accelerate to the point that
a monthly (or even weekly or daily) rate fails to sample the underlying
waveform acceptably well.  At that point - what?  Will our multi-millennial
grandchildren find it reasonable to allow their clocks to register day for
night?

Perhaps the thought is that some entirely different clock will be used
(one imagines some futuristic variation on a "metric" clock).  What
difference does that make?  The same underlying issues will remain.
The rates between the Earth and the Atomic (or Antimatter :-) clocks
will diverge and some scheme will be needed to synchronize them.

So - they will need to find some reasonable revision of the fundamental
unit of time.  Or more likely, they will define a civil second that is
some fraction longer than the "scientific" second.  And then they will
continue to issue leap seconds on a palatable schedule according to the
Earth's whims.

(The future PTTI community would have the freedom to select an epoch
for such a change that would correspond to some nice round number or
even ratio of fundamental physical constants, and thus perhaps even
simplify the handling of time units.)

Redefine the unit to match the long term variation.  Schedule leap
seconds for the short term deviations.  That is - adjust watch A's rate
every few millennia AND reset watch E in between.
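
Back-of-envelope arithmetic shows how small such a redefinition would be.
The 2 ms figure below is an assumption standing in for roughly the current
excess length of day, not a value quoted anywhere in this note:

```python
# If the mean solar day exceeds 86400 SI seconds by about 2 ms (assumed),
# a civil second longer by that same fraction would absorb the secular
# drift, leaving leap seconds only for the irregular wobble.
excess_s_per_day = 0.002                  # assumed excess length of day
fraction = excess_s_per_day / 86400.0
print(f"civil second longer by {fraction:.2e}"
      f" (~{fraction * 1e9:.0f} parts per billion)")
```

A rate tweak of a few parts per billion, revisited every few millennia,
is the "adjust watch A's rate" half of the scheme.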

In any event, the suggested monthly scheduling of leap seconds should
be good for the next 500 or 1000 years - by which time the more sci-fi
suggestions - that we won't care in the future as we spread through the
cosmos - will be shown to be silly.  (Are most of the Earth's inhabitants
ever likely to leave the ground?  How many millennia will pass before
the majority of our species lives elsewhere?)

Actually, let's assume that we will indeed have a future interest in
synchronizing multiple diurnal rates (Earth AND Mars, say) with our master
clock (or vice versa).  Won't that be made much easier with a more rapid
scheduling cadence of single leap seconds (on both planets) than with
the infrequent and inflexible scheduling of much larger jumps?

It certainly won't be possible if we schedule no leap seconds at all.

Allowing UTC to drift from UT1 would be to abandon a central mission
of the precision timing community.  The unacceptable nature of this would
become clear in decades or merely in years - not only after millennia or
centuries.  We would be implementing a hobbled time scale that would be
the ridicule of future historians and scientists.

Degrading UTC to allow much larger discontinuities on a much less
frequent schedule will only encourage lazy engineering, programming and
management practices.  Both the heightened amplitude and lengthened
period are guaranteed to make future Y2K-like crises more likely.  We
would sacrifice everyone's (literally everyone's) long term peace of
mind for the short term expediency of a minority of special interests.

On the other hand, a monthly leap second sampling rate would improve UTC.
It is already permitted under the standard.

A monthly pace of leap second handling would quickly sort out those
precision timing projects that actually need to use UTC from those that
would be better served by an unsegmented timescale such as TAI.  This is
not an issue that can be avoided, and as such we should face it head on.

Quickly reaching a consensus on a solution to this non-crisis would
allow the community to turn its attention back to the real needs of both
UTC and TAI clients - how to communicate standard time signals reliably
and remotely.

CCIR 460-4 states:

    "GMT may be regarded as the general equivalent of UT."

Let's preserve the original intent of the standard.

Rob Seaman
National Optical Astronomy Observatory