shoehorning ( TI - UT ) into 16 bits

From: Steve Allen <sla_at_ucolick.org>
Date: Sun, 6 Jul 2003 17:02:04 -0700

On Sun 2003-07-06T00:14:27 -0700, Steve Allen hath writ:
> provide offset ( TI - UT ) as part of their format.
> This leaves open the question of at what accuracy level?
> I.e., what might happen with leaps of varying sizes?

Given that UTC will no longer be explicitly in use, it seems likely
that whatever new format is created for broadcast time signals giving
TI will have to break completely with the scheme that is currently
used to provide DUT1, the predicted value of ( UT1 - UTC ).
Nevertheless, certain constraints of the time signal format may be
deemed impossible to rearrange, and I posit what might be done within
those constraints.

The current format for broadcast time signals (ITU-R TF.460-6 and
its many previous revisions) provides 16 bits for communicating DUT1
by emphasizing the "ticks" of seconds 1 through 16 of each minute.
However, the information in these bits is encoded by allocating the
first 8 bits for counting positive values of DUT1, and the second 8
bits for counting negative values of DUT1, all in increments of 0.1
second. This is not optimal data compression, but it probably made
sense in 1970 when far fewer people (and machines) could be expected
to be able to count in binary.
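
For concreteness, here is a minimal sketch of a decoder for that
scheme, assuming the sixteen emphasized "ticks" arrive as a sequence
of booleans for seconds 1 through 16 (the function name and the
boolean representation are mine, not anything specified in TF.460):

    def decode_dut1(ticks):
        """Decode DUT1 from the 16 emphasized-second flags.

        ticks[0] is second 1, ..., ticks[15] is second 16.
        Seconds 1-8 count positive tenths of a second; seconds
        9-16 count negative tenths, per the scheme above.
        """
        positive = sum(ticks[0:8])    # emphasized ticks, seconds 1-8
        negative = sum(ticks[8:16])   # emphasized ticks, seconds 9-16
        if positive and negative:
            raise ValueError("malformed signal: both signs emphasized")
        return (positive - negative) / 10   # seconds, in 0.1 s steps

    # DUT1 = -0.3 s: seconds 9, 10, and 11 emphasized
    print(decode_dut1([False] * 8 + [True] * 3 + [False] * 5))  # -0.3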

Taking into consideration the current sophistication of radio
receiving and computing hardware, I posit an alternative means
of using those 16 bits to encode the hypothetical new value of
( TI - UT ).

Let the value of ( TI - UT ) be encoded using the currently available
16 bits' worth of emphasized second "ticks" as a purely binary
encoding. This gives 65536 distinct values for ( TI - UT ).
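
As a sketch of that reinterpretation, read the same sixteen flags as
one unsigned binary integer (taking most-significant-bit-first as the
ordering, which is my assumption, since none is specified here):

    def decode_raw(ticks):
        """Read the 16 emphasized-second flags as an unsigned
        integer, most significant bit first (assumed ordering)."""
        raw = 0
        for t in ticks:
            raw = (raw << 1) | int(t)
        return raw   # 65536 distinct values, 0 through 65535

How those 65536 counts map onto seconds is exactly the
resolution-versus-range question taken up below.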

The currently provided value of DUT1 is in 0.1 s increments. This
is a bit coarser than some navigators would like, but it is pretty
well matched to the accuracy that can be attained using traditional
astronomical sighting methods.

The currently provided value for UTC leaps by 1.0 s increments.
This is notably larger than can be easily hidden in timestamping
applications that want to use UT as the timestamp. Such applications
would probably prefer leaps of ( TI - UT ) that were smaller than
the 0.1 s accuracy of DUT1.

So, if we presume that only the current 16 bits are available,
now it becomes an engineering tradeoff of ( TI - UT ) resolution
vs. ( TI - UT ) range.
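
Stated compactly: with f bits of fraction the resolution is 2^-f s
and the range is 2^(16-f) s. A throwaway helper (mine) to tabulate
the two splits considered below:

    def split_tradeoff(fraction_bits, total_bits=16):
        """Resolution and range for a fixed-point split of 16 bits."""
        resolution = 2.0 ** -fraction_bits        # seconds per count
        span = 2 ** (total_bits - fraction_bits)  # range, whole seconds
        return resolution, span

    for f in (4, 6):
        res, span = split_tradeoff(f)
        print(f"{f} fraction bits: resolution {res} s, range {span} s")
    # 4 fraction bits: resolution 0.0625 s, range 4096 s
    # 6 fraction bits: resolution 0.015625 s, range 1024 s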

Of course it is important that ( TI - UT ) not go negative (much as
the excursion of ( UT1 - UTC ) became uncomfortably large at the end
of the first year with leap seconds in 1972). If at the inception of
TI the earth is again doing what it is doing right now, then a
negative value of ( TI - UT ) could happen if the initial conditions
are not set correctly.
(If I read the IERS right, since June 12 of this year the crust of the
earth has accelerated so much that the length of day has been shorter
than 86400 SI seconds, and the value of ( UT1 - TAI ) has *increased*
by 2 ms in the past month.)
Basically this means that the IERS should not bother inserting any
leap second into UTC if one seems imminent at about the time that
TI is supposed to go into effect.

%%%%%%%%

So, what is a reasonable way of dividing up the 16 bits?

How about 4 bits of fraction and 12 bits of integer?

This gives ( TI - UT ) to a resolution of 1/16 second. This is better
than the current values of DUT1, and perhaps even good enough to make
it feasible to do timestamping in UT (for those applications which
still want to do so). It also gives a range of 4096 seconds, which is
more than an hour. That means the scheme will last for at least 600
years, which is far longer than seems necessary for planning the
hardware characteristics of radio broadcast time signal usage.
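
As a sanity check on that figure (with numbers that are my
assumptions, not anything above): take the excess length of day to be
roughly 2 ms today and to lengthen tidally by roughly 1.7 ms per
century, and integrate:

    DAYS_PER_CENTURY = 36525

    def offset_after(centuries, excess_ms=2.0, rate_ms_per_cy=1.7):
        """Accumulated ( TI - UT ) in seconds, starting from zero,
        for a daily excess length of day that grows linearly with
        time.  The 2 ms and 1.7 ms/century figures are assumed,
        not measured values from this discussion."""
        linear = excess_ms / 1000 * DAYS_PER_CENTURY * centuries
        quadratic = (rate_ms_per_cy / 1000 * DAYS_PER_CENTURY
                     * centuries ** 2 / 2)
        return linear + quadratic

    t = 0.0
    while offset_after(t) < 4096:
        t += 0.01
    print(f"4096 s of range lasts roughly {100 * t:.0f} years")
    # roughly 1000 years under these assumptions

which is comfortably more than 600 years.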

If that resolution is too coarse ...

How about 6 bits of fraction and 10 bits of integer?

That gives ( TI - UT ) to a resolution of 1/64 second. It makes the
discontinuities of timestamps in UT small enough that any application
silly enough to want to do that should not worry about the leaps. It
allows for 1024 seconds of ( TI - UT ), which is good enough for
several hundred years. That should still be long enough for the SRG
to congratulate itself on a job well done.
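
A round-trip sketch for this split, mostly to show the size of the
quantization step (the helper names are mine):

    FRACTION_BITS = 6   # 6 bits of fraction, 10 bits of integer

    def encode(offset_s):
        """Quantize a ( TI - UT ) offset to a 16-bit code."""
        raw = round(offset_s * (1 << FRACTION_BITS))
        if not 0 <= raw <= 0xFFFF:
            raise ValueError("offset outside the 0 .. 1024 s range")
        return raw

    def decode(raw):
        return raw / (1 << FRACTION_BITS)

    offset = 32.125   # an arbitrary example offset, in seconds
    assert abs(decode(encode(offset)) - offset) <= 1 / 128  # half a step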

Alternatively, the decimal-minded could desire that the ( TI - UT )
offset be in centiseconds; with 16 bits that gives a range of
655.36 seconds, a bit under 11 minutes.

Of course it may be that a new broadcast format can allocate many
more than 16 bits to the number, in which case the tradeoffs of
resolution and range become much easier to handle.

--
Steve Allen          UCO/Lick Observatory       Santa Cruz, CA 95064
sla_at_ucolick.org      Voice: +1 831 459 3046     http://www.ucolick.org/~sla
PGP: 1024/E46978C5   F6 78 D1 10 62 94 8F 2E    49 89 0E FE 26 B4 14 93