The ESI Spectrograph Online Documentation

How to Make and Use the Regression Test Suite

Some of the tests are generic (these live in the Generated directory, whose Makefile constructs them from generic test templates in the cvs/kroot/util/ktest directory). Others are so specific to the instrument that they have to be hand-crafted, and they live in the Hardwired directory. Documentation for the individual tests themselves is found in the Test Brick.

The current convention is that the tests are copied from the Generated and Hardwired directories into the QA directory, where they are run and where their results accrue. The top level of QA is a mess of .tst and .log files. The results of each test run are found in QA/Results, in files of five distinct kinds:

  1. TestName.log.TimestampInt : the verbose test log
  2. TestName.err.TimestampInt : the test error log (hopefully 0 length)
  3. TestName.dat.TimestampInt : data output via Data commands in the test
  4. TestName.trp.TimestampInt : data saved from a Trap command in the test
  5. TestName.reg.TimestampInt : a regression extract file
When a test is run, any existing log/err/dat/trp files are moved to the subdirectory TestName/Prior; only the reg file is left at the top level. The only reason for this is to keep the number of files there from becoming overwhelming, since these tests may be run tens or hundreds of times.
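The naming and archiving convention above can be sketched in shell. The test name, the timestamp value, and the exact placement of the TestName/Prior directory relative to Results are invented here for illustration; only the extensions and the "archive everything but .reg" rule come from the text.

```shell
#!/bin/sh
# Sketch of the result-file convention described above.
# "MyTest" and the timestamp 930783307 are made-up examples,
# and the directory layout is an assumption.
mkdir -p Results MyTest/Prior

ts=930783307
for ext in log err dat trp reg; do
    : > "Results/MyTest.$ext.$ts"     # stand-ins for real result files
done

# Archive everything but the regression extract, so the top level
# stays manageable across tens or hundreds of runs.
for ext in log err dat trp; do
    mv "Results/MyTest.$ext.$ts" MyTest/Prior/
done

ls Results    # only MyTest.reg.930783307 remains
```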

Tools such as the ktqual suite (qv) and TkKtResults look into these test result directories and use the contents of these files.


The life cycle of these tests, in theory and in practice, is more or less as follows:

In practice, all the real testing has been done in the CVS source tree. SPG never got enough test time on ESI to work out the final packaging of the regression test suite, so the regression runs were done in the QA directory of cvs/kroot/kss/esi/ktest. Our intention was to have a "canonical" set of result files installed under KROOT/data/esi, against which all subsequent tests would be compared (regression testing, in other words). However, the instrument hardware was changing right up until the last few days before shipment, and we never did get a stable set of results. This will therefore have to be part of the commissioning effort.

All that is necessary to run the suite in its current configuration is to check out kroot/kss/esi/ktest, then:

  • make gentests
  • cd QA
  • make Hardwired
  • cd ..
  • ESI.regression.test QA >& ESI.regression.log &

    It should take about 3.5 to 4 hours. You can tail -f the log file to see when it is done. When it has completed, cd to QA and look at the LastRun* extract files.
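    The invocation above uses csh-style >& redirection. A Bourne-shell equivalent can be sketched as follows; run_suite here is a stand-in function, since the real ESI.regression.test exists only on an ESI host.

```shell
#!/bin/sh
# Generic sketch: run a long test command in the background,
# capturing stdout and stderr in a single log file.
# run_suite is a made-up stand-in for ESI.regression.test.
run_suite() {
    echo "suite output"
    echo "suite error" >&2
}

run_suite > ESI.regression.log 2>&1 &
wait    # in real use: tail -f ESI.regression.log for ~4 hours instead

# Both streams ended up in the one log.
grep -c . ESI.regression.log
```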

    A very useful combination is the regression test suite and ktlwatch. If ktlwatch is configured for rapid sampling (say 0.1 to 0.2 Hz) while the regression suite runs, you can get a good correlation between the test suite activity and the state of the instrument over time. This is how we found the crosstalk problem between the shuttle motors and the temperature sensors: ktlwatch data, as viewed using DataMynah, clearly showed noise on the temperature sensors only when the shuttle stage tests were running.

    De Clarke <de@ucolick.org>
    $Date: 1999/07/19 23:55:07 $
    The Observer documents are hand-written. The Technical Documents are produced from plain text files in the CVS source tree by some Tcl scripts written at UCO/Lick Observatory. The Reference Documents are mostly generated by software from data in a relational database. Individual authors are responsible for the content of the Observer and Technical Documentation. The Lick SPG as a whole is responsible for the content of the Reference Documents. Send mail to de@ucolick.org to report inconsistencies or errors in the documentation.