The ESI Spectrograph Online Documentation
ktest for ESI

KTL regression test scripts for ESI

The concept of a regression test is this: when a system (software, hardware, or both) is very complex, any change to it can have unexpected consequences. Therefore, after any change is made, it is essential to verify the behaviour of the complete system. Any change in behaviour (whether for better or worse) is relevant. You can also learn a lot from the repeatability of system behaviour when no deliberate change is taking place. It is therefore good to have an extensive test suite whose results can be "diffed" against a previous run of the suite. Differences are indicative.

This is a slightly different approach from Pass/Fail, but they are related. In a regression test suite, failure on any particular subtest might be acceptable -- as long as it was a known problem and the failure was repeatable.

The ESI regression test suite is actually a multipurpose suite. It consists of a number of test scripts (see the ktest overview), each of which exercises one particular stage or subassembly. Any individual test can be run at any time using ktrun (qv). The entire suite can be run by a special script called ESI.regression.test.

The test scripts in the suite use the special ktest procs Pass and Fail to test specific stage functions. Each test script might, by the time it completes, have executed several hundred individual Pass/Fail subtests. The results of these subtests appear in the log file for the test, in a rigidly specified format. You can think of these subtests as forming a "chain" of T, F or 1, 0 results for that test. This chain (stripped of all extraneous date/time information and message strings) is the "signature" of the test for regression purposes. Two runs of the same version of the test against the same system should have exactly the same signature.
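To make the idea concrete, here is a minimal sketch in Tcl (the language of the ktest scripts) of how such a signature chain might be pulled out of a test log. The proc name and the assumption that each subtest line carries a literal PASS or FAIL token are illustrative only; the real ktest log format carries more information than this.

    # Minimal sketch: reduce a test log to a 1/0 "signature" chain.
    # Assumes (hypothetically) that each subtest result line contains
    # a PASS or FAIL token; not the actual ktest log format.
    proc signature {logfile} {
        set chain ""
        set fh [open $logfile r]
        while {[gets $fh line] >= 0} {
            if {[string match "*FAIL*" $line]} {
                append chain "0"
            } elseif {[string match "*PASS*" $line]} {
                append chain "1"
            }
        }
        close $fh
        return $chain
    }

    # Two runs of the same test version against the same system should
    # yield identical chains, so the two strings can simply be compared
    # or diffed.

Comparing the signatures of two runs is then just a string comparison (or a diff of the stripped logs), which is what makes the regression check cheap to automate.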

After all tests have run, the log files are "scraped" by the ScrapeResults script to produce synoptic logs called LastRunList.PASS, LastRunList.FAIL, LastRunResults.PASS, and LastRunResults.FAIL. These logs can be quickly scanned using "less" or a text editor. The *List* files simply list the names of the scripts which had any failures, and the *Results* files actually extract the subtest failure messages from the logs. This is the simplest way to evaluate the test results.
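The Tcl fragment below is a rough sketch of the kind of scraping involved; it is not the actual ScrapeResults script, and the *.log glob pattern and FAIL token are assumptions made purely for illustration.

    # Rough sketch of log scraping (not the actual ScrapeResults script).
    # Assumes, for illustration, that logs match *.log and that failed
    # subtests are flagged with a FAIL token.
    set listfh   [open LastRunList.FAIL w]
    set resultfh [open LastRunResults.FAIL w]
    foreach logfile [lsort [glob -nocomplain *.log]] {
        set failures {}
        set fh [open $logfile r]
        while {[gets $fh line] >= 0} {
            if {[string match "*FAIL*" $line]} {
                lappend failures $line
            }
        }
        close $fh
        if {[llength $failures] > 0} {
            puts $listfh $logfile
            foreach f $failures {
                puts $resultfh "$logfile: $f"
            }
        }
    }
    close $listfh
    close $resultfh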

Another way to evaluate the test results is the GUI application TKKtResults. If pointed at a ktest result directory, it can selectively load in current or past results for any test or set of tests (this is done by substring matching in the test script namespace, so that 'position' would select all scripts named *position*). It displays the Pass/Fail signature of each loaded test, with failures highlighted. It is very easy to see at a glance how similar or dissimilar the results are.
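The substring selection works along these lines; this is a sketch only, and the proc name, arguments, and test names are hypothetical rather than TKKtResults internals.

    # Sketch of substring selection: 'position' selects every known test
    # whose name contains "position".  Test names are hypothetical.
    proc matchTests {pattern allTests} {
        set matches {}
        foreach name $allTests {
            if {[string match "*${pattern}*" $name]} {
                lappend matches $name
            }
        }
        return $matches
    }

    # Example:
    #   matchTests position {esi.collimator.position esi.slit.position esi.lamps}
    #   => esi.collimator.position esi.slit.position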
