  • The howtos and wherefores of Dan Kelson's spectral reduction software

    Overview

    The idea of Kelsonware is to construct a new coordinate system, one in which the sky lines run straight across the spatial axis of the slit. The input data is used to map out this coordinate system, and then the sky subtraction is performed. Once the sky is removed, the same coordinate system is used to extract the spectra.

    Kelson's software is written in Python and requires a number of libraries to run. The best way to obtain and use it is to download the software.

    The steps

    In More Detail

    Step one is removing the bias level. This is done with an instrument-specific piece of software; usually the overscan region is used for the actual subtraction. For DEIMOS data, deimosbias is used. For LRIS, one uses lris2bias for the red side and lris4bias for the blue.
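
    The actual bias tools know each detector's layout, but the underlying idea is simple. A minimal sketch in Python (not the real deimosbias/lris2bias code, and with a made-up overscan column range) might look like:

        import numpy as np

        def subtract_overscan(frame, overscan_cols=slice(2048, 2080)):
            """Estimate the bias level row by row from the overscan columns and
            subtract it from every pixel in that row.  The column range here is a
            placeholder; the real instrument-specific tools know the actual
            detector layout."""
            bias = np.median(frame[:, overscan_cols], axis=1)   # one bias value per row
            return frame - bias[:, np.newaxis]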

    The second step is the y-distortion calculation. This performs an FFT to cross-correlate a reference row, by default the central row of the image, with the rest of the image. A certain amount of binning is done in both the x and y directions, so the resulting cross-correlation map is much smaller than the input data. The result is a set of coefficients that is placed in the header of the image. The image used to compute this result is usually the flatfield; the fit is then copied to the rest of the images.
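
    A rough illustration of the cross-correlation step is below, assuming NumPy and a hypothetical helper name; the real routine bins the data first and fits the resulting per-row lags as a low-order function of y, and those are the coefficients written to the header:

        import numpy as np

        def row_shift(reference, row):
            """Cross-correlate one (binned) image row against the reference row via
            FFTs and return the lag, in pixels, at which the correlation peaks."""
            ref = reference - reference.mean()
            tst = row - row.mean()
            xcor = np.fft.irfft(np.fft.rfft(tst) * np.conj(np.fft.rfft(ref)), n=ref.size)
            lag = int(np.argmax(xcor))
            return lag - ref.size if lag > ref.size // 2 else lag   # wrap negative lags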

    Now the slits are defined in the data. There are a number of ways of doing this, including marking them by hand or using an automated process that finds the slits based on a mask definition file. Once again, this result is copied to the remaining data.
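
    As an illustration of the automated route only, here is a crude sketch that finds slit edges by thresholding the illumination profile of a flat; the function name and threshold are invented, and the real process can also work from the mask definition file or hand-marked positions:

        import numpy as np

        def find_slit_edges(flat, threshold=0.5):
            """Locate slit boundaries along the spatial (row) axis of a flatfield
            by thresholding the collapsed illumination profile.  Assumes clean,
            well-separated slits and dispersion along the x axis."""
            profile = np.median(flat, axis=1)            # collapse along the dispersion axis
            lit = profile > threshold * np.median(profile)
            transitions = np.flatnonzero(np.diff(lit.astype(int)))
            # pairs of (first row, last row) per slit, assuming an even number of edges
            return transitions.reshape(-1, 2)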

    Next the x-distortion is fit to the data, using the slits defined in the previous step. Again the software uses an FFT to compute a cross-correlation, which is then fit; the reference for the cross-correlation is a column in the data for each slit. The resulting fits are stored in the header on a slit-by-slit basis.
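
    Conceptually, once the per-slit shifts have been measured (with a cross-correlation like the one sketched above), the fit stored for each slit amounts to something like the following; the function name and polynomial order are placeholders:

        import numpy as np

        def fit_slit_distortion(positions, shifts, order=3):
            """Fit the cross-correlation shifts measured within one slit with a
            low-order polynomial; a set of coefficients like this is what gets
            stored in the header for each slit."""
            return np.poly1d(np.polyfit(positions, shifts, order))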

    At this point the mapping from x and y in pixel space to x and y on an idealized sky has been established. However, one would like the wavelength mapping as well. The wavelength mapping is, in principle, very similar: a list of lines with positions in ideal x (not pixel x) is matched to the input line list given in angstroms.
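
    A sketch of such a matching step, assuming a rough linear starting guess for the dispersion solution (lambda0_guess and dispersion_guess are hypothetical user-supplied values, and the matching window and fit order are arbitrary):

        import numpy as np

        def match_lines(ideal_x, linelist, lambda0_guess, dispersion_guess, window=5.0):
            """Pair line positions measured in rectified ('ideal') x with catalogue
            wavelengths in angstroms using a rough linear dispersion guess, then
            fit wavelength as a polynomial in ideal x."""
            ideal_x = np.asarray(ideal_x, dtype=float)
            linelist = np.asarray(linelist, dtype=float)
            predicted = lambda0_guess + dispersion_guess * ideal_x
            pairs = [(x, linelist[np.argmin(np.abs(linelist - lam))])
                     for x, lam in zip(ideal_x, predicted)
                     if np.min(np.abs(linelist - lam)) < window]
            xs, lams = np.array(pairs).T
            return np.poly1d(np.polyfit(xs, lams, 4))    # wavelength as a function of ideal x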

    Finally, one flat-fields the data. There are a number of different approaches that the software supports. Generally, the first step is a two-dimensional flat field in which the slit function is removed; second, a one-dimensional flat is used to remove large-scale gradients.
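
    A minimal sketch of the first, two-dimensional step is shown below; it is not the pipeline's own code, and the one-dimensional correction for large-scale gradients would be built in a similar way from a suitable calibration frame:

        import numpy as np

        def flatfield_2d(science, flat):
            """Build a two-dimensional flat by dividing out the flat's smooth
            spectral shape, which leaves the slit function and pixel-to-pixel
            response, then divide the science frame by it."""
            spectral_shape = np.median(flat, axis=0)                       # smooth shape along the dispersion axis
            flat2d = flat / np.clip(spectral_shape[None, :], 1e-6, None)   # slit function + pixel response
            return science / np.clip(flat2d, 1e-6, None)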

    Now is a good time to find the objects. An input guess at the positions of the objects is cross-correlated with the actual data, which gives a small shift between the data and the model. The pixels covered by the objects are then masked for the later sky subtraction.
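
    One plausible way to measure that shift, assuming Gaussian object profiles of an adopted FWHM (all names here are hypothetical):

        import numpy as np

        def object_shift(profile, guess_positions, fwhm=4.0):
            """Build a model spatial profile from the expected object positions,
            cross-correlate it with the observed profile, and return the small
            offset between the two; pixels near the shifted positions can then
            be masked before the sky fit."""
            y = np.arange(profile.size)
            sigma = fwhm / 2.355
            model = np.zeros(profile.size)
            for y0 in guess_positions:
                model += np.exp(-0.5 * ((y - y0) / sigma) ** 2)
            xcor = np.fft.irfft(np.fft.rfft(profile - profile.mean()) *
                                np.conj(np.fft.rfft(model - model.mean())),
                                n=profile.size)
            lag = int(np.argmax(xcor))
            return lag - profile.size if lag > profile.size // 2 else lag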

    Now the sky modeling is actually done. This is based on the mapping from before, so the ideal sky lines are mapped into the coordinate space of the actual pixels and fit there. One key thing to note is that the sky model is a true two-dimensional one, which means errors in rectification or flat fielding can, in principle, be incorporated.
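
    The text does not spell out the fitting function, but a spline fit over each pixel's own wavelength is one plausible realization; the sketch below assumes SciPy and invented names, and glosses over weighting and outlier rejection:

        import numpy as np
        from scipy.interpolate import splrep, splev

        def fit_sky(wavelengths, counts, good, n_knots=1000):
            """Fit a B-spline to sky counts as a function of each pixel's own
            (sub-pixel) wavelength, using every unmasked pixel in the slit at
            once; nothing is rebinned, so each pixel keeps its own wavelength."""
            w = np.asarray(wavelengths, dtype=float)[good]
            c = np.asarray(counts, dtype=float)[good]
            order = np.argsort(w)
            w, c = w[order], c[order]
            step = max(1, w.size // n_knots)
            knots = np.unique(w[step:-step:step])   # interior knots placed where there is data
            tck = splrep(w, c, t=knots, k=3)        # least-squares B-spline fit
            return lambda lam: splev(lam, tck)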

    The sky subtraction itself is a simple step of subtracting the model from the data; the model is evaluated in the coordinates of the data pixels.
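
    Continuing the hypothetical sketch above, the subtraction itself is just:

        # Reusing the hypothetical fit_sky sketch above: evaluate the model at
        # every pixel's own wavelength and subtract, staying on the original pixel grid.
        sky_model = fit_sky(pixel_wavelengths, data, good_pixel_mask)
        sky = sky_model(pixel_wavelengths.ravel()).reshape(data.shape)
        sky_subtracted = data - sky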

    To extract the spectra, the object positions are cross-correlated with the data to obtain a model. A second procedure then uses this model (after applying the shift) to place apertures over the objects and pull them out. As a first guess each object is fit with a Gaussian; that can be improved by using a Gauss-Hermite decomposition of the bright objects (or of all objects). The resulting profile is used to weight the data for optimal extraction. A bright star per frame, or even the box stars, could be used to come up with this fitting model.
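
    A sketch of the simple-Gaussian version of the weighting (a Horne-style weighted sum; the names and the FWHM parameter are assumptions, and a Gauss-Hermite profile could be substituted for the Gaussian):

        import numpy as np

        def extract_optimal(slit, center, fwhm, variance=None):
            """Extract a 1-D spectrum from a sky-subtracted, rectified slit using
            a Gaussian spatial profile as the extraction weight at every
            wavelength (a Horne-style weighted sum)."""
            ny = slit.shape[0]
            y = np.arange(ny)
            sigma = fwhm / 2.355
            profile = np.exp(-0.5 * ((y - center) / sigma) ** 2)
            profile /= profile.sum()
            var = np.ones_like(slit, dtype=float) if variance is None else variance
            weights = profile[:, None] / var
            # weighted sum, normalized by the effective profile weight per column
            return (weights * slit).sum(axis=0) / (weights * profile[:, None]).sum(axis=0)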

    Last modified: Fri May 11 13:39:26 PDT 2012