IPP Status Report : Single-Image Analysis

This report summarizes the current status of the IPP single-image analysis steps. Individual exposures pass through four major analysis stages before they are ready to be combined (stacking or difference imaging). These steps are:

  • Chip Analysis: The individual GPC1 OTA CCDs are processed independently: the analysis performs the detrend corrections, generates a single pixel array ('chip mosaic'), and performs the basic photometric analysis, which includes detection of the sources in an image, determination of a PSF model, PSF photometry of all sources, morphological identification of extended and unresolved (CR) sources, and determination of the curve of growth and aperture corrections. One of the major results of this analysis is a per-chip FITS table of the detected sources (CMF file) with associated metadata.

  • Camera Analysis: The collection of chip-level detection tables is assembled into a single file for each exposure. Based on the reported telescope position and camera rotation, astrometric reference stars are loaded, matched to the detected sources, and an astrometric solution is measured. Currently, the astrometric reference catalog is derived from the 2MASS PSC, with estimated grizy photometry based on the 2MASS colors, and to a limited extent the USNO-B photometry and, for brighter stars, the Tycho photometry. During the astrometric calibration, an approximate photometric calibration is also determined based on the synthetic grizy photometry. The major output data product from this analysis is a single file with the FITS tables of all detections from all chips, including image headers with the astrometric and photometric calibrations.

  • Fake / Force Analysis: After the astrometry is determined, forced photometry can be performed for pre-defined locations on the sky. In addition, during this stage, fake sources are injected and recovered in order to measure the detection efficiency of point sources as a function of magnitude. Note: although this analysis stage is implemented, it is currently untested, and needs significant shake-out.

  • Warp Analysis: Once images have been processed and have had their astrometric calibration determined, they may be geometrically warped into the skycells representing common pixel grids. Each of the survey modes (3pi, MD, etc.) may choose its own tessellation of the sky, and the science images are automatically warped into this representation. Currently, the IPP is using a somewhat suboptimal tessellation which has a ~15% overlap on average. Szalay and Buvari have offered to explore additional tessellation options. The IPP infrastructure can flexibly adopt a new tessellation whenever a final decision is made. In terms of the processing capability of the IPP, the choice of sky tessellation does not have a significant impact.

All of these stages of the analysis can be, and have been, run in 'semi-automatic' mode on substantial amounts of data. In this context, 'semi-automatic' means that there has been a manual selection of groups of images to be processed, rather than automatic processing of all science images as they arrive from the telescope. Most of the data that has been processed has been targeted at one of a variety of experiments: to test, e.g., the quality of the photometry or astrometry or the telescope pointing model, to measure the flat-field correction, or to make specific science demonstrations with selected subsets of the data.

Automated processing of the nightly exposures is possible, and has been running since 2008.10.27. We will continue to run all data labeled for science in this automatic fashion for the foreseeable future. We have also begun processing large test sets of data from the preceding two weeks to build up more uniform statistics.

Detrend Processing

The IPP is currently applying a dark (a 3D model including bias, trend with temperature, and trend with exposure time), a flat-field, and a mask. We have not yet generated a fringe frame for the y-band exposures. It is clear that the fringing in y-band is very weak, but it is present and will eventually need to be corrected. We do not yet have sufficient observations to attempt this. The IPP is capable of generating and applying fringe frames (tested with Megacam data), so we are confident that this can be addressed when the total y-band exposure time becomes more significant.
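As an illustration of the detrend step, the following minimal sketch (Python with numpy) applies a dark of the form described above, a flat-field, and a mask to a raw chip image. The array and parameter names, and the assumption that the temperature and exposure-time trends are linear, are purely illustrative; this is not the IPP implementation.

    import numpy as np

    def detrend(raw, temp, t_exp, bias, dark_per_degree, dark_per_second, flat, mask):
        # Evaluate the dark model for this exposure: bias plus (assumed linear)
        # trends with detector temperature and exposure time.
        dark = bias + dark_per_degree * temp + dark_per_second * t_exp
        corrected = (raw - dark) / flat   # subtract dark, divide by the flat-field
        corrected[mask != 0] = np.nan     # flag masked pixels
        return corrected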

Figure 1: Detrend.stats.png

We have generated flat-field images based on twilight sky images. We have gone through two iterations on this to date: we first generated a master flat set for griy in May using a modest subset of twilight images. In September, we used those masters to test the validity of all of the flat-field images taken since July 1. From this analysis, we selected a subset of clean, consistent input flats to generate a new set of flats. Since the baffling had been installed after the May flats were built, the new flat-fields were somewhat different: they had much smaller large-scale structures due to scattered light.

Using the master flats generated from this analysis, we generated residual images for each input flat. Figure 1 shows three representations of the statistics of these residuals. Each panel shows one of the four filters griy. For each exposure, we measured the stdev of the residual pixel values for each chip, as well as the median flux on each flattened image. The black histogram shows the stdev of the median values across all chips. The blue histogram shows the rms values of the stdevs for each chip. We also measured the stdev after rebinning the images by 150x150. The red histogram shows the rms of the stdevs of the binned images. All three histograms show the fractional stdev relative to the median flux on the image. These input images had count levels of typically 15-20k DN. The blue histograms are roughly consistent with the Poisson noise level, though perhaps biased a bit high by outlier pixels in the images. The black histograms show that there remain low-level chip-to-chip differences which will have an impact at the 5-8 mmag level. The red histograms show that the systematic floor within individual chips may possibly be at the 1 mmag level.
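For concreteness, the per-chip statistics described above could be computed along the lines of the following sketch (the array names and the exact rebinning convention are assumptions, not the actual IPP code):

    import numpy as np

    def flat_residual_stats(residual, flattened, bin_size=150):
        # Median flux on the flattened image for this chip.
        median_flux = np.median(flattened)
        # Fractional per-pixel scatter of the residual (contributes to the blue histogram).
        pix_stdev = np.std(residual) / median_flux
        # Rebin the residual image by bin_size x bin_size and measure the scatter
        # of the binned values (contributes to the red histogram).
        ny, nx = residual.shape
        ny_b, nx_b = ny // bin_size, nx // bin_size
        trimmed = residual[:ny_b * bin_size, :nx_b * bin_size]
        binned = trimmed.reshape(ny_b, bin_size, nx_b, bin_size).mean(axis=(1, 3))
        bin_stdev = np.std(binned) / median_flux
        return median_flux, pix_stdev, bin_stdev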

Astrometric Analysis

Figures 2-6: O4729g0161o.dis.0.png, O4729g0161o.dis.1.png, O4729g0161o.dis.2.png, O4729g0161o.dis.3.png, O4729g0161o.dis.9.png

For high-quality astrometric calibration of the GPC1 data, the IPP uses a two-level set of astrometric solutions: the first layer is a set of polynomial transformations (currently up to 3rd order) from the chip pixel coordinates (X,Y) to a common focal plane coordinate system (L,M; currently represented in virtual pixels, or 10um units). The second layer consists of a single polynomial transformation (again up to 3rd order) from the focal plane to a common tangent plane coordinate system (P,Q). Conversion from the tangent plane to the celestial coordinates (R,D) consists of a projection about the field center with a plate scale that may be different in the P and Q directions.
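Schematically, the chain of transformations described above can be written as follows (the coefficient names a_ij, b_ij, c_ij, d_ij are illustrative, not the IPP's internal notation):

    (X,Y) -> (L,M):  L = \sum_{i+j<=3} a_ij X^i Y^j,  M = \sum_{i+j<=3} b_ij X^i Y^j   (one fit per chip)
    (L,M) -> (P,Q):  P = \sum_{i+j<=3} c_ij L^i M^j,  Q = \sum_{i+j<=3} d_ij L^i M^j   (single fit for the focal plane)
    (P,Q) -> (R,D):  projection about the field center, with independent plate scales in P and Q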

This two-level transformation allows us to represent a single optical distortion, with all chips contributing to the solution, as well as perturbations for each chip representing chip translations, rotations, or higher-order effects such as may be induced by seeing. At the moment, we are only using integer powers of the focal plane coordinates (L,M). We justify this by noting that the basic radial optical distortion is of the form \rho = \alpha r + \beta r^3. The x component of \rho is then

\rho_x = \rho \cos\theta = (\alpha r + \beta r^3) \cos\theta

but cos \theta is x / r, thus

\rho_x = \alpha x + \beta (x^3 + x y^2)

and equivalently for the y component of \rho. Thus, we expect the dominant terms to be the odd-power combinations of x and y, and this is in fact what we see when we fit real data.
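As a quick sanity check of this expansion (purely illustrative; sympy is not part of the IPP), one can verify symbolically that only odd-power combinations survive:

    import sympy as sp

    # Expand rho_x = (alpha*r + beta*r**3) * (x/r) with r = sqrt(x**2 + y**2)
    # and confirm that only odd-power combinations of x and y remain.
    x, y, alpha, beta = sp.symbols('x y alpha beta')
    r = sp.sqrt(x**2 + y**2)
    rho_x = sp.expand((alpha * r + beta * r**3) * (x / r))
    print(rho_x)   # expected: alpha*x + beta*x**3 + beta*x*y**2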

In order to determine the astrometric solution in a stable fashion, we actually measure and fit the gradient of the distortion term: to first order, this does not depend on the locations of the chips, and can thus be solved independently of the chip-to-focal-plane transformations.

Figures 2-6 show the sequence of steps for an example data set. Each image shows the difference between the focal plane coordinates of the measured stars and the model-predicted focal plane coordinates of the reference star positions. The top two panels show the astrometric residuals as a function of magnitude.

We start with independent solutions for each chip. An artificial linear focal plane to tangent plane transformation is used to determine the effective focal plane coordinates for each chip. The residuals reflect the absence of the distortion model. We next adjust each chip-to-focal plane model to force each chip to have the same pixel scale; without compensating for this by introducing a focal plane distortion, this appears to offset the chips. The resulting pattern shows visually the distortion field. We next fit the gradient of the distortion field and apply the resulting distortion field, without adjusting the effective chip coordinates. The result is that the coordinate system for each chip becomes locally flat, but the chips are now mis-registered relative to the new focal plane system. Next, we fit for the chip translations only, with the result that the residuals show the relative rotations of the chips (in fact, the pattern is regular because the chips have already been fitted to have a small amount of effective rotation to follow the distortion field). We iterate between improving the distortion and improving the chip fits, and finally allow the chips to fit higher order terms. The final plots show the small residuals across the field.
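As a toy illustration of one of these steps, fitting a per-chip translation from matched star positions reduces to a simple least-squares offset. The sketch below uses fake data and assumed array shapes, and is not IPP code:

    import numpy as np

    def fit_chip_translation(measured_LM, reference_LM):
        # measured_LM, reference_LM: (N, 2) arrays of focal-plane positions of
        # matched stars for one chip.  The least-squares translation is simply
        # the mean residual.
        return np.mean(reference_LM - measured_LM, axis=0)

    # Fake data: a chip offset of (12.0, -7.5) virtual pixels plus small noise.
    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 4000, size=(100, 2))
    meas = ref - np.array([12.0, -7.5]) + rng.normal(0, 0.1, size=(100, 2))
    print(fit_chip_translation(meas, ref))   # approximately [12.0, -7.5]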

When fitting relative to 2MASS, with this full two-level astrometric model, we find residuals for the bright end which are typically 60 - 70 milliarcseconds, and are limited by the 2MASS accuracy.

Sample Data Sets

Figure 7: Flatcorr.region.png

We provide here tarballs with several example data samples. These are all derived from a sequence of observations taken 2008.09.20 which have been used to study the flat-field correction. These observations are of a dense stellar field, and are heavily dithered. Figure 7 shows the pattern of the GPC1 chips on the sky.

In the tarball smf.files.tgz are the output SMF files from the camera-stage analysis. There are two sets of smf files in this directory: the plain ones use the simple linear per-chip astrometry, while the ones with the extension "dis.smf" have been modelled with the full two-level analysis. The associated files which end with .dat are text tables of the stars which were matched to the 2MASS catalog. Each line of these files consists of two sets of white-space-separated numbers with a pipe ("|") between them. The first set on each line are measured values from the GPC1 images; the second set are the modelled values for the reference stars. The columns are:

ID RA DEC P Q L M X Y M_inst | RA DEC P Q L M X Y M_catalog
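A minimal reader for these .dat tables might look like the following sketch (the column names follow the listing above; everything else is an assumption):

    def read_matched_stars(path):
        # Each line: measured GPC1 values, a '|' separator, then the modelled
        # values for the matched reference star.
        meas_cols = ["ID", "RA", "DEC", "P", "Q", "L", "M", "X", "Y", "M_inst"]
        ref_cols = ["RA", "DEC", "P", "Q", "L", "M", "X", "Y", "M_catalog"]
        rows = []
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                left, right = line.split("|")
                measured = dict(zip(meas_cols, left.split()))
                reference = dict(zip(ref_cols, right.split()))
                rows.append((measured, reference))
        return rows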

The tarball catdir.flatcorr.tgz is the DVO database built from the LINEAR version of the smf files (so note that the astrometry will not be fantastic!).

The tarball subset.tgz gives just a few example smf files, one for each filter.

added 2008.11.05 O4729g0085o.33893.tgz (example processing logs from one exposure)