(by Eugene Magnier)

I am trying to understand errors in the flat-field correction analysis. After the first pass of this analysis (see GPC FlatField Correction 2008.10), I applied the measured correction to the r-band raw flat, then applied that flat to the images used to generate the correction. If everything were working, the correction measured from this second analysis should have had zero mean and minimal scatter. Instead, although the mean was roughly zero, the amplitude of the variations was large, nearly the same in scale as the original correction (SHOW RESIDUALS). I was suspicious that the code could have a sign error in how the correction was applied, so I would not have been surprised to find the 'residual' having twice the amplitude of the correction. Instead, the residual was nearly the opposite sign of the original correction (though not exactly).
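To keep track of which application errors produce which symptom, here is a toy per-cell model of the residual test. The additive-magnitude simplification is my own assumption, not the pipeline code: the bad flat adds a per-cell offset e (in mag) to the photometry, and an ideal analysis measures the correction c = -e.

```python
# Toy per-cell model of the residual test. Assumption (mine, not the
# pipeline's): the bad flat adds a per-cell offset e (mag) to photometry,
# and an ideal first-pass analysis measures the correction c = -e.

def residual_after(apply_fn, flat_error):
    correction = [-e for e in flat_error]                  # first-pass measurement
    new_error = [apply_fn(e, c) for e, c in zip(flat_error, correction)]
    return [-e for e in new_error]                         # second-pass measurement

e = [0.05, -0.03, 0.02]
r_correct = residual_after(lambda err, c: err + c, e)      # ~0 everywhere
r_signerr = residual_after(lambda err, c: err - c, e)      # twice the correction
r_double = residual_after(lambda err, c: err + 2 * c, e)   # opposite sign of correction
```

In this toy arithmetic a sign error in the application doubles the residual amplitude (same sign as the correction), while applying the correction twice flips the sign, which is the closest toy analogue of the observed symptom; I note this only as arithmetic in the simplified model, not as a diagnosis.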

At that point, I made one modification to the analysis of the flat-field correction within DVO: rather than allowing all cells of the correction grid to float to their best solution values, I chose a single cell as a reference, froze it at 0.0, and recursively solved for the other cells touched by stars which had already been measured on a cell with a determined correction. The idea was to let the solution grow outwards from the starting cell, rather than relying on the full (somewhat over-determined) system to converge as a whole. This modification had almost no impact on the solution.
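The grow-outwards idea can be sketched as a breadth-first traversal over the cell grid. The names and the additive dmag model below are my assumptions, not the DVO code:

```python
from collections import deque

# Sketch of the grow-outwards solver. Assumption (mine, not DVO's): each
# observation links two cells on which the same star was measured, with
# dmag = m_a - m_b = offset_a - offset_b once the true magnitude cancels.

def solve_outward(n_cells, obs, ref_cell=0):
    """Freeze ref_cell at 0.0 and solve cells reachable through shared stars.

    obs: list of (cell_a, cell_b, dmag) tuples.
    """
    adj = {i: [] for i in range(n_cells)}
    for a, b, dmag in obs:
        adj[a].append((b, -dmag))   # offset_b = offset_a - dmag
        adj[b].append((a, dmag))    # offset_a = offset_b + dmag
    offsets = {ref_cell: 0.0}
    queue = deque([ref_cell])
    while queue:
        cell = queue.popleft()
        for nbr, delta in adj[cell]:
            if nbr not in offsets:  # only cells touching already-solved cells
                offsets[nbr] = offsets[cell] + delta
                queue.append(nbr)
    return offsets

offsets = solve_outward(3, [(0, 1, 0.1), (1, 2, -0.05)])
```

In the real over-determined system each cell would average many such constraints; this sketch simply takes the first path it finds from the reference cell.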

I then took a step backwards and decided that I needed to prove that the analysis worked with simulated data. To do this, I needed to be sure ppSim would generate fake data with the correct magnitudes (there had been errors reported by Eric Bell), that psphot would recover these magnitudes correctly, and that ppSim could introduce an error in the flat-field data that would result in measured magnitudes with the expected modification from the incorrectly applied flat-field. I identified and fixed a couple of ppSim bugs, and eventually demonstrated all of the above (SHOW PLOTS).
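The expected magnitude modification from an incorrect flat can be checked against the basic scaling relation (the function name here is mine, for illustration):

```python
import math

# If an image is divided by a flat that is too high by a factor flat_ratio
# (applied_flat / true_flat), the measured flux is scaled by 1 / flat_ratio,
# so stars appear fainter by 2.5 * log10(flat_ratio) magnitudes.

def expected_mag_shift(flat_ratio):
    return 2.5 * math.log10(flat_ratio)

shift = expected_mag_shift(1.05)   # a 5% too-high flat: ~0.053 mag fainter
```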

I then worked on the simtest flat-field correction simulation, and also fixed errors and gaps in the portions of the pipeline which automatically measure and then apply the flat-field correction to the archival flats. The flat-field simtest suite will now generate, at least for the case of a single-chip detector, a set of flats with a known error, then measure the flat-field correction corresponding to that error (getting the correct result), apply that correction to the raw flat, then re-measure the correction, yielding a residual correction with ~zero mean and small scatter. With this confirmed to work, I now had confidence in the software and in the analysis algorithm. I also now had the ability to run through all of the steps automatically, making the analysis both quicker and more reliable. However, since I did not make any fundamental change to the algorithm or to the way the flat-fields were corrected, I was still not certain why the GPC1 version failed. My best guess was that, by having to do several of the steps manually, I had made an error and, for example, applied the flat-field correction to the wrong flats (maybe getting r and i wrong, who knows...).
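The single-chip cycle can be mimicked with a toy loop; the stand-in measurement function, grid size, and noise level below are my assumptions, not the simtest code:

```python
import random
import statistics

# Toy version of the automated simtest cycle for a single-chip detector:
# inject a known per-cell flat error, "measure" it with photometric noise,
# apply the measured correction, and re-measure the residual.

random.seed(42)

def measure(flat_error, noise=0.002):
    # an ideal measurement recovers -error, plus per-cell noise
    return [-e + random.gauss(0.0, noise) for e in flat_error]

true_error = [random.uniform(-0.05, 0.05) for _ in range(64)]      # 8x8 cell grid
correction = measure(true_error)                                    # measure correction
corrected_error = [e + c for e, c in zip(true_error, correction)]   # apply to raw flat
residual = measure(corrected_error)                                 # re-measure

mean = statistics.mean(residual)
scatter = statistics.stdev(residual)
# expect |mean| << the original error amplitude and scatter ~ noise level
```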

However, on Saturday (2008.10.25), I ran the full cycle of a) measuring the flat-field correction, b) applying the flat-field correction to the raw flats, and c) re-measuring the residual correction. The result was another failure: the residual correction image still had a significant signal in it. I find this difficult to understand: even if the correction is physically meaningless (ie, does not represent a real instrumental signature, but is just an artifact of this particular set of data), the process above should still result in a near-zero residual.

At this point, I am going to go through all of the explanations I can think of for this result and try to examine them one at a time. Here is the list of possible errors:

  • in the first pass, the flat-field applied to the images is not the one which then gets corrected (wrong flat in application)
  • repeated photometry of the same image has arbitrary offsets which mimic the measured correction
  • the analysis of the correction is converging on a meaningless solution (simtest suggests this is not the case)
  • the correction analysis for multichip (ie, GPC1) data is failing in a way that the single chip analysis does not
  • the translation of the correction to the flat-field correction uses the wrong spatial relationship (eg, the correction for chip XY01 is being applied to XY10)
  • the flat-field correction is mis-calculated (this is ruled out by the simtest suite)
  • the flat-field which gets corrected is not the one that was used for the analysis (wrong flat is corrected)
  • in the second pass, the flat-field applied to the images is not the one which has been corrected (wrong flat in application -- different reduction recipe)
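For the wrong-spatial-relationship hypothesis, the toy additive model from above predicts a distinctive signature (the per-chip correction values here are made up for illustration):

```python
# If the correction measured for one chip is applied to another (eg, the
# XY01 correction applied to XY10 and vice versa), the residual on each chip
# is the difference of the two corrections rather than zero -- comparable in
# amplitude to the original correction, with mixed signs across chips.
# Additive toy model with made-up numbers, not IPP code.

corrections = {"XY01": 0.04, "XY10": -0.03}            # hypothetical values
errors = {chip: -c for chip, c in corrections.items()}  # ideal: c = -e

applied = {"XY01": corrections["XY10"], "XY10": corrections["XY01"]}  # swapped
residual = {chip: -(errors[chip] + applied[chip]) for chip in errors}
# residual["XY01"] = c_01 - c_10; residual["XY10"] = c_10 - c_01
```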