Background Model and Over- and Under-Subtraction on GPC1 Images

Update: looking into this further, it appears that the technique described below rejects too many real galaxies; the necessary parameters are probably too sensitive to things like seeing and background. I have done further work to address this issue using the peak/footprint culling described here: Peak Culling in psphot.

Bill identified an image from the CNP dataset which was generating large numbers of detections and taking an extremely long time to process. Looking into this, I discovered a problem with the source detection, particularly in cases where the background model is unable to follow the structures in the background and the image quality is poor (large FWHM). Following a suggestion from Michael Wood-Vasey, I have added an additional source-filtering step which appears to improve the situation.

Here are some images to illustrate the problem, and the improvements:

The upper-left image shows o5579g0223o.XY63 after detrending, but without background subtraction. The image has been somewhat smoothed to enhance the visibility of the background variations. Note that the variations in the background in this image are caused by clouds (and can be seen in earlier and later images from the same night). The background ranges from ~1000 counts (black) to ~1300 counts (light yellow).

The upper-right image shows the background model for this image. To the eye, it follows the background variations reasonably well.

The lower-left image shows the background-subtracted image, again with some smoothing. The stretch is tighter than in the upper two images: the dark black regions are at -10 counts, the light red at about +10. Clearly, the background model is not able to follow the somewhat higher-frequency structure in the image. Note also that a finer-grained background model does not particularly help -- there are still errors, just on different scales.
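For reference, here is a minimal Python sketch of the kind of gridded background model being discussed. It is not the IPP code; the cell size, sigma clipping, and spline interpolation are assumptions for illustration. The point is that structure on scales smaller than the grid cell survives in the residual, and shrinking the cell mostly moves the errors to smaller scales rather than removing them.

```python
import numpy as np
from scipy.ndimage import zoom

def gridded_background(image, cell=256, clip_sigma=3.0, n_iter=3):
    """Sigma-clipped median background in coarse cells, interpolated back
    to full resolution.  Cell size and clipping are illustrative only."""
    ny, nx = image.shape
    gy, gx = ny // cell, nx // cell
    grid = np.empty((gy, gx))
    for j in range(gy):
        for i in range(gx):
            pix = image[j * cell:(j + 1) * cell,
                        i * cell:(i + 1) * cell].ravel()
            # iterative clip to reject stars and compact galaxies
            for _ in range(n_iter):
                med, sig = np.median(pix), np.std(pix)
                keep = np.abs(pix - med) < clip_sigma * sig
                if keep.sum() < 10:
                    break
                pix = pix[keep]
            grid[j, i] = np.median(pix)
    # cubic-spline interpolation of the coarse grid up to the image size
    return zoom(grid, (ny / gy, nx / gx), order=3)

# residual = image - gridded_background(image, cell=256)
# Background structure on scales smaller than the cell (e.g. the cloud
# structure above) survives in the residual.
```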

The lower-right image shows the detected peaks from this background-subtracted image, after smoothing with the PSF. In this particular image, the PSF is large (FWHM ~ 11 pixels), which means the significance image is quite heavily smoothed by the detection process. Note that the high regions have low significance per pixel: the S/N per pixel is only in the vicinity of 0.5. In this case, it is the smoothing which enhances the background-subtraction residuals.
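To make the detection step concrete, here is a rough Python sketch of a PSF-matched detection pass; it is not the psphot implementation, and the Gaussian PSF approximation, the uniform sky noise (sky_sigma), and the 5-sigma threshold are assumptions. It shows why a large FWHM lets broad residuals through: the smoothing suppresses the per-pixel noise, so features with S/N per pixel of only ~0.5 can still exceed the threshold in the smoothed significance image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_peaks(residual, fwhm_pix=11.0, sky_sigma=1.0, threshold=5.0):
    """Matched-filter peak detection on a background-subtracted image."""
    sigma = fwhm_pix / 2.3548                  # Gaussian sigma from the FWHM
    smoothed = gaussian_filter(residual, sigma)
    # white noise is suppressed by ~1/(2*sqrt(pi)*sigma) by the smoothing,
    # so broad, low S/N-per-pixel residuals can reach the threshold
    noise = sky_sigma / (2.0 * np.sqrt(np.pi) * sigma)
    signif = smoothed / noise
    is_peak = signif == maximum_filter(signif, size=3)
    return np.argwhere(is_peak & (signif > threshold))   # (y, x) peak list
```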

Clearly, we are detecting too many objects in these regions. But can we identify and filter them somehow? The common feature of these sources is that they originate in regions of low-surface-brightness structure, and they do not have very PSF-like shapes. Perhaps we can remove them on that basis.

In the initial steps of the photometry analysis, we measure several aperture-like values, including the Kron flux (an aperture flux measured within a circle of radius 2.5 times the first radial moment of the source). I added a 'core' aperture flux, using a fixed aperture for all sources scaled to the PSF size. The figure below plots the log of the Core Flux S/N vs the log of the Kron Flux / Core Flux ratio. The uninteresting detections above may lie in two areas of this plot: they may have insignificant flux on the small scale of the PSF, or they may have a small flux in the core compared with the Kron flux. The more PSF-like an object, the closer it will lie to the bottom of this figure.
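As an illustration of how these two quantities could be measured for a single source cutout, here is a Python sketch; the core aperture radius (one FWHM), the unweighted first radial moment, and the simple noise model are my assumptions for illustration, not necessarily what psphot does.

```python
import numpy as np

def core_and_kron(cutout, x0, y0, fwhm_pix, sky_sigma, gain=1.0):
    """Return (core flux S/N, Kron flux / core flux) for one source."""
    ny, nx = cutout.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(xx - x0, yy - y0)

    # first radial moment r1 = sum(f*r)/sum(f); Kron flux inside 2.5*r1
    pos = np.clip(cutout, 0.0, None)
    r1 = np.sum(pos * r) / max(np.sum(pos), 1e-9)
    kron_flux = cutout[r < 2.5 * r1].sum()

    # 'core' flux: fixed aperture scaled to the PSF (here one FWHM radius)
    core = r < fwhm_pix
    core_flux = cutout[core].sum()
    core_err = np.sqrt(max(core_flux, 0.0) / gain + core.sum() * sky_sigma ** 2)

    return core_flux / core_err, kron_flux / max(core_flux, 1e-9)
```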

I have selected 'good' and 'bad' objects using the above plot: good objects are required to have Core Flux S/N > 5.0 and Kron Flux / Core Flux < 10.0. This selects against features found on large, low-surface-brightness structures like those in the under-subtracted images above. The next few images illustrate the rejections resulting from these cuts.
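In code, the filter amounts to two thresholds applied to those measurements (values as quoted above; per the update at the top of this page, they are probably too sensitive to seeing and background to be used blindly):

```python
def is_good(core_snr, kron_over_core, snr_min=5.0, ratio_max=10.0):
    """Apply the Core Flux S/N and Kron/Core cuts described above."""
    return core_snr > snr_min and kron_over_core < ratio_max
```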

The image above shows a portion of the CNP image discussed above. The red boxes mark all sources detected in the image; the blue boxes mark those sources which passed the above filter. It is clear that this cut rejects a large fraction of the detections coming from the poorly background-subtracted region. (It may be difficult to tell, but all bright stars are identified in the good sample.)

The two images above show portions of another exposure, this time of MD04. Again, the red boxes mark all detections, while the blue boxes show the 'good' objects after filtering.

Summary: this additional filter can remove insignificant and otherwise uninteresting detections coming from low-level variations in the background flux.

One concern with this filter is that it (intentionally) rejects objects which are extended and have low surface brightness. Although most galaxies will continue to be accepted, this filter will necessarily increase the rejection fraction of low-surface-brightness galaxies. After all, they have an appearance very similar to the artifacts causing the excess detections.

There are two points to be made regarding the detection of real, low-surface-brightness features:

1) Such detections should be performed on the stacks, where the non-astronomical background variations should be reduced by the stacking. We should be able to relax the Core / Kron rejection threshold in such an analysis.

2) The detection of extended, low-surface-brightness features should be performed with an additional analysis step that is distinct from the PSF-optimized search. For example, the analysis should smooth the image with a kernel larger than the PSF (to enhance faint extended features). In addition, the 'footprint' analysis, in which detections are grouped together, should be performed on the smoothed image to avoid splitting up the extended low-surface-brightness structures; a sketch of this approach follows below. In this case, although we will still be sensitive to errors in the background, we will not generate dozens of detections but only a small number for each such feature.
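Here is a minimal sketch of what such an extended-source pass could look like; the kernel scale, the threshold, and the use of simple connected-component labeling for the footprints are assumptions for illustration, not the IPP footprint code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def extended_footprints(residual, fwhm_pix, sky_sigma,
                        scale=4.0, threshold=3.0):
    """Detection pass tuned for extended, low-surface-brightness features:
    smooth with a kernel several times the PSF width, then build footprints
    (connected regions) on the smoothed significance image, so each extended
    feature yields one footprint instead of dozens of peaks."""
    sigma = scale * fwhm_pix / 2.3548
    smoothed = gaussian_filter(residual, sigma)
    noise = sky_sigma / (2.0 * np.sqrt(np.pi) * sigma)
    footprints, n_found = label(smoothed / noise > threshold)
    return footprints, n_found
```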
