IPP Progress Report for the week 2009.10.26 - 2009.10.30


Eugene Magnier

I've been digging into the photometry on difference images. There were some minor bugs related to the psphot implementation used by ppSub. More importantly, however, there is an inconsistency for convolved images between the errors (and the resulting chi-square values) for the fits to source flux models and the per-pixel errors reported for the image. This is yet another effect of working on images with correlated errors. We believe we understand the per-pixel errors, as illustrated by the histogram of signal-to-noise values in the background of the images. We also believe we understand the impact of the correlated noise on photometry measured in apertures, based on the experiments Paul performed when we first started tracking the covariance matrix for the convolutions. Those experiments show that the covariance enters with a different scaling for aperture photometry than for PSF (or other model) fitting. In order to test this effect and the photometry analysis within ppSub, I have found it necessary to create some additional IPP programs.
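To make the correlated-noise effect concrete, here is a toy illustration in Python/numpy (not IPP code; the kernel and aperture size are arbitrary). Smoothing white noise reduces the per-pixel variance by sum(K^2), but a sum over an aperture also picks up the covariance between neighbouring pixels, so a prediction based only on the per-pixel errors under-estimates the aperture noise.

    # Toy illustration only: correlated noise from smoothing affects aperture
    # sums differently than the per-pixel errors would suggest.
    import numpy as np

    rng = np.random.default_rng(42)
    sigma = 1.0                               # noise of the unconvolved image
    kernel = np.array([1., 2., 3., 2., 1.])
    kernel /= kernel.sum()                    # normalized smoothing kernel

    ntrial = 10000
    pix_sample = np.empty(ntrial)
    ap_sample = np.empty(ntrial)
    for i in range(ntrial):
        noise = rng.normal(0.0, sigma, size=200)
        smoothed = np.convolve(noise, kernel, mode='same')
        pix_sample[i] = smoothed[100]           # one pixel of the smoothed image
        ap_sample[i] = smoothed[95:105].sum()   # a 10-pixel "aperture" sum

    pix_var = pix_sample.var()
    print("per-pixel variance, measured:", pix_var,
          "predicted:", sigma**2 * (kernel**2).sum())
    print("aperture variance, measured:      ", ap_sample.var())
    print("aperture variance, ignoring cov.: ", 10 * pix_var)
    w = np.zeros(200)
    w[95:105] = 1.0                             # aperture weights
    print("aperture variance, including cov.:",
          sigma**2 * (np.convolve(w, kernel, mode='full')**2).sum())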

psphotForced -- this does forced photometry on a collection of positions, which may be supplied either as a cmf file or as a text file (currently only X,Y chip coordinates are allowed, but it would be easy to make the loader apply the astrometry to R,D positions). This program is a small modification relative to psphot, and it makes some assumptions: the psf model must be supplied (we can possibly make this optional in the future), and a text list of X,Y corresponds to positions in just the single processed chip (it is also allowed to send a list of lists, in which case there must be one for each of the chips being processed as a group). The program performs the background modelling, loads the sources and the psf model, and measures photometry of the sources; it does not attempt to measure moments of the sources (is this desired?). I have tested it a bit against simtest images, and forced photometry using the positions from an earlier normal psphot analysis results in magnitudes that agree well, though not perfectly, with differences at the <0.01 mag level. There are a few issues that I need to double check related to how the aperture correction is being performed (if at all) and what size aperture is used for each object; these could certainly explain the observed errors. This program is also needed to finish the 'forced photometry' analysis stage, so this effort served multiple goals.
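For reference, the idea behind forced photometry can be sketched as a weighted linear least-squares fit for the flux alone, with the position and PSF held fixed. This is only a conceptual sketch in Python, not the psphot/psphotForced implementation; the function and variable names are made up, and the aperture-correction question above is ignored.

    # Conceptual sketch of forced photometry: fit only the flux, with the
    # source position and PSF model held fixed (weighted linear least squares).
    import numpy as np

    def forced_flux(data, psf, variance):
        """Best-fit flux and error for a fixed, normalized PSF model.

        data, psf, variance: 2-D arrays over the same background-subtracted cutout.
        """
        flux = np.sum(psf * data / variance) / np.sum(psf**2 / variance)
        flux_err = 1.0 / np.sqrt(np.sum(psf**2 / variance))
        return flux, flux_err

    # Example: a Gaussian "PSF" placed at a fixed (x, y) taken from a source list.
    yy, xx = np.mgrid[0:25, 0:25]
    x0, y0, fwhm = 12.3, 11.7, 4.0
    sig = fwhm / 2.355
    psf = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sig**2))
    psf /= psf.sum()

    rng = np.random.default_rng(1)
    data = 1000.0 * psf + rng.normal(0.0, 2.0, size=psf.shape)
    variance = np.full(psf.shape, 4.0)
    print(forced_flux(data, psf, variance))     # recovers a flux near 1000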

psphotMakePSF -- this program simply generates a psf model from an image, either based on a specified collection of sources or based on newly detected sources in the image.

ppSmooth -- a stand-alone smoothing program. I am finding that the chi-square of photometry of objects in convolved images is not quite what I expect, even when I take the covariance into account as I understand it (I'm probably getting it wrong). To make a more complete test suite, I decided it would be useful to have a stand-alone program that can smooth an image and do the 'right thing' with the covariance determined from an earlier smoothing or convolution. This program does not yet build with the rest of the suite via psbuild; I'll add it in after it is a bit better tested.
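For reference, the variance/covariance bookkeeping that a covariance-aware smoother has to get right can be sketched as follows. This is a minimal 1-D Python/numpy illustration, not ppSmooth; it builds a dense convolution matrix purely for clarity. The point is that a second smoothing has to propagate the full covariance from the first pass, not just its diagonal.

    # Minimal 1-D sketch (not ppSmooth) of variance/covariance propagation
    # under smoothing: out = A @ in, so C_out = A @ C_in @ A.T.
    import numpy as np

    def smoothing_matrix(n, kernel):
        """Dense convolution matrix A such that smoothed = A @ image (edges truncated)."""
        A = np.zeros((n, n))
        half = len(kernel) // 2
        for i in range(n):
            for m, k in enumerate(kernel):
                j = i + m - half
                if 0 <= j < n:
                    A[i, j] = k
        return A

    n = 50
    var = np.ones(n)                       # independent input pixels, unit variance
    kernel = np.array([1., 2., 1.]) / 4.0
    A = smoothing_matrix(n, kernel)

    C1 = A @ np.diag(var) @ A.T            # covariance after the first smoothing
    print("per-pixel variance:", C1[25, 25], "== sum(K^2):", (kernel**2).sum())
    print("neighbour covariance:", C1[25, 26])

    # A second smoothing must use the full covariance from the first pass;
    # keeping only the diagonal under-estimates the output variance.
    C2_full = A @ C1 @ A.T
    C2_diag = A @ np.diag(np.diag(C1)) @ A.T
    print("second-pass variance, full vs diagonal-only:",
          C2_full[25, 25], C2_diag[25, 25])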

Heather Flewelling

  • processing several labels:
    • Cal.2009106
    • SVS.Run4.r.20091027 (chip and cam only)
    • ThreePi.Run3.r.20091027 (chip and cam only)
    • STS.20091027
    • MD08.y.20091029
    • and a few other labels for diffs and other stages
  • destreaking md08 label
  • published sweetspot label
  • investigating failures on stdscience
    • nfs problems/glitches
    • log files and other files with 0 bytes and an odd write-protected state, blocking nebulous/pantasks from proceeding (leftovers from past nfs issues)
    • programming errors: mostly in pswarp, with 2 different types of pswarp errors seen
    • diffs on sweetspot: some don't have faults?
    • some burntool failures (the calibration images had not been burntooled for various reasons; this was fixed)
    • no detrends found for some of the calibration images: those were taken in w.00000 band, so I just dropped those images.
    • chips on MD08: some don't have faults?
  • checked that all of the important data that we care about (i.e., the raw images) have been replicated off of ipp008
  • helped Natalia with some simtest questions

Bill Giebink

  • Continued on "realtest"
  • Added card/disks to ippc018/c019; swapped memory on ipp005

Paul Price

  • Stacking (branches/pap)
  • Investigating rejection behaviour: some stars are rejected from all images; some features on a single image (e.g., a burn) can result in pixels from all images being rejected; and there is a scattering of pixels rejected on single images
  • Add variance based on PSF-matching chi2 when stacking: this feature was recently turned off when I thought I had the convolutions behaving, but it seems it's necessary for getting the variance right (chi2 ~ 4). With this feature on again, the rejection in the cores of stars (from which the parameters are measured!) is much suppressed.
  • Now a faint satellite streak gets through! Am I double-renormalising the variance?
  • Before this change, the streak could still be seen, but over a much shorter length (e.g., where there was only one other input image)
  • More faint artifacts (e.g., burns) also get through now
  • I was double-renormalising the variance; fixed. Now tweaking parameters (COMBINE.REJ = 2.0, COMBINE.SYS = 0.1)
  • Realised that "safe" mode is not doing what we want. After rejection, pixels fall into three categories: tested and good, tested and rejected, and not tested. The code currently does not recognise the third, which is a distinct state because we don't want those pixels grown, as we do for rejected pixels. This cannot be fixed merely by using the "safe" combination, because that would discard "tested and good" pixels that have only a single unrejected input but are good because they have survived the testing process. Need to add a new state into the combination process. For now I add these pixels straight into the "reject" list. (A sketch of this three-state bookkeeping follows after this list.)
  • Rejection of stars is much reduced now. Remainder seems to be due to images having slight shifts.
  • We should be able to take shifts out through PSF-matching. However, there are shifts in the fake images as well. Can attempt to solve for dx,dy in the course of ZP measurement.
  • Measured image shifts are not large (0.3 pixels max) --> must be incoherent shifts, which we can't do anything about in the stacking process.
  • Attempting to get a handle on the variance behaviour: renormalising from the background gives us a different factor (~1.3) than the PSF-matching chi2 does (~4). How should we normalise the variance?
  • Playing with softening the errors in the PSF-matching. Perhaps the PSF-matching can give us a softening parameter for the stacking?
  • The softening parameter wasn't helping at all (I expect most of the signal in the chi2 was coming from the background). It seems that the PSF-matching had a lot of outliers that needed to be removed with more iterations. After that, chi2 ~ 2.6, which is a bit closer to the value from the background (1.3); I think the difference comes from the covariance matrix (~0.5).
  • But this still means we're double dipping: tweaking the variance twice (even if it's approximately the same factor each time).
  • Determined with Gene that we need to soften the variance --> variance + frac * flux^2 (equivalent to imposing a minimum fractional error). Actually, this was already being done, though the parameter was set a little low (0.03). (A small sketch follows after this list.)
  • Sources near masked areas are prone to trouble. Let's throw out the CONV.POOR data from the stack. Aha! Now the masked stars seem to be due to lack of overlaps, which we can't control. Perhaps we can throw out CONV.POOR only in the case that something is inconsistent.
  • Updated combination algorithm to throw out all suspect pixels as the first step in rejection (only done if something is inconsistent).
  • Added a combination case to do better testing when there are 3 input pixels (this happens for lots of pixels when there are only 4 inputs, and the general case tends to go a little insane there, throwing out all inputs)
  • Reworking mammoth function combinePixels() into multiple smaller functions
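
As referenced in the "safe" mode item above, here is a sketch of the three-state pixel bookkeeping (tested and good, tested and rejected, not tested). This is the idea only, written in Python; it is not the actual combinePixels() code, and the class, function, and threshold names are made up.

    # Sketch of three-state bookkeeping for stack rejection.  Rejected pixels
    # are candidates for growing; untested pixels are excluded without growing;
    # good pixels are kept even if they end up as the only unrejected input.
    from enum import Enum
    from statistics import median

    class PixState(Enum):
        GOOD = 1        # tested against the other inputs and consistent
        REJECTED = 2    # tested and found inconsistent (may be grown)
        UNTESTED = 3    # too few overlapping inputs to test; excluded, not grown

    def classify(values, errors, n_min=3, rej_sigma=2.0):
        """Classify the input pixels contributing to one output (stack) pixel."""
        if len(values) < n_min:
            return [PixState.UNTESTED] * len(values)
        mid = median(values)
        return [PixState.REJECTED if abs(v - mid) > rej_sigma * e else PixState.GOOD
                for v, e in zip(values, errors)]

    print(classify([10.0, 11.0, 30.0, 10.5], [1.0, 1.0, 1.0, 1.0]))
    print(classify([10.0, 11.0], [1.0, 1.0]))   # too few inputs: untested, not rejected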
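Also referenced above, a sketch of the variance softening: adding a term proportional to flux^2 imposes a floor on the fractional error of bright pixels. The sketch reads frac as the minimum fractional error itself (i.e. it adds (frac * flux)^2); if frac literally multiplies flux^2, the implied floor would instead be sqrt(frac). The function name and numbers are illustrative only.

    # Illustrative only: variance softening as a minimum fractional error.
    import numpy as np

    def soften_variance(flux, variance, frac=0.03):
        """Inflate the variance so bright pixels have at least a `frac` fractional error."""
        return variance + (frac * flux)**2

    flux = np.array([10., 100., 1000., 10000.])
    var = np.full_like(flux, 25.0)          # e.g. sky-dominated variance
    softened = soften_variance(flux, var)
    print(np.sqrt(softened) / flux)         # fractional error flattens out near frac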

Bill Sweeney

  • fixed a problem in distribution related to warp versus warp diffs
  • wrote a new program to undo background subtraction in chip processed images. (This was requested by the M31 project).
  • Started some work optimizing the distribution processing. Found and reported an issue with the distribution of skycell data products on the cluster. (Since fixed by Gene.)
  • Debugged some problems with users' postage stamp requests. (I still need to provide better error reporting).
  • Built a simple web page and associated script to show the postage stamp server status.
  • worked with Gene and Heather to build and install a special tessellation for the STS survey.
  • fixed a bug in the distribution client related to database dump file format change made just before vacation.

Chris Waters

  • Fringe. Noted that fringes aren't being fully removed, with the residual correlated somewhat with time of night. The scales match well for data taken around the time the master fringe data was taken, suggesting the deviation is due to changes in signal from the sky.
  • Stack reviewing. Got up to speed on the current issues with the stacking. Noted that burntool masks weren't being respected by stacking (now fixed by Paul Price). Looked at the photometric consistency between stacks and warps, determining that the psphot errors aren't scaling as expected between these steps. Began work on a simtest run to check this with data that doesn't have to worry about telescope reality.
  • Crosstalk. Looked at a large sample of potential crosstalk ghosts, and found that roughly 80% match John Tonry's predictions.