
July 2, 2012

Study of all tests run to this point:

| label | data_group | workdir | reduction | refcat | psphotCode | psphotV | MASK | NOISEMAP | DARK | PATTERN_ROW | PATTERN_CON | FINAL_FP_WIKI |
| czw.footprint.finalsynthtest | czw.20120514 | neb:///czw/czw.footprint.finalsynthtest/ | DEFAULT_SYNTHCAT | SYNTH.GRIZY | trunk/psphot@33844 | 33844 | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 19624 |
| czw.footprint.finaltest | czw.20120514 | neb:///czw/czw.footprint.finaltest/ | | PS1.REF.20120503 | tags/ipp-20120404/psphot@33698:33762M | 33698:33762M | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 20047 |
| czw.footprint.finalsynthphottest | czw.20120517 | neb:///czw/czw.footprint.finalsynthphottest/ | DEFAULT_SYNTHCAT | SYNTH.GRIZY | trunk/psphot@33844:33890 | 33844:33890 | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 13983 |
| czw.footprint.test21 | czw.20120521 | neb:///czw/czw.footprint.test21/ | DEFAULT_SYNTHCAT | SYNTH.GRIZY | trunk/psphot@33844:33890 | 33844:33890 | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 13309 |
| czw.footprint.test120525v2 | czw.20120525v2 | neb:///czw/czw.footprint.test120525v2/ | TEST_REFCAT | refcat.20120524.v0 | tags/ipp-20120404/psphot@33698:33762M | 33698:33762M | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | |
| testMEH | | neb:///meh/meh.footprint.test120531v2/ | | PS1.REF.20120524 | tags/ipp-20120531/psphot@33970 | 33970 | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 40802 |
| test23 | | neb:///czw/czw.footprint.test23/ | ? | PS1.REF.20120524 | trunk/psphot@33984 | 33984 | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 40798 |
| czw.footprint.test24 | czw.20120604a | neb:///czw/czw.footprint.test24/ | DEFAULT_MASKTEST | PS1.REF.20120524 | trunk/psphot@33984 | 33984 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.851.0.XY01.fits | T | T | 36895 |
| czw.footprint.test25 | czw.20120605 | neb:///czw/czw.footprint.test25/ | | PS1.REF.20120524 | trunk/psphot@33984 | 33984 | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | |
| czw.footprint.test26 | czw.20120606 | neb:///czw/czw.footprint.test26/ | | PS1.REF.20120503 | tags/ipp-20120404/psphot@33698:33762M | 33698:33762M | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 19222 |
| czw.footprint.test27 | czw.20120608 | neb:///czw/czw.footprint.test27/ | | PS1.REF.20120524 | trunk/psphot@33994 | 33994 | GPC1.MASK.20101215.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 13160 |
| czw.footprint.test28 | czw.20120608a | neb:///czw/czw.footprint.test28/ | DEFAULT_MASKTEST | PS1.REF.20120524 | trunk/psphot@33994 | 33994 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.851.0.XY01.fits | T | T | 11417 |
| czw.footprint.test29 | czw.20120618 | neb:///czw/czw.footprint.test29/ | DEFAULT_MASKTEST | PS1.REF.20120524 | trunk/psphot@34030 | 34030 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.851.0.XY01.fits | T | T | 13975 |
| czw.footprint.test30 | czw.201206XX | neb:///czw/czw.footprint.test30 | ? | PS1.REF.20120524 | trunk/psphot@34030 | 34030 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.851.0.XY01.fits | T | T | 11961 |
| czw.footprint.test31 | czw.20120620 | neb:///czw/czw.footprint.test31/ | DEFAULT_MASKTEST | PS1.REF.20120524 | trunk/psphot@34030 | 34030 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.851.0.XY01.fits | T | T | 12346 |
| czw.footprint.test32 | czw.20120627 | neb:///czw/czw.footprint.test32/ | | PS1.REF.20120524 | trunk/psphot@34090 | 34090 | detref615.XY01.fits | GPC1.noisemap.norm.959.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 38636 |
| czw.footprint.test33 | czw.20120628 | neb:///czw/czw.footprint.test33/ | | PS1.REF.20120524 | trunk/psphot@34090 | 34090 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 39269 |
| czw.footprint.test34 | czw.20120628 | neb:///czw/czw.footprint.test34/ | | PS1.REF.20120524 | trunk/psphot@34090 | 34090 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 39587 |
| czw.footprint.test35 | czw.20120629 | neb:///czw/czw.footprint.test35/ | | PS1.REF.20120524 | tags/ipp-20120531/psphot@34108 | 34108 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 41629 |
| czw.footprint.test36 | czw.20120629 | neb:///czw/czw.footprint.test36/ | | PS1.REF.20120524 | tags/ipp-20120531/psphot@34108 | 34108 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 41634 |
| czw.footprint.test37 | czw.20120629 | neb:///czw/czw.footprint.test37/ | | PS1.REF.20120524 | tags/ipp-20120531/psphot@34108 | 34108 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 41545 |
| czw.footprint.test38 | czw.20120629 | neb:///czw/czw.footprint.test38/ | | PS1.REF.20120524 | tags/ipp-20120531/psphot@34108 | 34108 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 41937 |
| czw.footprint.test39 | czw.20120629 | neb:///czw/czw.footprint.test39/ | DEFAULT_DARKTEST | PS1.REF.20120524 | tags/ipp-20120531/psphot@34108 | 34108 | detref615.XY01.fits | | GPC1.DARKTEST.norm.851.0.XY01.fits | T | T | 78641 |
| czw.footprint.test40 | czw.20120702 | neb:///czw/czw.footprint.test40/ | | PS1.REF.20120503 | tags/ipp-20120404/psphot@34112 | 34112 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 19523 |
| czw.footprint.test41 | czw.20120702 | neb:///czw/czw.footprint.test41/ | | PS1.REF.20120503 | tags/ipp-20120404/psphot@34112 | 34112 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 19732 |
| czw.footprint.test42 | czw.20120703 | neb:///czw/czw.footprint.test42/ | DEFAULT/CONST=T | PS1.REF.20120503 | tags/ipp-20120404/psphot@34112 | 34112 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 49675 |
| czw.footprint.test43 | czw.20120703 | neb:///czw/czw.footprint.test43/ | DEFAULT/CONST=F | PS1.REF.20120524 | tags/ipp-20120531/psphot@34108 | 34108 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 41621 |
| czw.footprint.test44 | czw.20120703 | neb:///czw/czw.footprint.test44 | DEFAULT/CONST=T | PS1.REF.20120524 | tags/ipp-20120531/psphot@34108 | 34108 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 41808 |
| czw.footprint.test45 | czw.20120703 | neb:///czw/czw.footprint.test45 | DEFAULT/CONST=F/definemissing | PS1.REF.20120524 | tags/ipp-20120626/psphot@34116 | 34116 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 39460 |
| czw.footprint.test46 | czw.20120703 | neb:///czw/czw.footprint.test46/ | DEFAULT/CONST=T/definemissing | PS1.REF.20120524 | tags/ipp-20120626/psphot@34116 | 34116 | detref615.XY01.fits | GPC1.noisemap.norm.962.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 39389 |
| czw.footprint.test47 | czw.20120705 | neb:///czw/czw.footprint.test47/ | DEFAULT/CONST=F/IMAGE_VAR | PS1.REF.20120524 | tags/ipp-20120626/psphot@34116 | 34116 | detref615.XY01.fits | GPC1.NOISEMAP.norm.936.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 12379 |
| czw.footprint.test48 | czw.20120706 | neb:///czw/czw.footprint.test48/ | DEFAULT/CONST=F/CONST | PS1.REF.20120524 | tags/ipp-20120626/psphot@34116 | 34116 | detref615.XY01.fits | GPC1.noisemap.norm.965.0.XY01.fits | GPC1.DARKTEST.norm.856.0.XY01.fits | T | T | 39289 |

Still June 28, 2012

Identified a typo/mistake in the boost generation. Fixing it does not seem to resolve the problem:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| Test 30 | 138289 | 398700 | 102141 | 389926 | 42250 | 381611 | 11961 | 375320 |
| Test 34 | 141740 | 407488 | 104949 | 396816 | 43018 | 382471 | 39587 | 382362 |

June 28, 2012

Re-calculating the boosts from the original noisemap (not using the previous boosted map as an intermediary) yields effectively the same results:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| Test 30 | 138289 | 398700 | 102141 | 389926 | 42250 | 381611 | 11961 | 375320 |
| Test 33 | 141428 | 406828 | 104617 | 396179 | 42825 | 381889 | 39269 | 381771 |

June 27, 2012

This test includes the newest IPP tag (ipp-20120626), which has a different weighting for photometry. Because of this change, new NOISEMAP files were required to ensure that we have boosted the noise sufficiently to prevent false positives. The results of this test are shown below:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| Test 30 | 138289 | 398700 | 102141 | 389926 | 42250 | 381611 | 11961 | 375320 |
| Test 32 | 139551 | 405940 | 102902 | 395309 | 42060 | 380979 | 38636 | 380870 |

June 19, 2012

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| Test 29 | 143827 | 407166 | 106819 | 396687 | 45765 | 383967 | 13975 | 377744 |
| Test 30 | 138289 | 398700 | 102141 | 389926 | 42250 | 381611 | 11961 | 375320 |

These tests use the corrected MASKTESTs, which attempt to minimize the static camera structures due to crosstalk glows and other chip defects. Test 29 does not have the new masks properly applied, as the SUSPECT bit was not passed correctly in ppImage. Test 30 resolves this and ensures the bit is passed. However, this causes the science pixels to be removed completely in these SUSPECT regions, which is not fully the desired result.

Applying the MASKTESTs appears to yield a marked improvement over the previous masks, removing a number of camera-feature sources of false positives. The mask fraction is also only slightly increased over the old masks (a 0.1957 static mask fraction for test 30, compared to 0.1948 for test 21, the reference, and 0.2380 for the previous, poorly constructed MASKTESTs).

Using this Test 30 dataset, we can compare these false positive reduction results against an alternate set of cuts. The proposed cuts for retaining a detection as likely real are: IS_BAD = 0, PSF_QF > 0.85 (alternate: PSF_QF > 0.5), and a 5-sigma significance. The following table compares these cuts against those above. The cuts are applied cumulatively, such that the rightmost column contains all cuts to its left. The row labeled "Note" flags the cut sets of particular interest, including the PSPS proposals for ignoring detections. The final row shows the ratio of singly detected objects to 12x-detected objects, listed as a percent.

| Ndetections | Uncut | > 5-sigma significance | IS_BAD == 0 | PSF_QF > 0.5 | PSF_QF > 0.85 | IS_POOR == 0 | PSF_QF_PERFECT > 0.85 |
| 1 | 138289 | 51232 | 22031 | 21415 | 15861 | 12625 | 11961 |
| 2 | 45168 | 11785 | 11390 | 11126 | 7434 | 5948 | 5628 |
| 3 | 26997 | 8825 | 8591 | 8291 | 5643 | 5030 | 4689 |
| 4 | 17944 | 7277 | 7094 | 6820 | 5102 | 4738 | 4377 |
| 5 | 14950 | 7450 | 7286 | 6954 | 5336 | 5165 | 4766 |
| 6 | 14310 | 8121 | 7899 | 7586 | 5881 | 5686 | 5189 |
| 7 | 13482 | 7826 | 7644 | 7388 | 6150 | 5947 | 5683 |
| 8 | 14688 | 9164 | 8981 | 8684 | 7584 | 7380 | 7096 |
| 9 | 16281 | 10987 | 10800 | 10408 | 9263 | 9037 | 8788 |
| 10 | 18600 | 13214 | 13008 | 12684 | 11808 | 11638 | 11416 |
| 11 | 29007 | 23409 | 23127 | 22669 | 21302 | 20972 | 20517 |
| 12 | 398700 | 392087 | 390745 | 389540 | 385333 | 378350 | 375320 |
| Note | ALL | | | PSPS Prop A | PSPS Prop B | | False positive Tests |
| %1/12 | 34.69 | 13.07 | 5.63 | 5.50 | 4.12 | 3.34 | 3.19 |
| loss % | N/A | 1.66 | 0.36 | 0.31 | 1.08 | 1.81 | 0.80 |
| power | N/A | 13.0 | 20.7 | 0.42 | 1.3 | 0.43 | 0.19 |

Update by EAM: I added the row "loss %", the percent of 12-detections lost from one cut to the next, and the row "power", the ratio of the drop in the %1/12 value to the percent of 12-detections cut.
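As a sketch of how these derived rows follow from the table (values copied from the N=1 and N=12 rows above; this reproduces the published rows to within rounding, though the loss % entries may differ in the last digit depending on the denominator used):

```python
# N=1 and N=12 counts for each cumulative cut column, from the table above.
n1  = [138289, 51232, 22031, 21415, 15861, 12625, 11961]
n12 = [398700, 392087, 390745, 389540, 385333, 378350, 375320]

ratio = [100.0 * a / b for a, b in zip(n1, n12)]   # the "%1/12" row
# "loss %": percent of 12-detections lost going from one cut to the next.
loss = [100.0 * (n12[i - 1] - n12[i]) / n12[i - 1] for i in range(1, len(n12))]
# "power": drop in %1/12 divided by the corresponding loss in 12-detections.
power = [(ratio[i] - ratio[i + 1]) / loss[i] for i in range(len(loss))]
```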

May 28, 2012

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| Test 12/05/25 | 143724 | 385252 | 108334 | 375898 | 46901 | 363261 | 16444 | 357395 |

This test included everything in Test #21, plus the new static masks that were created to minimize the effect of static crosstalk issues. After creating these masks, I realized that those areas would more properly be masked with a SUSPECT bit instead of the DETECTOR bit. These regions are not really as bad as the defects removed with the DETECTOR mask, but we should be aware that detections in these areas are not as reliable as those elsewhere. This mask change increased the static masking by ~4%, a larger increase than I expected. Because of this loss of area, we lose a large number of probable real sources (comparing the number of all N=12 objects shows a ~5% loss rate, consistent with this area change).

Looking at the ghost comparison map, it appears that a large number of ghosts are present in this reduction. This reduction used the new reference catalog, suggesting that there might still be issues with the bright stars (or a problem in the ghost threshold). This explains why the number of 5-sigma single detections is increased relative to Test #21.

May 21, 2012

The check of remaining residual detection patterns suggested that the magnitude limit was not set well enough to catch all ghosts. I reran the test with a different limit:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| Test 21 | 145775 | 406873 | 108120 | 397013 | 45890 | 383864 | 13309 | 377687 |

This means a false positive rate of 13309 / 377687 = 0.035 (kcc)

Matching the test21 results against SDSS and applying the same cuts as above yields the following results:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| Test 21 SDSS match | 11252 | 351886 | 10654 | 344675 | 8743 | 333924 | 1477 | 328491 |
| Test 21 SDSS unmatch | 114056 | 7092 | 81312 | 6024 | 30599 | 5718 | 9296 | 5575 |

The following plots show only the objects that are unmatched in SDSS, as these are the remaining false detections. The unmatched region on the left (high DEC) is not covered by the SDSS data.

May 18, 2012

Star associated detection cleanup

After I discussed the star-associated false detections with Bill, he pointed me to psphot improvements he's been making that should minimize them. I updated my psphot build to include these improvements, and reran the footprint tests:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| finalsynthtest | 154929 | 408109 | 119739 | 401649 | 54689 | 388950 | 19624 | 382736 |
| Bill's psphot | 145235 | 406848 | 110048 | 399959 | 47408 | 386648 | 13983 | 380477 |

This improvement to psphot does seem to remove these star-associated detections, with ~6000 fewer 5-sigma detections and a clear residual pattern around stars in the FPA comparison plot where detections were removed.

Annotated image for CZW explaining some residual patterns

May 17, 2012

Following up on Nigel's work on false positive/star correlations (http://ps1sc.ifa.hawaii.edu/PS1wiki/index.php/DRAVG_telecon#False_detections_on_czw.sas2.20120509), I used the most recent finalsynthtest footprint reduction to match false positives (the 19624 detections that have good flags, are QF perfect, and are above the 5-sigma limit, but are detected only once) against the stars in the synthetic catalog (to avoid bright stars that have erroneous PS1 REFCAT magnitudes).

Given the clear distribution of detections near bright stars, I placed a cut to isolate the detections that are correlated with a bright star: log10(R) < (g*_synthcat - 10) / -7.0, applied for g*_synthcat < 15. This cut catches 6864 of the false detections, suggesting that ~1/3 of the remaining false positives are associated with spurious detections around bright stars.
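In code form, the cut looks something like this (a sketch; R is the star-detection separation in the same units used in the plot, and the array names are mine):

```python
import numpy as np

def star_associated(r, g_synth):
    """Flag detections correlated with a bright star:
    log10(R) < (g*_synthcat - 10) / -7.0, applied only where g*_synthcat < 15."""
    return (g_synth < 15) & (np.log10(r) < (g_synth - 10.0) / -7.0)
```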

Removing these "star associated" detections from the list and plotting 2D histograms of detections shows that the remaining sources do tend to have correlated positions. Taking cuts of 5 and 10 detections/arcmin^2 as the clustering threshold places 5426 and 3921 detections in clumps, respectively. Plotting these clustered points shows that unmasked camera structures (including unmasked burns) seem to be the main causes:

May 15, 2012

Counts are listed in the "finalsynthtest" row of the May 14, 2012 table. There is a marginal change in the total number of false positives.

However, this comparison image of the synthtest and boost test shows that we are still missing some ghosts from bright stars in the new reference catalog.

May 14, 2012

Final footprint test comparison

* These are the results of IPP trunk tag 20120510 = SAStest:v4 = SAS05 (PSPS), with a false positive rate of 0.051 (kcc).

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| Old IPP | 299374 | 420635 | - | - | - | - | - | - |
| CZWreduct | 298849 | 420891 | 264070 | 414845 | 165624 | 401662 | | |
| continuity | 271480 | 417811 | 237525 | 411810 | 142463 | 398707 | | |
| new dark | 223680 | 422322 | 191487 | 416508 | 98467 | 403106 | | |
| noisemap | 180204 | 416785 | 149269 | 411294 | 76966 | 398206 | 25492 | 391296 |
| boosttest | 150485 | 410869 | 120572 | 406283 | 57458 | 392677 | 22359 | 386429 |
| finaltest | 152376 | 408932 | 118826 | 402587 | 54036 | 389858 | 20047 | 383634 |
| finalsynthtest | 154929 | 408109 | 119739 | 401649 | 54689 | 388950 | 19624 | 382736 |

This image shows the singly detected objects in both the boost test and the final test. The objects that print through as red are removed in the final test, largely reflecting the better ghost modeling in this reduction. The crosstalk spike is also seen to be masked in this final test.

May 8, 2012

Final ghost model

Tracked down a math bug in my ghost model solving that switched the ghost and reference positions while solving the model polynomial. This resulted in fits that appeared close but were offset by a significant amount (roughly twice the shifts the ghosts have from their direct-reflection positions). Correcting this bug results in greatly improved mask centers. Solving the ghost model and comparing against the manually measured offsets (541 ghost positions) shows that the new results have far smaller scatter and no mean offset.

| Model | X | Y |
| Original | 63 +/- 79.5 | -18 +/- 148.3 |
| Updated | 0 +/- 48.1 | 0 +/- 62.6 |

May 7, 2012

Added ghost data

Taking the largest rotation-angle separation available in the currently studied ghost data (processed with added verbosity to extract reference/model positions, and checked manually for true ghost centers), there does not seem to be significant disagreement among the exposures with different rotation angles (using the ROT header keyword). The large offsets found on the top and bottom edges of the FPA appear in both exposures.

After further study of this image, I realized that I'd neglected to correctly account for the OTA/FPA parity issues that result in flips of coordinate systems. I've replotted the vector diagram with these parity issues resolved:

I suspect that this was the main issue preventing a useful ghost model being constructed from this data before.

May 4, 2012

Ghost center study

After studying a set of images and manually correcting the ghost position, I've arrived at the following vector chart. It's unclear how much of the shift is due to each of: incorrect radial model solution; tilts of individual OTAs altering the positions of ghosts reflected from that OTA; telescope orientation specific errors in the model.

April 24, 2012

This image shows the change in the 5-sigma false detection map between the new reference catalog (which is not correctly calculating the ghost image models) and the synthetic catalog (which is putting down some ghost masks). The marginal improvement illustrates that the ghost model is clearly not adequately masking these ghost images.

April 23, 2012

No sigma cut

5 sigma cut

April 20, 2012

Noisemap Boost Footprint Test

comparison:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | SDSS Match N=1 | SDSS Match N=12 | 5-sigma N=1 | 5-sigma N=12 |
| noisemap/SDSS | 153892 | 367454 | 126529 | 362838 | 65470 | 351742 | 10166 | 345433 | - | - |
| 5sigma_clip | 180204 | 416785 | 149269 | 411294 | 76966 | 398206 | - | - | 25492 | 391296 |
| boosttest | 150485 | 410869 | 120572 | 406283 | 57458 | 392677 | - | - | 22359 | 386429 |

I do not currently have an explanation for why boosting the noise has not reduced the false detection rate as much as expected. It is clear that we have improved the worst offenders, as this OTA67 plot shows:

comparison:

April 18, 2012

Proposed Noisemap Boost calculations

The method used to calculate the proposed boosts is as follows:

  1. Run a "standard" chip processing with photometry on a set of BIAS exposures (N=10 for this case). All detected objects are presumed to be false, as a BIAS has no light on it. (Flats were not applied, as the OPEN filter doesn't have a flat; the PSF for photometry was taken from a science exposure.)
  2. Collate all detected sources, and count the number of sources above some threshold: 1 / MAG_PSF_ERR > sigma_threshold.
  3. Assuming all of these detections represent a mis-estimate of the background noise, we can derive the effective threshold as sigma_effective = sqrt(2) * inverfc( (2 * N_false) / (dX * dY * Nexp / A_psf) ), where N_false is the number of detections above the threshold; dX and dY are the dimensions of the area in which the false detections were counted; Nexp is the number of BIAS exposures measured; and A_psf is an estimate of the PSF footprint area, chosen to be ~16 pixels. (See the sketch after this list.)
  4. Calculate the boost factor: B = sigma_threshold / sigma_effective
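A minimal sketch of steps 2-4 (assuming a 5-sigma threshold; the function and variable names are illustrative, not from the IPP code):

```python
import numpy as np
from scipy.special import erfcinv

def noise_boost(n_false, dx, dy, n_exp, a_psf=16.0, sigma_threshold=5.0):
    """Boost factor for one region, following steps 2-4 above.

    n_false: detections with 1/MAG_PSF_ERR > sigma_threshold found in a
             dx x dy region, summed over n_exp BIAS exposures.
    a_psf:   approximate PSF footprint area in pixels (~16).
    """
    n_indep = dx * dy * n_exp / a_psf        # independent PSF-sized samples
    sigma_effective = np.sqrt(2.0) * erfcinv(2.0 * n_false / n_indep)
    return sigma_threshold / sigma_effective

# One false positive in a 20x600 stripe over 10 bias frames:
print(noise_boost(1, 20, 600, 10))  # ~1.37, the quantization step noted below
```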

This plot shows the boost calculation for three OTAs: one that is known to have significant false positives (OTA67) and two that have much lower false positive rates (OTA16 and OTA31). The calculation was done on 20x600 stripes oriented along the chip y-axis (to map the variation of the noise along the x-axis). The large plume visible for OTA67 shows that many of these stripes have false positives and would require a boost for the effective sigma to match the desired threshold. The near-absence of such a plume for OTA16/OTA31 shows that these OTAs do indeed have less of a false positive problem.

One concern is the obvious quantization of the boost: a single false positive in this data implies a boost factor of 1.37. Adding more input bias frames would reduce this quantization, and further calculations are running to resolve this problem.

April 17, 2012

Flags and thresholds

These are the current checks and cuts I'm using to exclude detections from consideration based on psphot parameters:

| Name | Check | Result |
| IS_POOR | bitand(FLAGS, 0xe0440130) | Excluded as a bad detection if IS_POOR != 0 |
| IS_BAD | bitand(FLAGS, 0x1003bc88) | Excluded as a bad detection if IS_BAD != 0 |
| IS_QF_PERFECT | PSF_QF_PERFECT > 0.85 | Excluded as a bad detection if IS_QF_PERFECT != 1 |
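Expressed in code (a sketch; FLAGS and PSF_QF_PERFECT are the psphot catalog quantities referenced in the table):

```python
def is_poor(flags):
    return (flags & 0xe0440130) != 0      # excluded as bad if nonzero

def is_bad(flags):
    return (flags & 0x1003bc88) != 0      # excluded as bad if nonzero

def is_qf_perfect(psf_qf_perfect):
    return psf_qf_perfect > 0.85          # excluded as bad if this fails
```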

April 13, 2012

This plot shows the XY67 slope as an estimate of dark model quality as done below, using the newest proposed set of darks. This seems to largely mitigate the issues we had before, and suggests these new dark models better match the detector than the old models.

April 9, 2012

Easter weekend questions

  1. From the fixedgain_improvement.png plot, most cells appear to be in XY67; are the other cells from a single chip? No, although the cells do seem to be clustered in some bad OTAs. Taking a minimum improvement of 10 as the cutoff:
| OTA | XY17 | XY27 | XY15 | XY55 | XY34 | XY06 | XY11 | XY23 | XY24 | XY33 | XY43 | XY54 | XY62 | XY73 | XY74 |
| Ncells | 9 | 7 | 6 | 5 | 4 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
  2. I guess the points in that figure with Gain/1.1 >> 1 and delta of 0.0 are otherwise bad, masked, and thus have no detections, right? Correct. Examples are OTA76:xy66, OTA17:xy50, etc.
  3. There is a single odd cell at Gain/1.1 ~ 80 and delta ~ 200 -- who is that? OTA73:xy33. This appears to be a bad cell in general, with a defect noticeable in noisemap/dark/science images. This cell might be better off fully masked.
  4. 2D plot of XY67 to confirm our expectations (that the terrible cells are no longer so terrible -- if we still see the same xy00 sticking out badly, despite losing 500 detections, then we are only getting a tweak on this chip as well as the others). There are clear improvements, but xy00 is still the worst, and the position dependence is still visible (see question 6 below).
  5. 2D plot of the full mosaic false detections, showing only the >5-sigma detections (and maybe a plot with the SDSS matches removed, though another plot showing the locations of the SDSS matches would be needed to show the overlap). This is somewhat difficult to interpret, but the significant unmatched SDSS sources seem to cluster tightly around very bad cells (with gradient effects visible), bright stars, and satellite trails.
  6. One of the 1D plots, to see if we still have the position-dependent problem (I'm guessing yes). Yes, fixing the gain to a constant value still has position dependence issues.
  7. For the noisemap data, can you make a CDF of the N_1 detections vs 1/MAG_PSF_ERR? If we decide to reduce the false positives by tossing out objects below some cut, that plot would tell us the S/N hit we will take to reach a given level of confidence.
  8. Finally, what are the N_1 vs N_12 numbers only for the region of SDSS overlap, if you exclude <5-sigma detections and SDSS matches? That really is our current bottom-line mean false positive rate (though if the SDSS overlap avoids a few really bad chips, it could be somewhat undercounted).

April 5, 2012

Detrend plan

  1. Construct new "A" dark for dates 2010-01-23 - 2011-05-01. (det_id 862)
  2. Register "B" dark for B-mode dates between 2010-01-23 - 2011-05-01 (table attached below; det_id 852).
  3. Construct three new darks: 2011-05-01 - 2011-08-01 (det_id 856) ; 2011-08-01 - 2011-11-01 (det_id 857); 2011-11-01 - 2012-04-01 (det_id 858). These will minimize the dateobs trend observed over this range.
  4. Construct noisemaps for 2009-01-01 - 2010-09-01 (det_id 859); 2010-09-01 - 2011-05-01 (det_id 860) to match the variance shifts shown below.

Fixed Gain test

I reran the g-filter footprint, using all previous improvements with the addition of fixing the gain for every cell of the entire detector to a single value of 1.1. The same processing and analysis was done, generating a higher number of false positives than the previous (noisemap) reduction:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 |
| fixedgain | 196192 | 419942 | 166336 | 415331 | 87839 | 402359 |

However, a plot of the change in false positives for each detector cell as a function of the gain ratio (header-quoted gain / chosen fixed gain of 1.1) shows that this increase is largely due to a number of cells going from a quoted gain of less than 1.1 up to this fixed gain. If we select only those cells that have quoted gains larger than 1.1, we generally see a dramatic improvement in the false detection rate, with only a minor change in the number of 12-detection measurements (red points).

This suggests that using any quoted gain below a threshold, and clipping all others at that threshold, may improve the false positive rate.
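The proposed change amounts to clipping the quoted gain (a sketch; 1.1 is the fixed value used in this test):

```python
def clipped_gain(header_gain, threshold=1.1):
    # Use the quoted gain when it is below the threshold;
    # clip everything above it down to the threshold.
    return min(header_gain, threshold)
```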

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 |
| Simulated gain clipped | - | - | - | - | 72592 | 397913 |

Simulating this with the noisemap/fixed-gain test data gives only a negligible change. This is likely because only 346/3840 cells have gains higher than 1.1. These cells account for 14831/76966 single-detection sources in the noisemap data, and 11456 sources in the fixed-gain data. Therefore, even though these cells do have a larger-than-average false detection rate, the total rate is not significantly improved by correcting the gains.

Glycol check

Extracting the glycol status via the "GLYSTAT" header keyword and plotting this value against the A/B mode slopes does not reveal any obvious connection between the glycol status and the dark mode chosen.

SMF 5-sigma cutoff

Using the noisemap data, I added a further cut that demands that we only believe measurements with MAG_PSF_ERR < 0.2, which enforces a 5-sigma detection threshold on the measurements:

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | 5-sigma N=1 | 5-sigma N=12 |
| 5sigma_clip | 180204 | 416785 | 149269 | 411294 | 76966 | 398206 | 25492 | 391296 |
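For reference, the equivalence between the magnitude-error cut and a significance threshold follows from the usual small-error relation (my arithmetic, not from the page); elsewhere on this page 1/MAG_PSF_ERR is used directly as the significance proxy, under which MAG_PSF_ERR < 0.2 corresponds exactly to 5-sigma:

```python
import math

# sigma_m ~= (2.5 / ln 10) / (S/N) ~= 1.0857 / (S/N)
snr = 5.0
print(2.5 / math.log(10) / snr)  # ~0.217 mag for a formal 5-sigma detection
```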

April 4, 2012

Dark model

To determine the date ranges we should use the A/B mode darks, I processed OTA67 darks over the entire range of usable dates. All darks were selected to be 30s exposures that were either the 2nd or 7th of that night's sequence. I then measured the slopes in the residual image as done previously, and used this slope as the proxy to determine the dark quality and which dark is the best fit. The following plot summarizes this data, along with a set of bars indicating the current time range of various active darks.

The red points show the slopes allowing detselect to choose the current appropriate model. As we have no new models valid for data prior to 2010-01-23 (as the headers prior to that date have a different keyword for the detector temperature), these data are only fit with their default models. The large slopes and scatter in this data suggest new darks would be useful for this older data.

For data taken after 2011-05-01, the A-mode dark is the current model, and has been previously proven to match the data better than previous models. This data is included for completeness.

The intermediate period (2010-01-23 to 2011-05-01) is currently fit with det_id 845, which is the "average" dark that was constructed without knowledge of the A and B modes. This data does separate reasonably well into the two modes, with the B-mode dark minimizing the slopes for those dates that seem to be in that mode. The A-mode dark does not appear to be a significant improvement over det_id 845, making the choice of which dark to use for these dates unclear.

Before constructing any new darks, I am going to register a set of B-mode darks that have date ranges that cover only that data that is clearly in the B-mode, such that slope_B < slope_845.

Noisemap

I initially thought that the break in the detector noise values was caused by the dark model being applied, but that does not seem to be the case. Therefore, something seems to have happened to suddenly shift the noise on certain cells down relative to where they were before ~2011-05-01.

March 30, 2012

SDSS match comparison

* These are the results of czw.SAStest 20120330 = SAS04 (PSPS), with a false positive rate of 65470 / 351742 = 0.186 (kcc).

Using Nigel's SDSS catalog, I matched the noisemap results against the SDSS Stripe 82 data, and did the same detection histograms as before.

The numbers are smaller than the previous table, as Stripe 82 only extends to +/- 1.25 degrees declination.

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 | SDSS Match N=1 | SDSS Match N=12 |
| noisemap | 153892 | 367454 | 126529 | 362838 | 65470 | 351742 | 10166 | 345433 |

March 27, 2012

Normalized single-detection x-position CDF plots for all exposures used in the czw.footprint.noisemap test:

This still shows the ramp effect caused by position dependence in the false positives. I've confirmed that the noisemap was used, and an examination of the variance map shows the added variance from the noisemap, so the detrend creation and the recipes appear to be correct.

Comparing the detection histograms for the various reductions (dropping the oldipp reduction, as it appears to have different flag definitions than later reductions):

The noisemap does remove a significant number of false detections, as shown in the following table, although even with the best exclusions, there is still a ~20% contamination rate.

| Reduction | All N=1 | All N=12 | Good flags N=1 | Good flags N=12 | QF Perfect N=1 | QF Perfect N=12 |
| oldipp | 299374 | 420635 | 38 | 30 | 194169 (ignore flags) | 404422 (ignore flags) |
| czwreduction | 298849 | 420891 | 264070 | 414845 | 165624 | 401662 |
| continuity | 271480 | 417811 | 237525 | 411810 | 142463 | 398707 |
| new dark | 223680 | 422322 | 191487 | 416508 | 98467 | 403106 |
| noisemap | 180204 | 416785 | 149269 | 411294 | 76966 | 398206 |

March 22, 2012

Normalized single-detection x-position CDF plots for all exposures used in the czw.footprint.dark test:

Example NOISEMAP detproc image showing the gradients in observed noise as a function of OTA/cell/position. The cells that show jumps in the above CDFs also have gradients in the image noise as shown below.

March 21, 2012

As the observed image noise does not match the noise model as stored in the variance image, a noisemap detrend is being constructed to include the correct position dependent noise. This noisemap measures the local pixel sigma on a 20x20 grid of positions across each cell. Bias frames are used for this measurement, as they do not have any sky or dark signal that would influence this measurement. This noisemap is then used in the processing of science images instead of the read noise, as the direct measurement includes any read noise that would be observed.
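A minimal sketch of the per-cell measurement (the 20x20 grid of local sigma estimates from bias frames described above; the clipping parameters here are illustrative):

```python
import numpy as np

def cell_noisemap(bias_cell, ngrid=20, nsig=3.0, niter=3):
    """Local pixel sigma on an ngrid x ngrid grid across one cell.

    bias_cell: 2D array for one OTA cell from a bias frame.
    Returns an (ngrid, ngrid) map of clipped standard deviations.
    """
    ny, nx = bias_cell.shape
    out = np.zeros((ngrid, ngrid))
    for j in range(ngrid):
        for i in range(ngrid):
            sub = bias_cell[j * ny // ngrid:(j + 1) * ny // ngrid,
                            i * nx // ngrid:(i + 1) * nx // ngrid].ravel()
            for _ in range(niter):          # simple iterative sigma clipping
                m, s = sub.mean(), sub.std()
                sub = sub[np.abs(sub - m) < nsig * s]
            out[j, i] = sub.std()
    return out
```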

March 20, 2012

Variance comparison

To investigate why we seem to have position dependent false positive rates, with biases to one side of the detector cells over the other (as shown below), I made profiles of the image variance in two ways. First, I calculated a mean profile of the chip stage variance image. Second, I calculated an "observed" variance profile by calculating the clipped standard deviation of the chip science image. The results are shown below:

The second image shows the ratio of the "observed" variance to the variance-image variance, along with a CDF, as a function of x-position, of the singly detected objects present in the box considered for the profile. It appears that we find more of these objects (which are believed to be largely false detections) in areas where the variance image underestimates the true image variance. This can be seen in an example cell from OTA67 (from which the above plots are taken): as the x-coordinate increases, the row-by-row variations increase as well, creating a higher-than-expected variance.
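The two profiles can be compared with something like this (a sketch; the sigma clipping and column-wise averaging choices are mine):

```python
import numpy as np

def variance_ratio_profile(science, variance, nsig=3.0):
    """Column-by-column ratio of observed to model variance.

    science, variance: 2D arrays over the box being profiled.
    """
    obs = np.zeros(science.shape[1])
    for x in range(science.shape[1]):
        col = science[:, x]
        for _ in range(3):                  # clipped standard deviation
            m, s = col.mean(), col.std()
            col = col[np.abs(col - m) < nsig * s]
        obs[x] = col.std() ** 2
    return obs / variance.mean(axis=0)      # >1 where the model underestimates
```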

March 19, 2012

False detection study

March 8, 2012

The new dark has finished, and although there does seem to be a weak trend in the residual slopes with dateobs, these slopes have been greatly decreased relative to the previous dark iteration:

New dark footprint results

I've reprocessed the g-filter footprint data with the new dark (including the continuity correction), and it is currently stacking (label czw.footprint.dark, data_group = czw.20120308.footprint.g). The individual frames appear to have a smoother background, and this is reflected in a reduced number of orphan detections. Directly comparing to the previous footprint reductions as before:

shows that the new dark model significantly decreases the number of singly detected objects.

Footprint stack comparison

Comparing the footprint stacked images of skycell.1315.071 against the previous reduction (which enabled the continuity correction) shows that the cell-level gradients have largely been eliminated.

March 7, 2012

I've reduced a series of dark exposures taken since February 1, 2011, as this was when the current dark model was constructed. I've fit the slope in cell xy10 of OTA67 as a proxy measure of dark quality, as this cell shows an introduced gradient that appears to contribute to false positives. Shown below is a profile cut across this cell and the others in that OTA cell row (cells xy10-xy16). The code used to extract the profile does not normalize by the width, so the profile residual values and slopes plotted below need to be divided by 300 to convert to counts. The two profiles shown are from consecutive nights at the beginning of the date range considered, and show different residual patterns. Broadly, we can group nights whose gradients increase with x-pixel on a cell as Mode A, and those with decreasing gradients as Mode B.
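The slope proxy can be extracted with something like this (a sketch; the divide-by-300 width normalization mentioned above is applied here to return counts per pixel):

```python
import numpy as np

def cell_slope(residual_cell):
    """Slope of the x-profile of one cell's dark residual.

    residual_cell: 2D residual array covering the profile box.
    """
    profile = residual_cell.sum(axis=0)     # unnormalized, as plotted
    x = np.arange(profile.size)
    # Divide by the 300-pixel box width to convert the slope to counts/pixel.
    return np.polyfit(x, profile, 1)[0] / 300.0
```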

The slopes of the first cell were checked for four data samples:

  1. A random sampling of dark frames from the night long dark series (night of UTC 2012-03-07).
  2. A selection of 30s dark frames from each night, chosen from the second dark sequences of the morning and evening.
  3. A random sampling of all parameter space.
  4. All darks taken on MJD 5591 and 5592, to investigate inter-night changes.

It appears from these data that prior to about 2011-05-01, the camera (or at least OTA67) flipped between the two dark modes without any obvious pattern. The data from MJD 5591 and 5592 show that even adjacent nights can have different patterns. However, after 2011-05-01, the camera settled into what appears to be a single mode, which has persisted since.

Based on this observation, I'm currently constructing a new dark master from data taken after 2011-05-01, which should fit all exposures after this point. However, there is no clear idea of why this mode has become dominant. A voltage change was made around this time to solve the STS Astrometry bug, but OTA67 was not one of the devices changed.

March 5, 2012

As discussed in the March 2 status, I've been looking into how the dark model may be introducing the gradients observed in the science data. To see what (if any) functional dependence these gradients have, I selected dark frames used in the construction of the current dark model and calculated the detproc (overscan corrected, no dark applied) and detresid (overscan and current dark model applied) images. I used only XY67 in this study, as it has clear science-image gradients and is a useful test case. For each variable that could influence the dark, I selected two exposures that spanned the range while attempting to keep the remaining variables constant. Here are the profile plots for these tests (the profile is the same as that used in the previous study of XY67: a large 300-pixel box covering the row of cells xy10-xy17):

dateobs

| exp_name | dateobs | exp_time | ccd_temp | detproc | detresid |
| o5605g0022d | 2011-02-13 | 10 | -78.455 | | |
| o5743g0645d | 2011-07-01 | 10 | -79.6983 | | |

exp_time

| exp_name | dateobs | exp_time | ccd_temp | detproc | detresid |
| o5641g0690d | 2011-03-21 | 0.001 | -87.2183 | | |
| o5638g0035d | 2011-03-18 | 300 | -87.0017 | | |

ccd_temp

| exp_name | dateobs | exp_time | ccd_temp | detproc | detresid |
| o5630g0494d | 2011-03-10 | 30 | -88.4883 | | |
| o5736g0606d | 2011-06-24 | 30 | -77.745 | | |

ccd_temp2

| exp_name | dateobs | exp_time | ccd_temp | detproc | detresid |
| o5654g0593d | 2011-04-03 | 10 | -86.765 | | |
| o5612g0382d | 2011-02-20 | 10 | -72.1067 | | |

30s test

| exp_name | dateobs | exp_time | ccd_temp | detproc | detresid |
| o5666g0013d | 2011-04-15 | 30 | -84.43 | | |
| o5666g0645d | 2011-04-15 | 30 | -84.37 | | |

Results

The above profiles suggested that 30s darks were somehow different from other exposure times. This seems to be an unfortunate coincidence of the darks selected: processing a set of evening darks (of all exposure times) from 2011-04-15 shows all downward slopes, and all from 2011-04-03 show upward slopes. Given this behavior, I selected a set of darks from various dates, processed them, and extracted the slope in the first cell. The following plots show these slopes. The exposure time was chosen to be the same for all of these exposures, and their position in the night was the same as well (these are the second 30s evening darks taken on each date between the beginning and end of the current dark model inputs).

I've requested that a sequence of darks be taken to more finely cover the range of exposure times, and will use this data to develop a more complete dark model that will hopefully not introduce any residual gradients.
