IPP Progress Report for the week 2011.04.04 - 2011.04.08


Eugene Magnier

I spent this week working on the 3pi reference photometry database. I implemented DVO scripts to measure the quality of the photometric calibration using the location of the stellar locus in two-color plots. I extracted a stellar locus from the MD fields, so this test essentially measures discrepancies between the 3pi dataset and the MD fields. The stellar locus measurement does not completely constrain the system zero points: a single free parameter may remain, and the accuracy is only ~2-3% for the g-r color (which is constrained by the turn-up at late spectral types). However, it gives an independent check on the photometry and lets me catch problems with the relative photometry analysis.
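The locus comparison described above can be sketched roughly as follows: bin the test sample in g-r, measure the median r-i per bin, and compare against a reference locus. This is a hypothetical illustration (the function name `locus_offset` and the binning scheme are my own assumptions, not the actual DVO script):

```python
import numpy as np

def locus_offset(gr, ri, ref_gr, ref_ri, nbins=20):
    """Compare the stellar locus of a test sample against a reference
    locus (e.g. one extracted from the MD fields) by measuring the
    median r-i offset in bins of g-r.  ref_gr must be sorted."""
    bins = np.linspace(ref_gr.min(), ref_gr.max(), nbins + 1)
    # reference r-i evaluated at the bin centers
    ref_med = np.interp(0.5 * (bins[:-1] + bins[1:]), ref_gr, ref_ri)
    offsets = []
    for lo, hi, ref in zip(bins[:-1], bins[1:], ref_med):
        sel = (gr >= lo) & (gr < hi)
        if sel.sum() >= 10:          # require enough stars per bin
            offsets.append(np.median(ri[sel]) - ref)
    return np.median(offsets)        # overall zero-point-like offset
```

A non-zero return value would indicate a systematic color offset between the test sample and the reference fields, up to the single free parameter noted above.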

I quickly learned that parts of the database had very poor relative photometry results: setting aside the zero points, the relative photometry calculation was not producing consistent values for stars in certain areas. I tracked this down to a bookkeeping problem in associating the images appropriate to the analysis with the region of interest. With this fixed, the relative photometry results were much more sensible. However, the stellar locus test showed that the derived photometry was still not in good agreement with the MD fields. Looking into this, I realized that the relative photometry analysis was accepting negative clouds as a valid solution. I updated the code to force images to have only positive clouds (within a small error margin of 2.5%), propagating the effect back to the images with real clouds. This significantly cleaned up the photometry across large areas.
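The positivity constraint can be illustrated with a minimal sketch: if any image is fit with a significantly negative "cloud" extinction, the clear-sky baseline is redefined so the correction is propagated onto the genuinely cloudy images. This is a hypothetical helper, not the actual relphot code; the function name and the baseline-shift strategy are my own assumptions:

```python
import numpy as np

def clamp_clouds(cloud_mags, margin=0.025):
    """Force per-image cloud extinctions (in magnitudes) to be
    non-negative within a small error margin.  If the most negative
    value is below -margin, shift the whole set so it sits at -margin,
    moving the correction onto the images with real clouds, then clip
    any small residual negatives to zero."""
    cloud = np.asarray(cloud_mags, float)
    low = cloud.min()
    if low < -margin:
        cloud = cloud - (low + margin)   # redefine the clear-sky baseline
    return np.clip(cloud, 0.0, None)
```

For example, extinctions of [-0.1, 0.0, 0.3] would be shifted and clipped to [0.0, 0.075, 0.375]: no image is "brighter than clear", and the excess is carried by the cloudy images.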

The one drawback is that the analysis does a better job of tying down the cloud levels if a particular run has enough images with photometric data. This drives me to use large portions of the sky, rather than looping over small areas. To enable this, I have made three changes to relphot. First, I allowed a user-set limit on the density of stars used for any region of the sky: this prevents the Galactic Plane regions from dominating the memory and forcing hundreds of GB of data into active memory. Second, I adjusted the code to use an internal structure for detections and average objects that carries only a limited subset of the parameters stored in the database. This lets me nearly triple the number of stars used in the analysis while keeping the memory footprint manageable. Finally, I updated the code so the relative photometry analysis can be performed on multiple filters in one pass (previously, only a single average filter system could be processed at a time). When processing the entire sky, the analysis spends most of its time on I/O; since it has to read the entire database for each filter, handling multiple filters per pass means we pay that cost only once.
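The first change, the density cap, amounts to subsampling stars in over-dense regions. A minimal sketch of the idea (the function name `cap_density` and the random-subsampling approach are assumptions for illustration, not the relphot implementation):

```python
import numpy as np

def cap_density(star_ids, region_ids, max_per_region, seed=0):
    """Randomly subsample stars so that no sky region contributes more
    than max_per_region stars; otherwise dense Galactic Plane regions
    dominate the memory footprint of an all-sky analysis."""
    rng = np.random.default_rng(seed)
    region_ids = np.asarray(region_ids)
    keep = []
    for region in np.unique(region_ids):
        idx = np.where(region_ids == region)[0]
        if len(idx) > max_per_region:
            idx = rng.choice(idx, max_per_region, replace=False)
        keep.extend(star_ids[i] for i in idx)
    return keep
```

The cap trades a little statistical weight in crowded fields for a bounded, predictable memory footprint across the whole sky.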

Serge Chastel

Heather Flewelling

  • Continued merging minidvodbs; as of Monday morning, 140 of 185 were done.
  • addRun modifications are done, but not fully tested.
  • MD10 and MD09 stacks (for JT) are done.
  • Started the MD04 dvodb for Roy.

Roy Henderson

  • Some design, discussion, and a little coding toward integrating Daniel's query builder into PSVO. Should be ready for Boston.

  • Lots of development on ippToPsps, culminating in the loading to PSPS of the first-ever stack format batch, albeit with bogus object IDs. The road to this included:
    • now using local, rather than remote, MySQL database to enable a huge speed increase when loading tables from FITS
    • wrote a method to report which columns of which tables are fully or partially NULL, and to replace those NULLs with the PSPS '-999' pseudo-NULL value
    • many more stack fields populated after discussions with Jim/Gene
    • implemented XML config file for use with Jython code: includes all Db parameters, paths, DVO and datastore settings etc
    • wrote a python class to handle gpc1 queries, including nebulous calls to return file paths
    • 'DVO' table in database to store all IDs (objID etc). Filled with fake values and used to test load
    • jython code now interfacing with ippToPsps db, getting new batch ID, saving manifest files and publishing to datastore
    • worked with Thomas and Sue loading the stack to the ODM. A few problems encountered, but it loaded eventually
    • started work on Jython version of detection batches, though this is low priority
  • Meetings and discussions about IPP processing required before Boston meeting to ensure a good PSPS demo.
  • Worked through a list of exposures missing from PSPS as reported by Dave Monet. Figured out a good reason for most, if not all.
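The NULL-reporting step in the list above could look something like the following sketch. It uses sqlite3 for self-containment rather than the local MySQL database the loader actually talks to, and the function name `null_report` is my own; treat it as an illustration of the idea only:

```python
import sqlite3

SENTINEL = -999   # the PSPS pseudo-NULL value

def null_report(conn, table, columns):
    """Report, for each column, whether it is fully, partially, or not
    at all NULL, then replace the NULLs with the PSPS sentinel value."""
    cur = conn.cursor()
    total = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    report = {}
    for col in columns:
        n = cur.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
        report[col] = "full" if n == total else ("partial" if n else "none")
        cur.execute(
            f"UPDATE {table} SET {col} = ? WHERE {col} IS NULL", (SENTINEL,)
        )
    conn.commit()
    return report
```

The report half is what makes the step useful beyond the substitution itself: fully-NULL columns flag fields the FITS-to-table mapping never populated.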

Bill Sweeney

  • Completed the masktest analysis using the IPP trunk. The new code base is ready to go.
  • Created a new production tag. Testing it pointed out some source files that hadn't been updated (my fault, as it turned out).
  • Finally finished running the ThreePi.rerun data through the system (several hundred exposures of interest to Bertrand Goldman that were not processed by nightly science for various reasons). This was made more difficult than it should have been by some errors I made when originally queuing the processing.
  • Copied the 2.7 TB three pi database to two 2 TB USB disk drives for Dave Monet. This simple task was complicated by the fact that the dvo database had many directories with more than 32768 files in them, which doesn't work with NTFS. Some scripting took care of this.
  • Modified the script that performs the static sky analysis to support analyzing a single input. This involved a miserable afternoon investigating a bug in the perl version of the metadata config file parser. I never found a proper fix, but I did identify the root cause and a workaround.
  • Modified the skycells program to optionally put East to the left. Used it to create the new ThreePi.V3 tessellation, which we plan to use for 3pi processing.
  • Started reprocessing the STS data with the new codebase, tessellation and psastro recipe.
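The directory-splitting workaround mentioned above can be sketched as a short script that moves files into numbered subdirectories, each safely under the per-directory limit. The function name, chunk size, and `partNNN` naming are assumptions for illustration, not the actual script:

```python
import os
import shutil

MAX_FILES = 32768  # per-directory limit hit when copying to the USB drives

def split_directory(path, chunk=30000):
    """Move the files in an over-full directory into numbered
    subdirectories (part000, part001, ...) so that each holds fewer
    than MAX_FILES entries."""
    files = sorted(f for f in os.listdir(path)
                   if os.path.isfile(os.path.join(path, f)))
    for i in range(0, len(files), chunk):
        sub = os.path.join(path, f"part{i // chunk:03d}")
        os.makedirs(sub, exist_ok=True)
        for name in files[i:i + chunk]:
            shutil.move(os.path.join(path, name), os.path.join(sub, name))
```

Sorting the listing first keeps the mapping from file to subdirectory deterministic, which matters if the copy has to be resumed or verified later.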

Chris Waters

  • Began work on fixing the detectability server.
  • Created updated masks for OTA47 and OTA06, to correct specific camera changes.
  • Worked on ppStack to allow quick stacks to be made without convolving the inputs. Quickstacks do not seem to reject pixels as efficiently, so I spent time trying to improve that. Built a test quickstack for the reprocessing test.
  • Reprocessing test: processed a set of exposures using both warp/quickstack diffs and TTI-pair diffs. After magic and destreaking, stacked the input warps from both sets into final (convolved) stacks and compared coverage and stack quality.