IPP Progress Report for the week 2010.11.15 - 2010.11.19


Eugene Magnier

I spent most of the week continuing work on the formerly-corrupted ThreePi database. I finished repairing the database, then merged in the smaller database built from the images taken since the corruption occurred (up through the end of October). I copied this database back to the operational location so we can continue forward with it. I am going to use this database as the reference astrometry & photometry catalog for the Grand Reprocessing. To that end, I merged in the full 2MASS database, which provides astrometric ties across the sky so that even the image edges are constrained. I also merged in some of the pre-DemoMonth data to fill out the coverage somewhat.

The database manipulation above is now getting to be somewhat slow: the full database is 2TB, and any operation that requires a full database scan (re-indexing with addstar -resort; average updates with relphot -average; large-scale dvomerges) can take many hours. I spent part of last week adding multithreading capability to addstar -resort and dvomerge. This speeds up the processing because the threads can share the CPU-intensive portion of the work and interleave the disk-I/O-intensive portions.
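
The threading follows the standard worker-pool pattern. As a rough illustration (a minimal sketch with hypothetical function names and a trivial queue, not the actual addstar/dvomerge code), a pool of pthreads pulls per-file work items from a shared counter, so one thread's CPU-bound re-sort can overlap another thread's disk reads:

    /* Minimal worker-pool sketch (illustrative names, not the real
     * addstar/dvomerge internals): NTHREADS workers pull file indices
     * from a shared counter, so one thread's CPU-bound re-sort can
     * overlap another thread's disk I/O. */
    #include <pthread.h>

    #define NTHREADS 4
    #define NFILES   16

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int next_file = 0;             /* next catalog file to process */

    static void load_catalog(int i)   { (void)i; /* disk-I/O-bound read  */ }
    static void resort_catalog(int i) { (void)i; /* CPU-bound re-index   */ }
    static void write_catalog(int i)  { (void)i; /* disk-I/O-bound write */ }

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            int i = (next_file < NFILES) ? next_file++ : -1;
            pthread_mutex_unlock(&lock);
            if (i < 0)
                return NULL;              /* work queue drained */
            load_catalog(i);              /* I/O phase */
            resort_catalog(i);            /* CPU phase */
            write_catalog(i);             /* I/O phase */
        }
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        for (int t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, NULL);
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);
        return 0;
    }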

I also added compression support to the IPP metadata file I/O. These files are getting to be a significant portion of our storage, and they compress well.
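
For file compression of this sort, zlib's stdio-like gz* interface is the natural fit; the sketch below assumes that approach (the file name and contents are made up, and this is not the actual IPP metadata I/O layer). A nice property of gzopen is that it also reads plain uncompressed files transparently, which eases migration of existing metadata:

    /* Sketch: zlib's gz* calls give a stdio-like interface to gzip
     * files; gzopen also reads plain uncompressed files transparently.
     * File name and contents here are invented. */
    #include <stdio.h>
    #include <zlib.h>

    int main(void)
    {
        /* write a compressed key/value metadata-style file */
        gzFile out = gzopen("example.meta.gz", "wb");
        if (!out) return 1;
        gzprintf(out, "EXPNAME  o5520g0123o\nAIRMASS  1.23\n");
        gzclose(out);

        /* read it back line by line */
        gzFile in = gzopen("example.meta.gz", "rb");
        if (!in) return 1;
        char line[256];
        while (gzgets(in, line, sizeof line))
            fputs(line, stdout);
        gzclose(in);
        return 0;
    }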

Serge Chastel

  • Tessellation: new best solution found for 4432 boresights (+0.3% coverage for +0.2% more boresights)
  • IPP Czar on Wed/Thu
  • MOPS Czar (mainly MD fields)
  • Added OC133.OSS to processing with magic
  • Talked with Gavin concerning a local MySQL upgrade: bad idea because it would likely require a complete cluster upgrade
  • XFS defrag run on ipp022: fragmentation is now 0.7% instead of about 30%.

Heather Flewelling

  • ifaps1 work
    • checked out the current op tag and the current trunk and compiled for ifaps1 users (the default is the current op tag)
    • cleaned up the ifaps1 machine to free up disk space
    • answered questions from gaidos, kaller, and dixon.
    • transferring the current md04 refstacks to ifaps1
  • manoa - checked out the current op tag and the current trunk and compiled for the ipp user.
  • addstar
    • edited the db so that addstar is ready to run once the ThreePi db is fixed
  • skyprobe
    • relphot - continued investigating why relphot does not work on skyprobe; investigations with dvo make me suspect astrometry issues.
    • psastro - sigma_ra is high for skyprobe. Tried a number of things to improve this; so far the easiest and most effective fix is simply increasing the number of iterations from 3 to 10 (see the sketch after this list). For the worst exposure that got an astrometry solution, sigma_ra went from ~4 to ~0.25. About a quarter did not get an astrometry solution at all, and I am still investigating those.
      • psastro's visual option didn't work for me.
  • czar
    • czar friday
    • cleaned up some of the sticky bits using Bill's checkfit and runwarp scripts.
    • investigated the burntool failures on the SAS reprocessing label - they were caused by 0-byte burntool tables.
    • showed Serge how to manually queue up a chiprun
    • queued up the oss diffs using a script I wrote. At Serge's request I committed these scripts to tools/heather so others can do something similar.
    • started copying the oss reprocessed warps and chips to /data/ipp005.0/heather/chips.oss133 and warps.oss133. These are the unmagicked ones; the diffs will need to be transferred once they are completed.
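
Regarding the psastro iteration count above: the generic iterate-fit-and-clip pattern sketched below shows why a handful of extra iterations can shrink the sigma estimate. This is a toy, not psastro's actual code: the "fit" is just a mean offset and the residuals are invented. Each pass refits with the previous pass's outliers rejected, and with too few passes the surviving outliers still inflate sigma_ra:

    /* Toy iterate-and-clip loop: refit, estimate sigma, reject
     * outliers, repeat.  With only 3 passes the surviving outliers
     * still inflate sigma; by ~10 passes it has settled. */
    #include <math.h>
    #include <stdio.h>

    #define NSTAR 8
    #define KCLIP 2.0

    int main(void)
    {
        /* RA residuals (arbitrary units): six good matches, two bad */
        double dra[NSTAR] = { 0.1, -0.2, 0.15, -0.1, 0.05, -0.15, 4.0, -3.5 };
        int use[NSTAR];
        for (int i = 0; i < NSTAR; i++) use[i] = 1;

        for (int it = 0; it < 10; it++) {       /* 10 iterations vs. 3 */
            double mean = 0.0, ss = 0.0;
            int m = 0;
            for (int i = 0; i < NSTAR; i++)     /* "fit": mean offset  */
                if (use[i]) { mean += dra[i]; m++; }
            mean /= m;
            for (int i = 0; i < NSTAR; i++)
                if (use[i]) ss += (dra[i] - mean) * (dra[i] - mean);
            double sigma = sqrt(ss / m);
            printf("iter %2d: sigma_ra = %.3f (%d stars)\n", it + 1, sigma, m);
            for (int i = 0; i < NSTAR; i++)     /* clip for next pass  */
                use[i] = fabs(dra[i] - mean) < KCLIP * sigma;
        }
        return 0;
    }

With these invented numbers the estimate drops from ~1.9 on the first pass to ~0.13 by the third and then stays put; a real WCS fit converges more slowly, which is why raising the iteration cap helps.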

Roy Henderson

Vacation all week.

Bill Sweeney

  • Worked on changes to ippScripts and the supporting perl modules to enable replication of output files; implemented and tested for the chip and camera stages.
  • Cleaned up some filesets on the data store, found by a user, whose constituent files had been deleted yet were still visible. (This was a side effect of a bug fixed in October.)
  • Started reprocessing SAS data.
  • Selected exposures for and built a new M31 reference stack.

Chris Waters

  • Real-time burntool: wrote SQL and perl code to determine what needs to be done for each exposure known at the summit. This will provide the foundation of the updated registration code, which will burntool each exposure as it is downloaded (and in turn cut many hours out of the summit->processing latency).
  • Linearity: Regenerated the data using median statistics instead of the mean, which improves the scatter. Finally discovered that we need to include the best-fit bias offset in the correction; once that is done, the cell-level correction looks good (see the sketch below). Still working on some issues with the sagging edges, most likely a result of outliers.
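
The bias-offset point amounts to something like the following (schematic only; the polynomial form and the coefficients are placeholders, not our actual model): the additive offset is not subject to the nonlinearity, so it must be removed before the per-cell correction is evaluated, or the correction is applied at the wrong signal level.

    /* Schematic cell-level linearity correction.  The gain model and
     * coefficients are invented; the point being illustrated is only
     * that the fitted bias offset b must be removed before the
     * per-cell correction is evaluated. */
    #include <stdio.h>

    static double linearize(double raw, double b, const double c[2])
    {
        double s = raw - b;                 /* signal above the bias   */
        double gain = c[0] + c[1] * s;      /* toy per-cell gain model */
        return b + s / gain;                /* corrected counts        */
    }

    int main(void)
    {
        const double c[2] = { 1.0, -1.0e-6 };   /* invented coefficients */
        const double b = 150.0;                 /* fitted bias offset    */
        for (double raw = 1000.0; raw <= 60000.0; raw += 19000.0)
            printf("raw %8.0f -> corrected %8.1f\n", raw, linearize(raw, b, c));
        return 0;
    }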