IPP Progress Report for the week 2011.09.19 - 2011.09.23

(Up to IPP Progress Reports)

Eugene Magnier

Serge Chastel

  • Set up a dedicated condor user ('ippdor') on the MHPCC cluster. Thanks Cindy
  • Installed condor on Maui cluster. Wrote a deployment script. Thanks Larry
  • Ran the IPP chip stage using condor: able to process a full night (1527 exposures) in 7 to 30 hours, depending on cluster load. Thanks Heather
  • Started implementing IPP stack processing: ran into various concerns (ppImage crashes at the end with a "pipe has died" message; database updates made at different levels in the processing chain; ...). Thanks Heather again
  • Fixed MySQL replication. Thanks Roy
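
The per-exposure condor runs above might use a submit description along these lines (a minimal sketch; the wrapper script name and file layout are assumptions, not the actual deployment):

```
# Hypothetical submit description: one chip-stage job per exposure.
universe   = vanilla
executable = run_chip.sh          # assumed wrapper around the chip-stage command
arguments  = $(Process)           # job index 0..1526, mapped to one exposure
output     = chip_$(Process).out
error      = chip_$(Process).err
log        = chip.log
queue 1527
```

Submitted with condor_submit, this fans the full night of 1527 exposures out across whatever slots the cluster has free.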

Heather Flewelling

  • czar 2 days
  • helped Roy with LAPdvodb; restarted and investigated LAPdvo.
  • helped Serge with condor
  • investigated dravg concerns with missing MD skycells - these are caused by using ancient templates and should go away once we make (and use) the newer refstacks.

Roy Henderson

  • sick 1.5 days, worked from home 1.5 days
  • czar Monday and a bit of Tuesday:
    • unstuck registration on Tuesday
    • worked on scaling issue in czartool rate plot
  • ippToPsps:
    • monitored loading of LAP
    • metrics program now querying gpc1 to get total exposures processed under the given label (good to know what's coming)
  • PSVO:
    • now forcing user to choose a catalog for each plug-in query
    • now getting list of available catalogs via web-service
    • new menu options for news and help that open pages in default browser
  • documentation:
    • updated PSPS news page with increasingly complex LAP schedule
    • updated PSVO user documentation
  • other:
    • investigated rogue frame in SA1 with Sue
    • worked with Heather on LAP/DVO issues, specifically confusion over what has been processed versus what is in DVO
    • mailing list stuff
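
The exposure-count metric above amounts to a simple aggregate query against gpc1. A hedged sketch (the table and column names are assumptions for illustration, not the actual gpc1 schema):

```python
# Hypothetical sketch: build a parameterized query counting exposures
# processed under a given label in gpc1. Table/column names are assumed.

def exposures_for_label_query(label):
    """Return (sql, params) counting finished chip runs for a label."""
    sql = ("SELECT COUNT(*) FROM chipRun "
           "WHERE label = %s AND state = 'full'")
    return sql, (label,)
```

The (sql, params) pair could then be handed to any DB-API cursor's execute().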

Mark Huber

  • Processing and throughput monitoring:
    • adding extra stats for monitoring in ganglia. As an example, a set of MySQL stats was added to ganglia for ippdb01 (connections, inserts, deletes, etc.). Looking into others that would be useful, such as NFS and Apache.
    • looking into RAID throughput. Adding iostat to ganglia would be useful.
    • learned how to extract ganglia cluster load stats for Roy to load into the czardb
  • MD.GR0 refstack write-up continued.
  • psphotStack tests on SAS2 images.
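
Since MySQL status variables such as Connections or Com_insert are cumulative counters, feeding them to ganglia usually means converting successive snapshots into per-second rates. A minimal sketch of that conversion (the snapshot dicts stand in for SHOW GLOBAL STATUS output):

```python
# Hedged sketch: MySQL status variables are cumulative counters, so a
# ganglia metric wants the delta between snapshots divided by the
# sampling interval. Keys mirror SHOW GLOBAL STATUS variable names.

def counter_rates(prev, curr, interval_s):
    """Convert two snapshots of cumulative counters into per-second rates."""
    rates = {}
    for name, value in curr.items():
        delta = value - prev.get(name, 0)
        # Guard against counter resets (e.g. a server restart).
        rates[name] = max(delta, 0) / float(interval_s)
    return rates
```

Each resulting rate could then be published on ippdb01 with ganglia's gmetric tool.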

Bill Sweeney

  • Investigated causes of the unacceptable M31 magic masking in the 2011 observations. It appears that even though the input warps have good seeing (~4-4.5 pixels), the resulting stack has a PSF FWHM greater than 13 pixels, which causes many false detections. The fat PSFs are seen in the medium deeps as well, but the problem is apparently made worse by the large number of objects in the M31 i-band fields. This happens with the old template as well.

Chris Waters

  • Processing speed: Removed useless ccd_temp query from chip stage processing, decreasing the average load on the database server from 20 to 3. This helps eliminate database overhead as a source of slowdowns.
  • Diskspace: continued running disk targeting code for OTAs. Merged new ipphosts rules into the working tag to ensure that new data from the summit are placed on the hosts we want them on.
  • LAP: finished coding and testing new LAP scripts. Improvements:
    • Enforce that each exposure only has a single chip+ processing during a LAP sequence. This prevents redoing the same work multiple times, and eliminates confusion at the DVO stage.
    • Allow cleaned data to be updated instead of recreated from scratch. This allows the single processing rule to work even if we process adjacent projection cells at different times.
    • Save exposures that need a warp-stack diff until all necessary stacks have been constructed before making the diff. This yields a single full exposure magic mask that can be used to destreak.
    • Supplying the cleanup phase of LAP processing with a list of future runs to do allows the cleanup phase to also queue replacement jobs. This removes the requirement for a human operator to add new runs and cleanup the old, and allows the LAP processing to run largely unattended.
  • Stare night: helped organize processing and nebulous so that the stare data could be downloaded and registered without filling the disks we are trying to free space on. Made untested changes to summit copy and registration to avoid having to completely stop processing the next time we have a full night of stare observations.
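
The last LAP improvement listed above, where the cleanup phase queues its own replacement runs, can be sketched as a simple loop (a hypothetical illustration; the function names and run representation are assumptions, not the actual scripts):

```python
# Hypothetical sketch of the LAP cleanup idea: the cleanup phase is
# handed a list of future runs, so after it frees the space used by a
# finished run it immediately queues the next one, with no operator.

from collections import deque

def run_cleanup_cycle(finished, future_runs, queue_run, clean_run):
    """Clean each finished run, then queue a replacement from future_runs."""
    pending = deque(future_runs)
    queued = []
    for run in finished:
        clean_run(run)            # free the disk space used by the old run
        if pending:
            nxt = pending.popleft()
            queue_run(nxt)        # queue the replacement run
            queued.append(nxt)
    return queued
```

Because each replacement is queued only after its predecessor's space is freed, the sequence can run largely unattended without overcommitting disk.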