Numbers refer to script line numbers as of SVN revision r32364 (working copy checked out at r32675, 2011-12-07).


usage: detect_query_create --input <input_text_file> --output <output_fits_table>


usage: detect_query_read --dbname <gpc1> --input <request_fits_file>

returns list of request coordinates/options

Read the request, determine what needs to be updated, and do the forced photometry.

  • Config
    • 41 parse options from command line
    • 62 set up commands to run
    • 80 define project/imagedb
    • 87 load blank hashes to store query information
    • 90 Look for a wisdom file containing previously parsed query information, and load it into the hash if it exists. !! Should use a PS METADATA object, but doesn't.
  • Parse query request
    • 113 run detect_query_read on request, and store that information in the query hash.
    • 145 begin FPA based information scanning
      • 152 confirm that if the fpa_id isn't set, we have a unique set of stage/filter/mjd values to use to calculate it
      • 179 allocate an empty query structure for each query row, and set up the rowList array of hashes to pass to the pstamp code
      • 206 determine what kind of query to pass to the pstamp code to find the images needed
      • 220 set the components to obtain from the pstamp code.
      • 234 call to pstamp code to get list of images required for this request.
      • 242 parse pstamp code request
        • 242 foreach image returned, foreach key(?), foreach row_index on that image
        • 251 assign values into the query structure from the results returned by pstamp code
        • 289 Check the states and data_states to determine if the data exists or can be updated. If the data exists, leave query/FAULT = 0, else set to the appropriate value.
    • 322 end information scanning
  • Calculate results
    • 323 At this point we expect to understand the state of the data on disk, so we attempt to store that in the wisdom file to avoid repeating this process. I believe this is partially where the problem lives: I think we store in the wisdom file that we need to do the update, and so fail to realize, once we've updated the data, that we're now in a different state. Another issue is that, since we don't use PS METADATA, we are susceptible to missing fields corrupting the wisdom data.
    • 334 If there is a problem with this fpa_id, store the information needed for update in a hash
    • 344 Use the hash information to store the files that need to be updated in an update_request file. I'm unclear why we don't spawn the updates ourselves.
    • 369 Print information; looks diagnostic only.
    • 380 foreach fpa/image, load the information returned earlier.
    • 391 skip to the next image if we can't process this one. (bug? not correctly checking all rows for this image?)
    • 403 foreach row convert ra/dec to pixel xy, and save this to a target list.
    • 442 Run psphotForced on target list/image/etc.
    • 461 Read output CMF, and store required results into the query structure. (we're also supposed to bundle full CMF files in the response now)
  • Finish
    • 493 Write output fits table with CMF results.
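The wisdom-file behavior above (steps 90 and 323) can be sketched as a small cache. This is illustrative only: the real code works with Perl hashes and different field names, and the state values here are assumptions. The sketch records the *observed* data state rather than the derived "needs update" decision, which is the staleness problem suspected at step 323.

```python
import json
import os

def load_wisdom(path):
    """Load previously parsed query information (step 90), if present.

    Hypothetical stand-in for the real wisdom file; the notes argue this
    should go through a PS METADATA object so that missing fields can't
    silently corrupt the cache.
    """
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def save_wisdom(path, wisdom):
    """Persist the scan results (step 323) so the scan isn't repeated."""
    with open(path, "w") as f:
        json.dump(wisdom, f)

def record_scan(wisdom, fpa_id, data_state):
    """Record what was observed, not the derived 'needs update' flag.

    Caching the decision is the suspected bug: once the update runs, a
    cached 'needs update' never clears. Caching the observed state lets
    the next run re-derive the decision correctly.
    """
    wisdom[fpa_id] = {"data_state": data_state}

def needs_update(wisdom, fpa_id):
    # Re-derive the decision from the cached observation on every run.
    entry = wisdom.get(fpa_id)
    return entry is None or entry["data_state"] != "full"
```

With this shape, a second run after the update sees the new state and drops the fault, instead of replaying the stale "needs update" verdict.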

Top-level script that passes the request along and spawns the dependent jobs.

  • 78 finished parsing options/config
  • 81 save current argument list to file
  • 90 check that the request file has the required entries.
  • 106 set up workdir
  • 115 call
  • 129 catch fault that triggers updates, and check that update request file exists
  • 141 Add a dummy job if we have no work to do
  • 160 Scan update request and submit jobs for each entry.
  • 199 pst -updatereq...unclear?
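The spawning logic at steps 129-160 can be sketched as below. The job structure, command name, and field names are assumptions for illustration, not the real pstamp interfaces.

```python
def build_job_list(update_rows):
    """Build one job per update-request entry (step 160), or a single
    dummy job when there is no work (step 141) so that downstream
    bookkeeping always sees at least one job.

    All names here are illustrative; the real code submits jobs to the
    IPP processing system rather than returning a list.
    """
    if not update_rows:
        return [{"type": "dummy", "argv": None}]
    return [
        {"type": "update", "argv": ["update_worker", row["fpa_id"]]}
        for row in update_rows
    ]
```

The dummy-job branch mirrors step 141: an empty update request still produces a job record, so the finish stage has something to wait on.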

Bundle up results for the datastore. (This wasn't the problem, I don't think. Do I need to add the CMFs here, though?)

  • 64 finished parsing options/config
  • 66 if there's nothing to do, stop the request
  • 80 check output directory, and fault the request if there's a problem
  • 95 get the list of jobs spawned from this request
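The checks at steps 66 and 80 amount to early exits before bundling. A minimal sketch, with fault codes that are purely illustrative (the real IPP fault values differ):

```python
import os

def check_output_dir(path):
    """Fault the request if the output directory is unusable (step 80).

    Returns 0 on success; the nonzero codes are placeholders, not the
    real fault values used by the pipeline.
    """
    if not os.path.isdir(path):
        return 1  # missing directory
    if not os.access(path, os.W_OK):
        return 2  # directory exists but is not writable
    return 0
```

Step 66 is the analogous early exit for an empty job list: stop the request before touching the datastore at all.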