GPC1 Test Suite

Installation:

Copy gpc1_test.auto and gpc1_test.pro to the pantasks module directory, and copy the mk_project_database.pl script to the location referenced by gpc1_test.pro.
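For example (the destination paths here are placeholders; substitute wherever your pantasks installation keeps its modules and scripts):

cp gpc1_test.auto gpc1_test.pro /path/to/pantasks/modules/
cp mk_project_database.pl /path/to/scripts/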

Find data:

Identify which exposures you want to combine (probably by looking at ippMonitor and reading observing logs), and copy the data to a working directory. The script "fetch_rawExp.pl" will do this, assuming it has access to the nebulous server and database (in other words, you're running it on the Maui cluster). It reads in a list of exposure names ("exp_name" in the "rawExp" table), finds the individual image files that make up each exposure, and copies them to a local directory (named after the exposure ID). Since you shouldn't mess with the original image files, this is an easy way to make your own working copy. As an example, I've been working with the MD07 data from 2009/06/02. My input list is:

o4984g0099o
o4984g0100o
o4984g0101o
o4984g0102o
o4984g0103o
o4984g0104o
o4984g0105o
o4984g0106o

and I can copy them by simply running:

fetch_rawExp.pl inlist
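When it finishes, you should have one local directory per exposure, each containing that exposure's individual image files (which will still carry their nebulous-style names until the rename step below). The exposure IDs here are made up for illustration:

1037/  1038/  1039/  1040/  1041/  1042/  1043/  1044/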

The next step is to hunt down the correct detrend data for those exposures. My current solution is to pass a single raw image through ppImage as:

ppImage -file <raw_image> <output_name> -dbname gpc1 -recipe PPIMAGE CHIP

and then read out the detrend files used from the header of the output. These should be named obvious things like "DETREND.MASK", "DETREND.DARK", and "DETREND.FLAT". These can then be fed to the "fetch_detrend.pl" script by supplying the longest unique portion of each detrend name (i.e., the name with the trailing per-chip ".XY14.fits" suffix stripped off). For the above example data, the header in the output of ppImage said:

...
HIERARCH DETREND.MASK = 'GPC1.MASK.20090423.v1.XY14.fits' / Mask filename       
HIERARCH DETREND.DARK = 'GPC1.DARK_PREMASK.norm.104.0.XY14.fits' / Dark filename
HIERARCH DETREND.FLAT = 'GPC1.FLAT_PREMASK.norm.107.0.XY14.fits' / Flat filename
...

so I ran fetch_detrend.pl as:

fetch_detrend.pl -m GPC1.MASK.20090423.v1 -d GPC1.DARK_PREMASK.norm.104.0 -f GPC1.FLAT_PREMASK.norm.107.0
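Incidentally, if you don't have a FITS tool handy for pulling those keywords out of the output header, a quick-and-dirty shell hack works, since FITS headers are fixed 80-character ASCII records (this assumes the DETREND keywords sit within the first few hundred kilobytes of the file):

head -c 300000 <output_name> | fold -w 80 | grep -a 'DETREND\.'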

In addition to these detrend images, you also need an astrometry table and a tessellation database. At the time of this writing, there are only two files with type ASTROM in the GPC1 database; simply select the more recent of the two. For this example:

mkdir ASTROM && cp `neb-locate --path neb://gpc1/detrend/astrom/gpc1.20080909.asm` ASTROM/

The tessellation database can be generated easily by:

skycells -mode LOCAL -scale 0.2 -nx 12 -ny 12 -size 4 4 -fix-ns -center 213.146 53.417 -D CATDIR TESS/

where the "213.146 53.417" are the RA and Dec of the MD07 field.

At this point, you should have a directory full of both data and detrends, and be ready to push them through pantasks. If you want to remove the nebulous colon-separated directories from the filenames, the perl rename command (sometimes called prename) makes this easy:

rename 's%/.*:%/%' */*

although if you have the util-linux rename found on many Linux systems, this will not work.
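To see what that expression does, suppose a copied file carries a nebulous-style colon prefix (this name is purely illustrative):

1037/6789abcd.ef01:o4984g0099o.ota14.fits

The rename strips everything up to and including the colon, leaving:

1037/o4984g0099o.ota14.fits

The perl rename also accepts -n, which prints the renames it would perform without executing them, so you can check the expression first:

rename -n 's%/.*:%/%' */*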

I would like to write a single script that would automatically identify the correct detrends to use as it's copying over the data files, but that requires more knowledge of the GPC1 database than I currently possess. When such a script exists, I will update this wiki page.

Running pantasks:

All the previous data copying steps needed access to the main GPC1 database and the disks in the Maui cluster. Once you have a copy of the data, this is no longer true, and the pantasks steps can be run anywhere.

If all the modules and scripts are in the right locations (gpc1_test.pro and gpc1_test.auto in the pantasks/modules/ directory, and mk_project_database.pl in the location referenced by gpc1_test.pro), then running the reduction through the stack stage should require nothing more than:

module pantasks.pro
module gpc1_test.pro
gpc1_test <database_name> <your_hostname> <the_directory_where_you_put_the_data> new
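With illustrative values filled in (the database name, hostname, and data directory are placeholders for your own):

gpc1_test gpc1_test_db mymachine /data/gpc1_test new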

The module will set up the database to receive the data and run the mk_project_database.pl script. That script scans the supplied directory to find all the data and detrends. It expects to find things in the locations described above (as created by the fetch_rawExp.pl and fetch_detrend.pl scripts), so be aware of this if you prefer a different directory structure. The module then adds a single controller on the host computer, and finally sets pantasks running.
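For reference, the layout mk_project_database.pl expects is simply what the earlier steps produce. Roughly (this sketch is inferred from the steps above; in particular, the detrends sit wherever fetch_detrend.pl deposited them):

<data_directory>/
    <exposure_id>/    one directory of raw images per exposure (from fetch_rawExp.pl)
    ASTROM/           the astrometry table (gpc1.20080909.asm in this example)
    TESS/             the tessellation database (from skycells)
    ...               the detrend images (from fetch_detrend.pl)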

Bugs and Issues:

There shouldn't be any serious bugs in the scripts and modules. They do currently work on only a single filter of data, but that's simply because mk_project_database.pl only knows how to inject a single filter at a time. Manually running mk_project_database.pl to inject new data (possibly in different filters) should not hurt a currently running pantasks instance, and should simply fold in more jobs to execute (although this is based on only a single test).