
Monday : 2012.11.26

Serge is czar

  • 09:30 MEH: pileup of warps in MD04 and LAP processing, nightly science done, chip.off
  • 09:50 Set o6256g0498o to bad quality (exposure time is 0). Reverted the failing 54 stack:
    UPDATE rawImfile SET quality=8007 WHERE exp_name = 'o6256g0498o';
    UPDATE warpSkyfile SET quality=8007 WHERE warp_id = 666931;
    DELETE FROM stackInputSkyfile WHERE warp_id = 666931;
    
  • 10:25 Serge: Queued about 500 exposures (Fri + Sat + Sun nights) for MOPS PS1_DV3 tests
  • 12:05 Serge: Stopping replication from ippdb00 to ippdb02. Rsync /export/ippdb02.0/mysql to /export/ippc63.1/backup_nebulous/20121126 (from ippdb02, screen session: mysql_rsync)
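    • A sketch of the likely commands (the exact invocation isn't recorded in the log; the flags, and the assumption that the ippc63.1 backup area is mounted on ippdb02, are mine):
      # on ippdb02, inside the named screen session: screen -S mysql_rsync
      mysql -e "STOP SLAVE;"    # pause replication so the datadir stays consistent during the copy
      rsync -a /export/ippdb02.0/mysql/ /export/ippc63.1/backup_nebulous/20121126/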
  • 12:25 Bill: set deepstack pantasks to stop. I want to modify the staticsky processing order a bit.
  • 13:10 MEH: running local deepstack pantasks with a compute3 group to do the ref/deepstacks for the upcoming MD04 field -- moved over to using wave4, took 2x wave4 out of stack pantasks
    • in setting wave4 up, noticed that ipp055 and ipp065 have not had their full RAM since reboots many months ago and a few months ago, respectively. This has happened before on other machines -- after rebooting a system, check that all RAM is present and accounted for (quick check sketched below)
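      A quick post-reboot sanity check (illustrative commands, not from the log):
      free -g                       # "total" should match the machine's installed RAM
      grep MemTotal /proc/meminfo   # same figure in kB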
  • 13:42 Bill: set all lap staticskyRuns with ra > 150.5 or ra < 120 degrees to state wait, except for the Kepler field (a sketch of the UPDATE is below). Queued 5-filter skycells with 120 < ra < 150.5 and set the deepstack pantasks to run, then stop. Once the skycells near the galactic center finish processing, we will restart pantasks with 2 x compute3.
    mysql> select state, count(sky_id) from staticskyRun left join staticskyResult using(sky_id) 
    where label ='lap.threepi.20120706' group by state;
    +-------+---------------+
    | state | count(sky_id) |
    +-------+---------------+
    | full  |         11897 | 
    | new   |          6417 | 
    | wait  |         34000 |           for simplicity, may drop these and re-queue later. haven't decided
    +-------+---------------+
    3 rows in set (0.47 sec)
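    • A sketch of the kind of UPDATE used to park the out-of-range runs (illustrative only: whether ra and label live directly on staticskyRun is an assumption, and the Kepler-field exclusion is omitted):
      UPDATE staticskyRun SET state = 'wait'
       WHERE label = 'lap.threepi.20120706'
         AND state = 'new'
         AND (ra > 150.5 OR ra < 120.0);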
    
    
  • 14:00 Bill: decided to kill off the currently running psphotStack jobs so we can restart. Restarted the deepstack pantasks with 2 x compute3. No pending runs are at low galactic latitude
  • 18:10 MEH: chip.off stays on to push through more of the pile of LAP warps, since there is no data tonight and stdscience is at ~50% of normal processing power
  • 23:40 MEH: LAP warps down to ~1000, chip.on. MD01 is at the same priority, so it should go by _id

Tuesday : 2012.11.27

Heather is czar today

  • 09:30 Serge: Restarted replication on ippdb02 (77310 seconds behind the master). Monitoring how it catches up in ~schastel/dev/ScanReplication/ippdb02.log (hopefully it will).
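    • The lag figure comes from the slave status; the standard check (plain MySQL, shown for illustration):
      mysql> SHOW SLAVE STATUS\G
      ...
            Seconds_Behind_Master: 77310
      ...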
  • 10:00 Serge: Stopped publishing: an MD03 publication (diff_id = 353088) is failing repeatedly
  • 10:10 Serge: It's the exposure with an exp_time of 0. Dropped the publication:
    UPDATE publishRun SET state='drop' WHERE stage_id = 353088;
    
  • 10:23 Serge: Started cleanup (I'd like to see how it influences replication rate).
  • 11:20 processing shut down for Haydyn to reboot ipp019, ipp055, ipp065 in an attempt to recover the lost memory
  • 11:47 Bill rebuilt with bug fixes to psphotDeblendSatstars and psphotReplaceAllSources. Improved a couple of assertions in psLib.
  • 11:50 Serge: Setup wiki page for IPP-MOPS ICD comments
  • 14:09 Bill ran check_system.sh stop because nebulous is not responding
  • 15:20 Serge: Stopped apache on c01-c10 then restarted mysql server on ippdb00 (there were still pending connections after apache servers were stopped) then restarted apache servers.
  • 20:00 MEH: looks like ipp031 is stalling jobs (summitcopy, registration, stdscience); taking it out of processing. Same with ipp052 -- both are having mount troubles

Wednesday : 2012.11.28

Mark is czar

  • 07:30 MEH: looks like ipp040 and ipp047 have stalled registration for 5 hrs.. took them out of processing as well and restarted summitcopy, registration, and stdscience. Some 291 exposures are left to register and process.
  • 09:10 deepstack stopped to reallocate compute3 back to stdscience and get the backlog of nightly science through before noon
  • 09:20 MEH: MD04 refstack continues to run using stacks wave4 allocation, will be another 24 hours or so. Then the MD05 refstack will need to use them..
  • MEH: ipp031,040,047,052 may need to be rebooted after nightly science finishes
  • stdscience also stopped to fix more mount issues
  • 12:20 Chris modified replication config to not do anything for the time being since ippb machines are full, restarting now.
  • 12:40 MEH: ipp031 couldn't clear or fix its mounts; rebooting.
  • 13:05 same for ipp052; processing is moving along more now -- other systems were able to access the disk, but it looks like maybe only for reading. Likely need to set all problem hosts to repair (as normally done) or down. Restarted registration.
  • 13:10 ipp040, like the previous two, has nfs reporting tainted on a restart. rebooting
  • 13:25 ipp047 mounts are back, so set neb-host back to up and will keep an eye on it
  • 13:40 MEH: ipp010 and ipp011 also having mount troubles. ipp012 as well.
  • 14:00 MEH: noticed after trying to stop rpc.statd on ipp010 that it hung on stopping statd, because all the orphaned rpc.statd processes needed to be cleared first. Use restart rather than stop/start on rpc.statd; nfs may behave differently. The excessive swap on ipp010,011,018 is cleared now.
  • 14:20 mounts recovered without any additional reboots; will try stop/start on nfs for future mount troubles and look into this further. For now just want to get nightly science through.
  • 15:20 MEH: still using all compute3 for nightly stdscience; LAP and MD01 labels are out so only nightly science runs
  • 15:10 Chris made a modification to the cleanup input file, boost.stack (unboost.stack), to set the poll period shorter and load more jobs
  • 15:50 Bill rebuilt psLib
  • 16:00 MEH: nightly science finished, all pantasks restarted for the night. In cleanup, do boost.stack to reset the poll period faster.
    • the orphaned rpc.statd processes can be cleared by restarting rpc.statd/nfs twice. Will see if this helps clear any mount problems in the future. ipp007,010,012,013,015,016,108,020 should be cleared
      sudo /etc/init.d/rpc.statd restart
      sudo /etc/init.d/rpc.statd restart
      --> will hang a bit on second restart as it clears the floating rpc.statd processes. mounts may take a bit (5 min?) to come back.
      
  • 16:17 Bill tweaked the outstanding staticsky runs: set state to 'wait' for those not in (270 < RA < 315 with abs(glat) > 10). Queued the remaining skycells in this region (18 - 21 hours RA)
  • 16:40 MEH: Gene suggested adding the stsci nodes as hosts in stdscience and then turning them off to avoid disk/data access load spikes on those machines. Will be watching with LAP+MD01 processing.
    • ~1hr of processing shows no spikes, looks to have helped
  • 17:00 chip.off to push LAP warps for stacks until nightly science starts

Thursday : 2012.11.29

Mark is czar

  • 06:45 Bill: ipp011 is not a good nebulous host. Killed stuck registration jobs there. restarted registration. In the process of checking the burntool states
  • 07:00 MEH: ipp011 was another lm_sensors module/kernel crash (ipp011-crash-20121129) -- don't reboot until after stdscience is done, to avoid breaking anything, since the data disks are still accessible if necessary
    • tally of similar machines having this problem recently -- ipp011, 012, 010, 016, 013, 014, 018; there may be others, but not enough info was added to the ippXXX_log or czarlog pages to tell
  • 07:02 Bill: stopped pantasks. summit copy is totally clogged with stuck ipp011 jobs. Will restart once the jobs clear.
  • took ipp011 to repair in nebulous; killed all glockfile processes
  • 07:14 registration and summit copy restarted.
  • 07:30 MEH: started all pantasks again, making sure ipp011 is off.
    • the stdscience restart revealed a typo I had in pantasks_hosts.input for adding the stsci nodes as off; they were back to load spiking again. Fixed and restarted again.
    • LAP+MD01 labels are out so only nightly science is processing
  • 09:27 Serge notes that a common czartool/page stall comes from one of the mysql replication servers being stuck (usually we get email about "Copy failed") because the partition it lives on is full. That was the case for ipp001 today; freeing space on the partition solved the problem.
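    • A quick way to spot this condition (illustrative; which partition to check varies by replication host):
      df -h | grep ' 100%'    # look for the full partition on the stuck replication host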
  • 10:35 Serge suggests stopping cleanup to see if it speeds up nightly science -- after ~1 hr there is no clear improvement in dtime_, maybe 10-20/hr on the czar rate plots. The significant impact on processing rate is the loss of 5-6x compute3 (150-200 nodes) to LAP staticsky.
  • 11:40 There are 1843168237 entries in the nebulous instance table and the maximum ins_id is 2956304195.
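    • For reference, a sketch of how to pull these numbers from the nebulous DB (table and column names as given above; an exact COUNT(*) on a table this size is slow, so the row estimate from SHOW TABLE STATUS is the quicker check):
      mysql> SELECT MAX(ins_id) FROM instance;
      mysql> SHOW TABLE STATUS LIKE 'instance';    -- the Rows column is an estimate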
  • 12:40 MEH: nightly science finished
    • ipp011 power cycled, neb-host repair->up, adding back into processing.
    • LAP+MD01 label added back to stdscience, chip.off to push backup of warps through again
    • cleanup back on
  • 13:18 Bill added label for STS reprocessing to stdscience and distribution and turned on chip processing
    • MEH: with STS ->warp mainly running, there won't be any LAP stacks to do, so putting those nodes into stdscience until tonight -- aborted for the DB update..
  • 13:25 Bill removed STS.rerun.20121129 because I forgot to set the reduction to STS_DATASET
  • 14:00 Bill and Chris are making gpc1 DB changes; all processing stopped. stdscience and stack were shut down since manual modifications were made and we don't want them turned back on in conflict.
  • 15:35 CZW: gpc1 database changes finished:
     mysql> ALTER TABLE warpSkyfile add column background_model SMALLINT after maskfrac_advisory;
    Query OK, 53661764 rows affected (1 hour 10 min 15.15 sec)
    Records: 53661764  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE stackSumSkyfile add column background_model SMALLINT after software_ver;
    Query OK, 1847441 rows affected (2 min 31.59 sec)
    Records: 1847441  Duplicates: 0  Warnings: 0
    
    mysql> UPDATE dbversion set schema_version = '1.1.72', updated= CURRENT_TIMESTAMP();
    Query OK, 1 row affected (0.03 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
  • 15:38 CZW: isp database changes partially finished with errors:
    mysql> select * from dbversion;
    +----------------+---------------------+
    | schema_version | updated             |
    +----------------+---------------------+
    | 1.1.71         | 2012-01-31 15:42:51 | 
    +----------------+---------------------+
    1 row in set (0.02 sec)
    
    mysql> ALTER TABLE warpSkyfile add column background_model SMALLINT after maskfrac_advisory;
    Query OK, 159282 rows affected (5.34 sec)
    Records: 159282  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE stackSumSkyfile add column background_model SMALLINT after software_ver;
    Query OK, 80 rows affected (1.05 sec)
    Records: 80  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE skycalResult ADD column n_detections INT after n_astrom;
    ERROR 1146 (42S02): Table 'isp.skycalResult' doesn't exist
    mysql> UPDATE dbversion set schema_version = '1.1.72', updated= CURRENT_TIMESTAMP();
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
  • 15:39 CZW: ssp database changes finished:
    mysql> select * from dbversion;
    +--------------------+---------------------+
    | schema_version     | updated             |
    +--------------------+---------------------+
    | 1.1.71717171717171 | 2012-01-31 19:52:32 | 
    +--------------------+---------------------+
    1 row in set (0.05 sec)
    
    mysql> ALTER TABLE warpSkyfile add column background_model SMALLINT after maskfrac_advisory;
    Query OK, 0 rows affected (1.10 sec)
    Records: 0  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE stackSumSkyfile add column background_model SMALLINT after software_ver;
    Query OK, 0 rows affected (1.00 sec)
    Records: 0  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE skycalResult ADD column n_detections INT after n_astrom;
    Query OK, 0 rows affected (0.81 sec)
    Records: 0  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE skycalResult ADD column n_extended INT after n_detections;
    Query OK, 0 rows affected (0.63 sec)
    Records: 0  Duplicates: 0  Warnings: 0
    
    mysql> UPDATE dbversion set schema_version = '1.1.72', updated= CURRENT_TIMESTAMP();
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
  • 15:40 CZW: No changes made to megacam database:
    mysql> select * from dbversion;
    +----------------+---------------------+
    | schema_version | updated             |
    +----------------+---------------------+
    | 1.1.69         | 2012-03-22 11:30:59 | 
    +----------------+---------------------+
    1 row in set (0.08 sec)
    
  • 15:42 CZW: uip database changes finished:
    mysql> select * from dbversion;
    +--------------------+---------------------+
    | schema_version     | updated             |
    +--------------------+---------------------+
    | 1.1.71717171717171 | 2012-02-29 12:02:53 | 
    +--------------------+---------------------+
    1 row in set (0.08 sec)
    
    mysql> ALTER TABLE warpSkyfile add column background_model SMALLINT after maskfrac_advisory;
    Query OK, 0 rows affected (1.04 sec)
    Records: 0  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE stackSumSkyfile add column background_model SMALLINT after software_ver;
    Query OK, 0 rows affected (1.00 sec)
    Records: 0  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE skycalResult ADD column n_detections INT after n_astrom;
    Query OK, 0 rows affected (0.84 sec)
    Records: 0  Duplicates: 0  Warnings: 0
    
    mysql> ALTER TABLE skycalResult ADD column n_extended INT after n_detections;
    Query OK, 0 rows affected (0.65 sec)
    Records: 0  Duplicates: 0  Warnings: 0
    
    mysql> UPDATE dbversion set schema_version = '1.1.72', updated= CURRENT_TIMESTAMP();
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
  • 15:55 MEH: processing back on.. and the system has fallen over.. too much of something for ippdb01. Serge found many queries involving ISP but nothing being done.
  • 16:50 processing looks to be back up and stable again
  • 17:00 MEH: setting neb-host repair for ipp027--030 while Gene is running rsync on them
  • 18:30 MEH: registration sticking -- burntool got lost for ipp058? Restarting pantasks cleared it..
    • ipp058 very busy with 2x MD04 refstack running, set refstack down to 1x wave4 for the night
  • 21:00 MEH: noticed no new data for a bit, but the data so far is piled up in warp while other reprocessing chips are running. chip.off for a while
  • 23:51 Bill: This is rather out of context, but I added access for MPIA and MPG to misc.ipp.ifa.hawaii.edu on the proxy ippops1 so that LAP smf files would be accessible.
  • 00:00 Bill: memory usage for the currently queued staticsky runs appears to be under control (we're not working in the plane yet). Feeling brave, I added another set of compute3 nodes to deepstack. If it falls over it should only affect that processing.

Friday : 2012.11.30

Serge is czar

  • 07:00. Nightly science download is stuck. Investigating...
    • Killed a bunch of stuck ppImage running on ipp028. Removed ipp028 from stdscience and summitcopy.
    • Manually (and successfully) ran
      summit_copy.pl --uri http://conductor.ifa.hawaii.edu/ds/gpc1/o6261g0166o/o6261g0166o24.fits --filename neb://ipp016.0/gpc1/20121130/o6261g0166o/o6261g0166o.ota24.fits 
      --summit_id 547976 --exp_name o6261g0166o --inst gpc1 --telescope ps1 --class chip --class_id ota24 --bytes 49432320 --md5 72520f7005924d4bae6c7bc6fc935611 
      --dbname gpc1 --timeout 600 --verbose --copies 2 --compress --nebulous
      
      which was repeatedly failing.
    • Stopped registration, killed all remaining running (and apparently stuck) processes. Removed ipp027-030 from it. Restarted registration
    • summitcopy is repeatedly failing
      summit_copy.pl --uri http://conductor.ifa.hawaii.edu/ds/gpc1/o6261g0166o/o6261g0166o24.fits --filename neb://ipp016.0/gpc1/20121130/o6261g0166o/o6261g0166o.ota24.fits --summit_id 547976 --exp_name o6261g0166o --inst gpc1 --telescope ps1 --class chip --class_id ota24 --bytes 49432320 --md5 72520f7005924d4bae6c7bc6fc935611 --dbname gpc1 --timeout 600 --verbose --copies 2 --compress --nebulous
      
      
      Starting script /home/panstarrs/ipp/psconfig/ipp-20121026.lin64/bin/summit_copy.pl on ipp017
      
      Running [/home/panstarrs/ipp/psconfig/ipp-20121026.lin64/bin/dsget --uri http://conductor.ifa.hawaii.edu/ds/gpc1/o6261g0166o/o6261g0166o24.fits --filename neb://ipp016.0/gpc1/20121130/o6261g0166o/o6261g0166o.ota24.fits --compress --bytes 49432320 --nebulous --md5 72520f7005924d4bae6c7bc6fc935611 --timeout 600 --copies 2]...
      downloading file to /tmp/o6261g0166o.ota24.fits.loHF56UY.tmp
      Running [/home/panstarrs/ipp/psconfig/ipp-20121026.lin64/bin/neb-locate --path --all neb://ipp016.0/gpc1/20121130/o6261g0166o/o6261g0166o.ota24.fits]...
      /data/ipp016.0/nebulous/87/62/2958196185.gpc1:20121130:o6261g0166o:o6261g0166o.ota24.fits
      /data/ipp059.0/nebulous/87/62/2958196186.gpc1:20121130:o6261g0166o:o6261g0166o.ota24.fits
      Running [/home/panstarrs/ipp/psconfig/ipp-20121026.lin64/bin/pztool -copydone -row_lock -summit_id 547976 -exp_name o6261g0166o -inst gpc1 -telescope ps1 -class chip -class_id ota24 -uri neb://ipp016.0/gpc1/20121130/o6261g0166o/o6261g0166o.ota24.fits -hostname ipp017 -dbname gpc1 -md5sum 916f923ddd84d65273f13967fad8bf73 -bytes 22285440]...
       -> p_psDBRunQueryPrepared (psDB.c:956): unknown psLib error
           Failed to execute prepared statement.  Error: Duplicate entry '547976-chip-ota24' for key 1
       -> psDBInsertRows (psDB.c:594): unexpected NULL found
           insert failed
       -> psDBInsertOneRow (psDB.c:564): unknown psLib error
           Failed to insert row.
       -> copydoneMode (pztool.c:479): unknown psLib error
           database error
      Unable to perform /home/panstarrs/ipp/psconfig/ipp-20121026.lin64/bin/pztool -copydone -row_lock -summit_id 547976 -exp_name o6261g0166o -inst gpc1 -telescope ps1 -class chip -class_id ota24 -uri neb://ipp016.0/gpc1/20121130/o6261g0166o/o6261g0166o.ota24.fits -hostname ipp017 -dbname gpc1 -md5sum 916f923ddd84d65273f13967fad8bf73 -bytes 22285440: 1
      
  • 08:45. A poltergeist fixed the summitcopy issue.
  • 09:45 ~ipp/src/ipp-20121026/tools/regpeek.pl is useful to fix the mess
  • 11:30 Following Mark's suggestion, in stdscience
    del.label STS.rerun.20121129
    del.label LAP.ThreePi.20120706
    del.label MD01.GR0.20121122
    
  • 11:45 Restarted registration (and fixed many errors)
  • 12:20 Summitcopy and registration are finished. Adding back labels to stdscience
    add.label STS.rerun.20121129
    add.label LAP.ThreePi.20120706
    add.label MD01.GR0.20121122
    
  • 14:20 Nightly processing still not finished. chip.off and warp.off (following Mark's suggestion)
  • 14:41 Bill turned one set of nodes off in deepstack
  • 15:00 chip.on, diff.on
  • 15:10 chip.off, warp.off... Give NS a chance to finish
  • 15:30 chip.on, warp.on. NS finished (except for a few diffs/pubs)
  • 17:44 Bill reduced deepstack to one set of compute3. The current jobs are using too much memory for 2x
  • 18:30 Bill set label STS.rerun.20121129 to inactive for the night
  • 18:35 MEH: seeing 3 stalled register_imfile.pl jobs -- stsci03 (00, 01) is having trouble with delstar, it looks like. Emailed Gene

Saturday : 2012.12.01

Bill woke up and had to look

  • 03:32 registration stalled since 11:02 pm. --exp_id 552462 --class_id XY63 in data_state=check_burntool burntool_state=-1
  • 03:42 stsci00 has two delstar_client processes using over 50GB of virtual memory. Set it to repair in nebulous
  • 04:07 turns out there were 24 other chips in stuck burntool state.
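    • A sketch of the kind of query used to find the stuck chips (assuming the burntool state lives on rawImfile alongside the data_state quoted above):
      mysql> SELECT exp_id, class_id, data_state, burntool_state
          ->   FROM rawImfile
          ->  WHERE data_state = 'check_burntool' AND burntool_state = -1;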
  • 04:15 removed LAP and MD01.GR0.20121122 labels from stdscience so that nightlyscience has the cluster (sts label is inactive)
  • 04:34 queue is loaded with nightlyscience; adding labels back in
  • 05:08 The highest virtual memory usage by staticsky is around 30G. Adding one set of compute3 hosts to stdscience
  • 06:09 that might have been a mistake. glockfile sluggishness appears to have started right after adding those nodes in.
  • 06:15 The last science exposures have been taken; summit copy is 55 exposures behind
  • 06:15 one node after another is clogging queues with glockfile requests. Removing the LAP and MD01.GR0 labels. Back to bed
  • 09:30 added the removed labels back in with adjusted priorities: MD01.GR0.20121122=202, STS.rerun.20121129=201, lap=200 (unchanged)
    • MEH: putting MD01.GR0.20121122 back to 200 with LAP
  • 10:15 Bill restarted stdscience pantasks and is going off duty
  • 10:30 MEH: if LAP isn't kept running, the stack pantasks nodes have nothing to do. Don't want to shift nodes around on the weekend, so readjusting the priorities at least for an hour or two (warp only)
  • 12:50 Bill is making STS top priority.
    • MEH: since diff isn't needed for STS, setting it to off until tonight
  • 13:10 MEH: MD04 refstack finished, so the 2x wave4 can go back to stack; might as well reshuffle nodes some more, since MD05 will need prepping for Sunday/Monday and stdscience is well underpowered right now.
    • the stare nodes were taken out of stdscience and put into stack a few weeks ago -- putting them back into stdscience for now while compute3 is out.
    • the 2x wave4 taken out of stack for the MD04 refstacks go into stdscience rather than stack (stack with compute, compute2 has ~60 nodes)
  • Bill: Gene killed the rogue delstar processes on stsci00; set it back to up in nebulous
  • 17:52 Bill: removing STS label for the night
    • MEH: then will pre-load stack with LAP; chip+diff.off until nightly science starts -- with ~60 nodes the LAP stack rate is ~150-200/hr, which should be sufficient for nightly science as well
  • 23:00 MEH: MD05 updates and reprocessing of exposures added for refstack

Sunday : 2012.12.02

  • 08:55 Bill: stdscience is blocked with jobs stuck on glockfile to ipp016. It's panic'd; set it to repair in nebulous and power cycled it:
    BUG: unable to handle kernel paging request at ffffffff800073d4
    <Dec/02 05:15 am>[1457104.714295] IP: [<ffffffff80589612>] xprt_autoclose+0x19/0x4c
    <Dec/02 05:15 am>[1457104.714295] PGD 203067 PUD 207063 PMD 0 
    <Dec/02 05:15 am>[1457104.714295] Oops: 0000 [#1] SMP 
    
    <Dec/02 05:15 am>[1457104.714295] last sysfs file: /sys/class/i2c-adapter/i2c-0/0-002f/temp6_alarm
    
    
    • just a reminder: issues like this can be a headache to track down in the czar logs; adding it to the ipp016_log accordingly.
  • 10:00 MEH: MD05 warps languishing, chip.off for a bit
  • 12:00 MEH: doing a restart of stdscience and adding MD04 -- the backlog of diffims is caught up, and before the warps got cleaned..
  • 13:30 MEH: faulting a couple of STS camera/psastro jobs on ipp017 to ease the memory overuse
  • 14:20 MEH: like MD04, MD05 refstack running on wave4 hosts in standalone pantasks, for now..
  • 14:30 MEH: ipp016 sudo /etc/init.d/gmond restart to fix the ganglia memory plot after rebooted this morning
  • 15:10 chip.off for MD05
  • 15:50 chip.on again; don't want to babysit the STS camera memory issues, so set.camera.poll 25->10 to reduce the number that can run, since there are plenty of warps to do already
  • 18:18 Bill: set STS label to inactive for the night
    • MEH: then setting camera.poll back to default 25
  • 18:25 MEH: restarting summitcopy and registration to clear out some old error msgs. Will preload some LAP stacks.
  • 19:45 still no nightly science; running the MD01 that have been on/off the past 10 days, warp only, after the LAP stacks are pre-loaded to run.
  • 21:50 still no nightly science, MD01 finished. While setting up the remaining MD01, loading more LAP stacks and activating STS to keep an eye on.
  • 23:55 MEH: setting up MD01.GR0 nightly stacks to run when LAP stacks aren't available. LAP back on in a bit