# Intro

## Input selection

Each input warp has an associated PSF file. These PSFs are read, and the associated FWHM value is stored for each image. These values are then used to determine which warps to use in constructing the stack. MD field stacks have a variety of different cuts, depending on whether the final stack is being optimized for depth or for image quality. For ThreePi stacks, the following methods are used.

In PV1, warp rejection was performed in two passes. First, any warp with input FWHM greater than a specified hard limit, maxFWHM (10 pixels), was rejected. Next, the ensemble mean and sigma were calculated, and any warp with FWHM more than clipFWHMnSig (1.5) sigma above the mean was rejected as well. This algorithm was rather harsh, and it was poorly tuned for situations in which the distribution of warp FWHM values was not unimodal.
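As a sketch, the PV1 two-pass cut could look like the following. This is illustrative NumPy, not the pipeline code; in particular, the interpretation of the clip as mean + clipFWHMnSig * sigma is an assumption based on the PV2 description below.

```python
import numpy as np

def pv1_reject(fwhm, max_fwhm=10.0, clip_nsig=1.5):
    """Sketch of the PV1 two-pass FWHM rejection described above.

    `fwhm` is an array of per-warp FWHM values (pixels). Returns a
    boolean mask of warps to keep. Name and interface are illustrative.
    """
    fwhm = np.asarray(fwhm, dtype=float)
    keep = fwhm <= max_fwhm                    # pass 1: hard limit
    mean = fwhm[keep].mean()                   # ensemble stats of survivors
    sigma = fwhm[keep].std()
    keep &= fwhm <= mean + clip_nsig * sigma   # pass 2: sigma clip
    return keep
```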

PV2 has introduced a new rejection algorithm that was designed to handle such cases better. The first step is to determine a minimum and maximum value. The minimum value is the "threshFWHM", a value below which no input can be rejected. The maximum value is the maxFWHM, and corresponds to the hard limit in the PV1 algorithm. At this point, a clipped rejection can be used to mimic PV1 stacks, although the threshFWHM is still respected. The alternative is to use a mixture-model analysis. This attempts to determine if the distribution of FWHM values is best represented by a single Gaussian population or by a combination of two Gaussians. If the distribution is unimodal (Puni > 0.5), or contains a small number of values (Ninput <= 4), the maxFWHM value is used as the upper limit. If the distribution is bimodal, the limit is set as equal to mean_popular + clipFWHMnSig * sigma_popular, where the "popular" statistics are selected for the Gaussian with the largest fraction of the input values. This limit is again checked against threshFWHM and maxFWHM, and used to reject inputs.
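The limit-selection logic can be sketched as follows, assuming the mixture-model fit itself (which supplies Puni and the popular-mode statistics) is done elsewhere. All names here are illustrative, not the actual pipeline API.

```python
def pv2_fwhm_limit(fwhm, thresh_fwhm, max_fwhm, clip_nsig,
                   p_uni, mean_pop, sigma_pop):
    """Sketch of the PV2 upper-limit selection described above.

    p_uni, mean_pop, and sigma_pop are assumed to come from an external
    two-Gaussian mixture-model fit; this helper only encodes the limit
    logic, not the fit.
    """
    n = len(fwhm)
    if p_uni > 0.5 or n <= 4:        # unimodal, or too few inputs
        limit = max_fwhm
    else:                            # bimodal: clip around the popular mode
        limit = mean_pop + clip_nsig * sigma_pop
    # the limit must always lie between threshFWHM and maxFWHM
    return min(max(limit, thresh_fwhm), max_fwhm)
```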

The PV2 value of maxFWHM is set to 10 pixels, which retains many more inputs on average than the PV1 algorithm did. This may be revised for PV3, but it has been kept high to ensure that the final stacks are constructed from as many inputs as possible.

## Image Matching

After the initial input rejection, a target PSF must be determined so that we know how much each input must be convolved to match the others. Once this target has been identified, individual kernels are constructed for each input warp to convolve those images to the target. This step is essential to ensure that the pixel-level rejection operates on images that have the same object shapes. Following this, a source list is compiled by finding the matched set of objects on the individual frames, with magnitudes calculated from the average of all images that have detections of that source.

For PV1, the target PSF was constructed as the "envelope" of the set of input warps. Briefly, this process attempts to construct a PSF that is approximately the maximum of the input PSFs. Once this target has been constructed, each input image is matched, using the Alard-Lupton algorithm, to a target frame constructed by simulating the object catalog with the target PSF. The kernels in this case are ISIS kernels, a sum of Gaussians with polynomial terms. PV1 had some difficulties in this matching process. First, the envelope PSF could be derived with a width larger than that of any input image PSF. This was not ideal, as it increased the level of smoothing present in the final image. Secondly, the Alard-Lupton process with ISIS kernels can construct odd weightings for individual frames, and the large number of parameters ... czw: I can't remember all the issues.

PV2 has attempted to solve these problems by using less complex methods. The target PSF is now constructed to be a single Gaussian with FWHM equal to that of the largest input PSF. This FWHM can be incremented by a small amount (currently 0.1 pixel) to ensure that no input needs to be deconvolved. With the target PSF represented by a single Gaussian, the convolution kernels for all input warps are defined to also be Gaussians, with FWHM_kern values equal to the value that would be required if the input PSF were a Gaussian with the same FWHM. Therefore (FWHM_kern)^{2} = (FWHM_target)^{2} - (FWHM_input)^{2}.
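The quadrature relation above can be written directly as a small helper (illustrative, not the pipeline code):

```python
import math

def kernel_fwhm(fwhm_target, fwhm_input):
    """Gaussian matching-kernel width from the quadrature relation:
    FWHM_kern^2 = FWHM_target^2 - FWHM_input^2."""
    if fwhm_input > fwhm_target:
        raise ValueError("input PSF is broader than the target; "
                         "deconvolution would be required")
    return math.sqrt(fwhm_target**2 - fwhm_input**2)
```

This is also why the target FWHM is incremented slightly: it guarantees the quantity under the square root stays positive for every input.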

In both PV runs, after determining the kernels, the input image/mask/variance images are convolved by the appropriate kernel.

## Pixel Rejection

The following conventions are used in the remainder of this discussion. Lower-case values refer to a single pixel on image "i"; upper-case values are constant for a given image "i" (i.e., all pixels on image i share that parameter); Greek values are combined (stacked) values.

# Set up

For a given pixel, define five parameters:

    f_i   pixel value
    v_i   pixel variance value
    m_i   input mask value (0 = bad, 1 = good)
    s_i   suspect mask value (0 = not suspect, 1 = suspect)
    W_i   image weight

There are also a set of configurable options:

    NMIN     = 5     Minimum number of values for KMM test
    MIN_UNI  = 0.05  Minimum probability to accept a bimodal solution
    SYS_FRAC = 0.1   Systematic variance fraction
    NSIG_REJ = 3.5   N-sigma rejection threshold
    DISCARD  = 0.2   Fraction of values to exclude in Olympic mean
    ITER     = 0.5   Number of rejection iterations per input

After convolution to the target PSF, we add an additional variance term (ADDV_i) equal to the chi^2 value found during the PSF matching. For 1-d Gaussian convolution we have no real concept of this value, so all chi^2 (and ADDV_i) values are defined to be 1.0:

    v_i = v_i + ADDV_i

The image weights are calculated by taking the inverse of the robust median of the variance image:

    W_i = 1 / median_{x,y}( v_i(x,y) )
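A minimal sketch of this weighting, assuming a NumPy variance plane (masked or invalid pixels represented as NaN):

```python
import numpy as np

def image_weight(variance):
    """Per-image weight from the robust median of the variance plane,
    as described above. NaN pixels are ignored."""
    return 1.0 / np.nanmedian(variance)
```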

# Calculate initial stack values

- Calculate the weighted mean and associated quantities:

    mu    = sum_i(f_i * W_i) / sum_i(W_i)
    sigma = 1 / sum_i(1 / W_i)
    nu    = sum_i(m_i)
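These definitions can be sketched in NumPy. This assumes that only unmasked inputs (m_i = 1) contribute to mu and sigma, which the definitions above leave implicit:

```python
import numpy as np

def initial_stack(f, W, m):
    """Initial stacked values for one pixel position, following the
    definitions above: weighted mean mu, sigma, and input count nu.
    Only unmasked inputs (m_i = 1) are assumed to contribute."""
    f, W, m = (np.asarray(a) for a in (f, W, m))
    good = m != 0
    mu = np.sum(f[good] * W[good]) / np.sum(W[good])
    sigma = 1.0 / np.sum(1.0 / W[good])
    nu = int(np.sum(m))
    return mu, sigma, nu
```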

# Rejection loop

This loop is repeated ITER * Ninput times (minimum of once).

- Determine distribution values:

if (nu > NMIN), do the Mixture Model (KMM) test:

    mu_KMM     mean of the most popular Gaussian mode
    sigma_KMM  sigma of the most popular Gaussian mode
    pi_KMM     fraction of inputs in the most popular Gaussian mode
    rho_UNI    probability that the inputs are unimodal

- Set rejection limits for each input:

    if (nu > NMIN)   sys_var = sigma_KMM^2
    else             sys_var = SYS_FRAC * f_i

    limit_i = NSIG_REJ^2 * (v_i + sys_var)

- Determine the distribution median:

    if (nu > NMIN)   mu_median = mu_KMM
    else             mu_median = olympic_mean

  where olympic_mean is the weighted mean over the sorted values, summing
  from index lo to lo + int(nu) - int(DISCARD * nu + 0.5), with
  lo = int((DISCARD * nu + 0.5) / 2.0):

    olympic_mean = sum_i(f_i * W_i) / sum_i(W_i)
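The Olympic-mean index arithmetic can be sketched as follows (illustrative NumPy, keeping the DISCARD convention above):

```python
import numpy as np

def olympic_mean(f, W, discard=0.2):
    """Weighted 'Olympic' mean following the index limits above: sort by
    value, drop roughly DISCARD * nu of the extreme values (split between
    the two ends), and take the weighted mean of the remainder."""
    f, W = np.asarray(f, dtype=float), np.asarray(W, dtype=float)
    nu = len(f)
    lo = int((discard * nu + 0.5) / 2.0)          # first kept (sorted) index
    hi = lo + nu - int(discard * nu + 0.5)        # one past the last kept
    keep = np.argsort(f)[lo:hi]
    return np.sum(f[keep] * W[keep]) / np.sum(W[keep])
```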

- Determine the most discrepant point:

    if ( (f_i - mu_median)^2 > limit_i ) AND
       ( (f_i - mu_median)^2 / limit_i > delta ) THEN
        delta       = (f_i - mu_median)^2 / limit_i
        delta_index = i

- Do rejection. If we have suspect pixels, throw those out first (so
  they'll be skipped in the next iteration); otherwise, remove the most
  discrepant point:

    if DEFINED(delta_index)
        if (s_i != 0)               m_i = 0
        else if (i == delta_index)  m_i = 0

- End rejection loop.
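Putting the limit, deviation, and masking steps together, one iteration of the loop might be sketched as follows. Here mu_median and sys_var are assumed to have been computed as above, and all names are illustrative:

```python
import numpy as np

def rejection_iteration(f, v, m, s, mu_median, sys_var, nsig_rej=3.5):
    """One pass of the rejection loop above: compute each input's limit,
    find the most discrepant surviving point, and mask it (masking any
    suspect pixels first). Returns the updated mask array."""
    f, v, m, s = (np.asarray(a) for a in (f, v, m, s))
    m = m.copy()
    limit = nsig_rej**2 * (v + sys_var)      # limit_i
    dev = (f - mu_median)**2 / limit         # normalized squared deviation
    dev[m == 0] = 0.0                        # ignore already-masked inputs
    if np.max(dev) > 1.0:                    # (f_i - mu_median)^2 > limit_i
        if np.any((s != 0) & (m != 0)):      # suspect pixels go first
            m[s != 0] = 0
        else:
            m[np.argmax(dev)] = 0            # drop the most discrepant point
    return m
```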

This completes the initial stack stage of ppStack. We then use the set of rejected pixels to identify the source pixels in the unconvolved input images. These are masked out, the images are reconvolved, and a final stack is constructed using the same algorithm; however, the rejection loop is skipped, so only the weighted mean is performed.