difference image concerns:

- normalizations
  - are they correct?
  - significant coupling between normalization and other kernel terms

- ringing -- there is significant ringing left behind in differences of images with PS1_V1 PSF shapes
  - is the basis function able to represent the actual observed differences sufficiently?
  - is the observed ringing the result of the coupling to the normalization?
  - is the observed ringing due to the degeneracy between basis terms?

- basis function orthogonality
  - does the non-orthogonal nature of the Alard-Lupton kernels increase sensitivity to noise and false positives?
  - can we choose a more-orthogonal basis set?

- span of basis set
  - how can we choose a set of kernels that spans a sufficient range?
  - can the Alard-Lupton basis set cover a wide enough range of structures?

- weighting?
- windowing?
- dual convolution -- strange results, with power going to non-radial terms
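The orthogonality question can be probed numerically: build a Gaussian-times-polynomial basis in the Alard-Lupton style, inspect the Gram matrix of the normalized basis vectors to quantify the degeneracy between terms, and orthonormalize via QR if desired. The kernel size, sigmas, and polynomial degrees below are illustrative assumptions, not the values used in our pipeline.

```python
import numpy as np

def gaussian_poly_basis(size=21, sigmas=(0.7, 1.5, 3.0), degrees=(4, 2, 2)):
    """Alard-Lupton-style basis: Gaussians of several widths, each
    modulated by polynomials in x and y up to the given total degree.
    Returns an array of shape (n_terms, size*size)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    basis = []
    for sigma, deg in zip(sigmas, degrees):
        g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        for i in range(deg + 1):
            for j in range(deg + 1 - i):
                basis.append(g * x**i * y**j)
    return np.array([b.ravel() for b in basis])

B = gaussian_poly_basis()

# Gram matrix of the normalized basis: off-diagonal magnitude measures
# non-orthogonality (0 = orthogonal pair, 1 = fully degenerate pair).
Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
gram = Bn @ Bn.T
off_diag = np.abs(gram - np.diag(np.diag(gram)))
print("max |cos angle| between distinct basis terms:", off_diag.max())

# One route to a more-orthogonal set: QR on the stacked basis vectors.
Q, _ = np.linalg.qr(B.T)
ortho_basis = Q.T.reshape(-1, 21, 21)
```

The large off-diagonal Gram entries (pure Gaussians of neighboring widths overlap strongly) show why the least-squares matrix is poorly conditioned for this basis.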

### Some specific bugs identified

- normalizations were failing for 1D convolutions

- the wrong preCalc element was being used in a certain context related to Dual

- the peak pixel is rediscovered, changing the center by 1 pixel for well-centered stars (not technically a bug, but it makes the chi-square images difficult to interpret)

### Examples with Gaussian PSFs

### Examples with PS1_V1 PSFs [f ~ (1 + Ar^{2} + Br^{3.3})^{-1}]
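For reference, the PS1_V1 radial profile in the heading can be evaluated directly. The A and B values below are illustrative placeholders, not fitted PS1 parameters.

```python
import numpy as np

def ps1_v1_profile(r, A, B):
    """PS1_V1 radial PSF profile f(r) = (1 + A r^2 + B r^3.3)^-1.
    A sets the core width and B the wing strength; both must be
    positive for a monotonically decreasing profile."""
    return 1.0 / (1.0 + A * r**2 + B * r**3.3)

r = np.linspace(0.0, 10.0, 6)
print(ps1_v1_profile(r, A=0.3, B=0.05))
```

The r^3.3 wing term falls off much more slowly than a Gaussian, which is what stresses the span of the Alard-Lupton basis in the examples below.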

### Dual convolution following fix (r26562)

Following simplification of the math (and corresponding simplification of the code), dual convolution is now working.

In the presence of noise, it is difficult for the least-squares solution to provide kernels that are compact, since convolution of one image by a large kernel can be matched by convolving the other image with a similarly large kernel. One solution is to add a penalty term to the least-squares problem (cf. Yuan & Akerlof, 2008ApJ...677..808Y). Another is to solve the equations once, then compare the corresponding terms in each of the kernels and mask the smaller (or both, if both are small). Below we test these solutions using the cross-directed Gaussian and PS1_V1 PSFs (5.2 pix, A/R = 1.2, Theta = +30 vs 5.2 pix, A/R = 1.2, Theta = -30).

- NONE: No attempt was made to force the kernels to be compact.
- MASK: We compared corresponding terms in each of the kernels, masking the smaller; we also masked any term with a value less than 10^{-3} of the derived normalisation.
- PENALTY: We added a penalty function following Yuan & Akerlof, scaled to match the background diagonal term in the least-squares matrix.
- BOTH: We applied both the MASK and PENALTY methods.
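A minimal sketch of the PENALTY and MASK mechanics, assuming the normal-equations matrix M and right-hand side b have already been assembled from the basis convolutions. The per-term radius weighting and the median-diagonal scaling of the penalty are illustrative stand-ins for the exact Yuan & Akerlof form, not our production code.

```python
import numpy as np

def solve_with_penalty(M, b, radii, lam):
    """PENALTY method: solve the normal equations M a = b with a
    quadratic penalty that grows with each basis term's effective
    radius, discouraging power in large (non-compact) kernel terms.
    `lam` scales the penalty relative to the matrix diagonal."""
    penalty = lam * np.median(np.diag(M)) * radii**2
    return np.linalg.solve(M + np.diag(penalty), b)

def mask_smaller_terms(a1, a2, norm, floor=1e-3):
    """MASK method: for each pair of corresponding coefficients in the
    two kernels, zero the smaller; also zero any term below `floor`
    times the derived normalisation."""
    a1, a2 = a1.copy(), a2.copy()
    smaller1 = np.abs(a1) < np.abs(a2)
    a1[smaller1] = 0.0
    a2[~smaller1] = 0.0
    a1[np.abs(a1) < floor * norm] = 0.0
    a2[np.abs(a2) < floor * norm] = 0.0
    return a1, a2
```

With lam = 0 the penalized solve reduces to the plain least-squares solution (the NONE case); increasing lam shrinks the coefficients of large-radius terms.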

The images below are montages of the residuals and the convolution kernels for the above methods, laid out:

| NONE    | MASK |
| PENALTY | BOTH |

The conclusion is that the PENALTY method works well. NONE blows up the kernel too much (as expected). MASK does not allow sufficient flexibility in the kernel to provide cross-directed kernels for both images. BOTH suffers from the weaknesses of the MASK method.

I also tried using Singular Value Decomposition to solve the least-squares equation, masking low-significance singular values, but found that the quality of the subtraction depends strongly on the choice of threshold, so further evaluation of this approach is deferred.
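The SVD approach amounts to a pseudo-inverse with a relative singular-value cutoff; the threshold below is the knob the subtraction quality proved sensitive to. This is a generic sketch of the mechanics, not the pipeline implementation.

```python
import numpy as np

def svd_solve(M, b, rel_threshold=1e-6):
    """Solve M a = b via SVD, dropping singular values below
    rel_threshold times the largest.  Dropped modes contribute
    nothing to the solution (their inverse is set to zero)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    inv_s = np.where(s >= rel_threshold * s[0], 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ b))
```

For a well-conditioned matrix and a small threshold this reproduces the direct solve; raising the threshold truncates more modes and changes the recovered kernel, which is exactly the sensitivity noted above.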

#### Gaussian PSFs

| Method  | Normalisation | Mean dev. | Peak-peak residuals   |
|---------|---------------|-----------|-----------------------|
| NONE    | 1.000113      | 0.002095  | No apparent residuals |
| MASK    | 1.006291      | 0.427949  | +90, -92              |
| PENALTY | 1.000060      | 0.015114  | +9, -11               |
| BOTH    | 0.995888      | 0.206492  | +90, -74              |

#### PS1_V1 PSFs

| Method  | Normalisation | Mean dev. | Peak-peak residuals |
|---------|---------------|-----------|---------------------|
| NONE    | 1.000116      | 0.042257  | +10, -9             |
| MASK    | 1.008349      | 0.169106  | +38, -53            |
| PENALTY | 1.000726      | 0.081741  | +18, -17            |
| BOTH    | 1.003305      | 0.557793  | +104, -110          |