Dan Briggs' Dissertation

High Fidelity Deconvolution of Moderately Resolved Sources

Abstract

This dissertation addresses several topics related to high fidelity imaging with interferometers: deconvolution simulations that show quantitatively how well existing algorithms perform on simple sources, a new deconvolution algorithm which works exceedingly well but can only be applied to small objects, and a new weighting scheme which offers a mild improvement to nearly any observation.

Robust weighting is a new form of visibility weighting that varies smoothly from natural to uniform weighting as a function of a single real parameter, the robustness. Intermediate values of the robustness can produce images with moderately improved thermal noise characteristics compared to uniform weighting at very little cost in resolution. Alternatively, an image can be produced with nearly the sensitivity of the naturally weighted map, and resolution intermediate between that of uniform and natural weighting. This latter weighting often produces extremely low sidelobes and a particularly good match between the dirty beam and its fitted Gaussian, making it an excellent choice for imaging faint extended emission.
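
The full definition of the weighting function is given in the body of the dissertation; as a rough illustration only, the sketch below implements the commonly quoted density-weighting form of the idea, and the particular normalization of the robustness parameter R used here is an assumption rather than a statement of the thesis's definition.

    import numpy as np

    def robust_weights(natural_weights, cell_index, robustness):
        """Reweight visibilities between natural (large positive R) and
        uniform (large negative R) weighting with a single parameter R."""
        w = np.asarray(natural_weights, dtype=float)
        # W_k: total natural weight gridded into each uv cell.
        W = np.bincount(np.asarray(cell_index), weights=w)
        Wk = W[cell_index]           # gridded weight density seen by each visibility
        # Assumed normalization: f^2 fixes where the transition between
        # natural-like and uniform-like behaviour occurs as R is varied.
        f2 = (5.0 * 10.0 ** (-robustness)) ** 2 / (np.sum(W ** 2) / np.sum(w))
        # Visibilities in densely sampled cells are down-weighted; visibilities
        # in sparsely sampled cells keep nearly their natural weight.
        return w / (1.0 + Wk * f2)

    # Example: three equal-weight visibilities, two of them sharing a uv cell.
    print(robust_weights([1.0, 1.0, 1.0], cell_index=np.array([0, 0, 1]), robustness=0.0))

Large positive values of R leave the natural weights essentially untouched, while large negative values divide out the local sampling density and so recover uniform weighting; intermediate values give the trade-offs described above.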

A new deconvolver has been developed which greatly outperforms CLEAN or Maximum Entropy on compact sources. It is based on a preexisting Non-Negative Least Squares matrix inversion algorithm. NNLS deconvolution is somewhat slower than existing algorithms for slightly resolved sources, and very much slower for extended objects. The solution degrades with increasing source size and at the present computational limit (~6000 pixels of significant emission) it is roughly comparable in deconvolution fidelity to existing algorithms. NNLS deconvolution is particularly well suited for use in the self-calibration loop, and for that reason may prove particularly useful for Very Long Baseline Interferometry, even on size scales where it is no better than existing deconvolvers.
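
In outline, the problem is posed as a constrained matrix equation: the dirty image is the dirty-beam matrix acting on a non-negative sky vector, and that system is solved directly subject to the positivity constraint. The sketch below is only an illustration of the idea, using a general-purpose NNLS routine (SciPy's nnls, used here purely for convenience); building the beam matrix explicitly is what makes the cost grow so quickly with the number of pixels allowed to contain emission.

    import numpy as np
    from scipy.optimize import nnls

    def nnls_deconvolve(dirty_image, dirty_beam, window):
        """Deconvolve by solving  A x = d  with  x >= 0, where column j of A
        is the dirty beam centred on the j-th pixel allowed to hold emission.
        The dirty beam is assumed here to be the same size as the image."""
        ny, nx = dirty_image.shape
        cy, cx = ny // 2, nx // 2                 # assumed beam centre
        src_pixels = np.argwhere(window)          # pixels allowed to contain flux
        A = np.zeros((ny * nx, len(src_pixels)))
        for j, (y, x) in enumerate(src_pixels):
            # Cyclic shift of the beam; a real implementation would use a
            # properly padded beam patch instead.
            shifted = np.roll(np.roll(dirty_beam, y - cy, axis=0), x - cx, axis=1)
            A[:, j] = shifted.ravel()
        x, rnorm = nnls(A, dirty_image.ravel())   # the non-negative least-squares step
        model = np.zeros_like(dirty_image)
        model[window] = x                         # scatter the solution back onto the grid
        return model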

The basic practice of radio interferometric imaging was re-examined to determine fundamental limits on the highest quality images. As telescopes have become better, techniques which served an earlier generation are no longer adequate in some cases. Contrary to established belief, the deconvolution process itself can now contribute an error comparable to that of residual calibration errors. This is true even for the simplest imaging problems, and only the fact that the error morphologies of deconvolution and calibration errors are similar has masked this contribution until now. In cases where it can be applied, these deconvolution problems are largely cured by the new NNLS deconvolver. An extensive suite of simulations has been performed to quantify the expected magnitude of these errors in a typical observing situation.
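
The structure of such a simulation is easy to sketch: generate a known model, form a dirty image with a known dirty beam, deconvolve, and compare the restored image against the model smoothed to the same resolution, so that any remaining discrepancy is attributable to the deconvolver rather than to calibration. The toy example below uses a plain Högbom CLEAN and an artificial beam purely for illustration; the simulation suite in the dissertation is of course far more extensive and careful than this.

    import numpy as np

    def convolve(image, kernel):
        """Cyclic convolution via FFTs; the kernel is assumed to be centred."""
        return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(kernel))))

    def hogbom_clean(dirty, beam, gain=0.1, niter=2000):
        """Toy Hogbom CLEAN: subtract scaled, shifted dirty beams at successive
        residual peaks and accumulate delta-function components."""
        residual = dirty.copy()
        model = np.zeros_like(dirty)
        cy, cx = residual.shape[0] // 2, residual.shape[1] // 2
        for _ in range(niter):
            y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            peak = residual[y, x]
            model[y, x] += gain * peak
            residual -= gain * peak * np.roll(np.roll(beam, y - cy, axis=0), x - cx, axis=1)
        return model, residual

    # Known truth: a point source plus a small Gaussian blob.
    n = 64
    yy, xx = np.mgrid[:n, :n]
    truth = np.zeros((n, n))
    truth[32, 30] = 1.0
    truth += 0.2 * np.exp(-((yy - 34) ** 2 + (xx - 36) ** 2) / (2 * 2.0 ** 2))

    # Crude stand-in for a dirty beam: a narrow core with a sidelobe ripple.
    core = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 1.5 ** 2))
    beam = core + 0.05 * np.cos(0.7 * (xx - 32)) * np.exp(-((yy - 32) ** 2) / (2 * 6.0 ** 2))
    beam /= beam[32, 32]                      # normalize the dirty beam to unit peak

    dirty = convolve(truth, beam)
    model, residual = hogbom_clean(dirty, beam)

    # Fidelity test: restore with the beam core and compare against the truth
    # smoothed to the same resolution, so any difference is deconvolution error.
    restored = convolve(model, core) + residual
    reference = convolve(truth, core)
    print("rms deconvolution error / peak:", (restored - reference).std() / reference.max())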

The new techniques have been demonstrated by observational projects with the Very Large Array, the Australia Telescope Compact Array and the Very Long Baseline Array on the sources 3C48, SN1987A and DA193 respectively. The 3C48 project was designed to trace or exclude extended emission from a VLBI-scale disrupted jet, and yielded a null result at a noise-limited dynamic range of 180,000:1 from an extended object. The SN1987A project was designed for the highest resolution imaging possible and yielded, at a high confidence level, astrophysically important structure at half the synthesized uniform beamwidth. The DA193 project was primarily a test of the new VLBA telescope, but yielded as a by-product the highest dynamic range images ever produced by VLBI. There are no comparable observations on other telescopes, but the observed 115,000:1 exceeded the previous record by more than a factor of 10.


Postscript Files

The entire document, including figures, is available in postscript format, and may be downloaded either chapter by chapter or as a single (large) file. The preferred format for downloading is the gzip format, where the file is uncompressed at your end. Unfortunately, most clients will not recognize this encoding when it is generated on the fly as it is here, so you will probably need to save the binary file and run "gzip -d" on it manually. If gzip is not available on your system, you may use the compress format commonly found on unix systems; "compress -d" will likely reverse the compression. For on-line browsing of the small chapters, and as a last resort for the large ones, you may download the files as plain ascii postscript. Note, however, that the full uncompressed file is 30.3 megabytes, compared to 8.6 for the gzip version.

If you've got comments, I'd love to hear them.

dbriggs@rira.nrl.navy.mil