
Some thoughts on the role of software for the GMRT

Mapping at low frequencies is an inherently more compute-intensive task. The algorithms for eliminating the distortions due to the large field of view require substantially more computing power, and the yet-to-be-developed techniques for phase calibration in the presence of a non-isoplanatic ionosphere are bound to increase the requirement further. These computational challenges can be handled by parallelizing the imaging and phase calibration algorithms. This can be done either on dedicated high performance computers (work on this is already in progress in AIPS++) or on a network of stand-alone, reasonably fast computers connected via fast data links (distributed computing). The advantage of the second approach is that it will be accessible to a larger community at a relatively lower cost.
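For concreteness, the following Python sketch (not taken from AIPS++ or the GMRT software, and using invented function names) shows the data-parallel pattern for one of the more expensive imaging steps, the gridding of visibilities: the data are split into chunks, each worker grids its share independently, and the partial grids are summed before the Fourier transform. The same pattern applies when the workers are separate machines on a fast network rather than local processes.

import numpy as np
from multiprocessing import Pool

def grid_chunk(args):
    # Grid one chunk of visibilities onto a common uv grid using
    # crude nearest-cell gridding (illustrative only).
    u, v, vis, npix, cell = args
    grid = np.zeros((npix, npix), dtype=complex)
    iu = np.round(u / cell).astype(int) % npix
    iv = np.round(v / cell).astype(int) % npix
    np.add.at(grid, (iv, iu), vis)
    return grid

def parallel_grid(u, v, vis, npix=1024, cell=1.0, nworkers=4):
    # Split the visibility data into nworkers chunks, grid each chunk
    # in a separate process, and sum the partial grids before the FFT.
    chunks = [(u[i::nworkers], v[i::nworkers], vis[i::nworkers], npix, cell)
              for i in range(nworkers)]
    with Pool(nworkers) as pool:
        grids = pool.map(grid_chunk, chunks)
    return np.sum(grids, axis=0)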

Deconvolution errors in the presence of extended emission, which is abundant in the Galactic plane, will be significant (Briggs 1995). Apart from generating algorithm-related artifacts, the residuals are also correlated with the sources in the field, indicating systematic errors in the model image (or equivalently, the deconvolved image). One important piece of information not used by the deconvolution algorithms is that extended emission has a finite correlation length, which can vary across the image. Any algorithm which uses this information is likely to perform much better in terms of image fidelity. Also, most iterative deconvolution algorithms stop iterating when the brightest component is comparable to the expected thermal noise. This stopping criterion is adequate for compact emission; for extended emission, however, it leaves behind correlated flux which lies below the estimated noise level.
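The following is a minimal Python sketch of a Hogbom-style CLEAN loop illustrating the point. The conventional peak-based stopping test is supplemented by a check on the residual smoothed over an assumed correlation length (smooth_pix); this supplementary test is an illustrative choice for this sketch and not part of any existing package.

import numpy as np
from scipy.ndimage import uniform_filter

def clean_with_extended_stop(dirty, psf, gain=0.1, sigma=1.0,
                             smooth_pix=5, max_iter=10000):
    # Toy Hogbom-style CLEAN loop; `psf` is assumed to be the same
    # size as `dirty` and centred on the image centre.
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    ny, nx = dirty.shape

    for _ in range(max_iter):
        peak_idx = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[peak_idx]

        # Conventional stopping criterion: brightest residual ~ thermal noise.
        if np.abs(peak) < sigma:
            # Additional check for extended emission: flux that is
            # individually below sigma may still be significant when
            # averaged over the correlation length of the emission.
            smoothed = uniform_filter(residual, size=smooth_pix)
            if np.max(np.abs(smoothed)) < sigma / smooth_pix:
                break

        model[peak_idx] += gain * peak
        # Subtract the scaled PSF shifted to the peak position; np.roll
        # is a crude stand-in for a proper sub-image subtraction.
        shifted = np.roll(np.roll(psf, peak_idx[0] - ny // 2, axis=0),
                          peak_idx[1] - nx // 2, axis=1)
        residual -= gain * peak * shifted

    return model, residual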

The field of view and the scale of radio emission at low frequencies are relatively large. Conventional deconvolution algorithms (CLEAN- or MEM-based), which treat each pixel in the image as an independent degree of freedom (zero correlation length), produce the well-known instabilities and "breaking up" of extended emission. A generalization of the MEM and multi-scale deconvolution algorithms using a pixel model with a finite correlation length (Pixon) would drastically reduce the number of degrees of freedom required to represent the image (Puetter & Pina 1994; Pina & Puetter 1993). Development of such algorithms for the deconvolution of images made using radio interferometric telescopes is a very interesting and useful future direction of research in this field.
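The sketch below illustrates only the underlying idea of a finite correlation length (it is not the Pixon algorithm itself): the residual is scored at several trial component widths and the strongest smoothed response is selected, so that an outer loop such as the one above would add a broad component to the model instead of a single pixel. The trial scales are purely illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_peak(residual, scales=(0, 2, 4, 8)):
    # Score the residual at several trial component widths (in pixels)
    # and return the width, position and value of the strongest smoothed
    # response.  A patch of extended emission is then represented by a
    # few broad components rather than thousands of independent pixels.
    best_scale, best_idx, best_val = None, None, 0.0
    for s in scales:
        smoothed = residual if s == 0 else gaussian_filter(residual, s)
        idx = np.unravel_index(np.argmax(np.abs(smoothed)), smoothed.shape)
        if np.abs(smoothed[idx]) > np.abs(best_val):
            best_scale, best_idx, best_val = s, idx, smoothed[idx]
    return best_scale, best_idx, best_val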

Mapping at low frequencies is a relatively more difficult and time-consuming task. This is due to a combination of a larger number of sources of data corruption and the inherent difficulties of mapping at low frequencies (see Chapter 4). Refining the techniques and methodology for data calibration and analysis described in this dissertation, with the ultimate goal of developing techniques for automatic data flagging, is an interesting future direction of research, particularly in the context of low frequency instruments like the GMRT and future instruments like the Low Frequency Array (LOFAR) and the Square Kilometre Array (SKA). Software developed during the course of this work produces information about corrupted data, at least for the calibrator scans. Development of supporting software to automatically identify systematic patterns (like consistently bad baselines/channels) and transform this information into flagging tables directly readable by the mapping software is highly desirable. Similarly, development of software to generate on-line flagging information based on the health of the various sub-systems is required to improve the data quality. Combining this automatic on-line and off-line flagging information will go a long way towards improving the data quality and reducing the time it takes to map with such data. Work along these lines will be almost necessary for large scale mapping projects like multi-frequency surveys of the Galactic plane.
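As an illustration of the off-line part, the hypothetical Python routine below flags baselines and channels whose amplitude scatter on a calibrator scan is a strong outlier with respect to the ensemble. The assumed data layout (time x baseline x channel) and the threshold are purely illustrative, and the resulting masks would still have to be translated into the flag-table format read by the mapping package.

import numpy as np

def flag_bad_baselines_channels(vis, nsigma=5.0):
    # `vis` is assumed to be a complex array of calibrator-scan
    # visibilities with shape (ntime, nbaseline, nchan).  On a
    # calibrator the amplitude should be roughly constant, so baselines
    # or channels with anomalously large scatter are consistently bad.
    amp = np.abs(vis)

    def robust_outliers(stat):
        # Median/MAD-based outlier test, robust to the bad data itself.
        med = np.median(stat)
        mad = 1.4826 * np.median(np.abs(stat - med)) + 1e-12
        return np.abs(stat - med) > nsigma * mad

    bl_rms = amp.std(axis=(0, 2))   # scatter per baseline
    ch_rms = amp.std(axis=(0, 1))   # scatter per channel

    return robust_outliers(bl_rms), robust_outliers(ch_rms)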

The software for on-line control of the telescope and the electronics is complex, requiring a large number of settings for a typical observation. At present the complexity of this software for the GMRT is directly visible to the end user - something which is neither desirable nor necessary. Design and development of an integrated interface, which presents an astronomically useful view of the instrument (antenna pointing, tracking, frequency/bandwidth selection, observing schedule, etc.), will go a long way towards making the instrument easier to use as well as reducing the possibility of human error during observations.

An on-line data display for the GMRT is all but missing. The data acquisition software as well as the on-line telescope control software, however, provide enough hooks from which the relevant information can be tapped and displayed on-line for much improved monitoring of the system as well as of the data. This again requires an integrated software system which can perform some on-line data processing and display. The former (the on-line data processing software) was, to a large extent, developed during the course of this work and is currently usable. Development of the latter (the display software) is still in its infancy and requires further work.

All the research and development suggested above, which ultimately also requires non-trivial software development, must be done in a well-thought-out manner. This usually requires the participation of a number of people with varied interests and skills, and at different points of time. A carefully thought-out underlying software design, for the long term use and development of the software, is therefore necessary. In my opinion, a patchy, ad hoc development of potentially unrelated stand-alone programs, written by people with different skills and styles of coding (possibly based on old software technology and design techniques) at different times, is unlikely to be useful in the long run and will be a waste of enormous human and computing resources.

Observational astronomy and related instruments have moved into a regime which is dominated by the use of complex software which is not always hidden from the end user (particularly when directly using complex instruments like the GMRT). In the context of computer language design, it has been said that ``the connection between the language in which we think/program and the problems and solutions we can imagine is very close. For this reason restricting language features with the intent of eliminating programmer errors is at best dangerous''. Similarly, I reckon, the problems and solutions astronomers can imagine are closely related to their perception of the capabilities of the available data analysis software. Challenging new observations, almost by definition, require new capabilities in data analysis software and instrumentation. Aspiring and practicing observational astronomers who like this kind of research will therefore gain from learning new software design and development techniques. In my experience, these are not fancy tools which can be dispensed with; they result in stable, easy-to-modify, easy-to-debug, and hence reliable software which, in the long run, is a big time saver as well. Fluency in using, developing and modifying software gives freedom of thought in research, which is of paramount importance in forging new areas of research - and going where few (or none) have gone before.

