Memo Review Reply
Memo: 372 - An Amplitude Calibration Strategy for ALMA
Moreno & Guilloteau, 2002May10
Reviewer: Bryan Butler (unsolicited)
Date Received: 2002Aug28
Reply from: Stephane Guilloteau
Date Received: 2002Sep20
SG comments are interspersed in the original Butler review, and
are denoted by 'begin SG' and 'end SG'.
Summary:
The title for this memo is somewhat misleading, as it treats issues
which would not normally be thought of as "amplitude calibration" (in
the traditional way). But, formally, these issues do impact the ability
to calibrate the amplitudes - it's just that folks may be surprised to
open up the memo and find, e.g., a discussion on focus. Nearly all
aspects of calibration are discussed and treated (except polarization,
which is specifically excluded), and as such I think it can serve as a
valuable basis for further discussion on the complete calibration scheme
for ALMA. But the memo suffers from a number of flaws which prohibit
it from being accepted as is as a recipe for the calibration scheme for
ALMA. In addition, there are a number of places where assumptions or
statements are made with no discussion or corroborative evidence ("proof
by assertion"). Two examples: 'Moreover, polarized emission from the
asteroid edges will be a larger fraction than for the small planets or
giant satellites.' - this statement is not discussed or defended, and
is not strictly correct;
begin SG:
- OK, to be discussed further
end SG:
and 'Since the accuracy of opacity correction
allows to take a source 10 deg away,...' - again, not defended or
discussed, and I'm not sure that it is correct either.
begin SG:
- I guess you missed the discussion because you skipped over Section
6. It is discussed extensively in 6.1.2, page 16.
end SG:
These kinds of
statements are sprinkled throughout, making the document mostly a
collection of qualitative arguments lacking rigour. Again, it's a nice
framework for discussion and further refinement, but cannot be taken
as is. To me, the 2 most important points of the memo are: the
importance of the sideband gain ratio calibration (I think this needs
more work); and the suggestion to relax the 1% accuracy spec to 3% in
the submm (this needs more discussion, but is certainly reasonable to
consider at least).
Let me now make more specific comments on the memo:
p.2: 'However, they have not yet been measured or modeled with better
than 5% accuracy.' This is not right - Gene Serabyn has numbers
on Uranus and Neptune that he thinks are accurate to a few %
and perhaps better (see his DPS paper from the Pasadena DPS, or his
contribution to the IAU Morocco site testing meeting if there is
nothing more recent on this from him).
begin SG:
- at the time the memo was written, these were not published results.
Also, to be accepted as accurate, the values would need some
confirmation (measurement, or independent modeling from another
group...). What is the difference between 5 % and "a few %" ?
end SG:
'In most cases, going from 5% to 1% typically implies 25 times
longer calibration times.' This is only true if you are thermally
limited in your errors at both 5% and 1%.
begin SG:
- This is the reason for the "In most cases". Calibration is often
thermally limited, as the many examples in the document show.
end SG:
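For the thermally limited case the scaling is simple: the fractional
calibration error goes as 1/sqrt(t), so the required time goes as the
inverse square of the target accuracy. A minimal sketch of that
arithmetic only (it assumes pure radiometer noise, nothing about the
memo's detailed error budgets):

    # Thermally limited case: fractional error ~ 1/sqrt(t), so the
    # integration time scales as the inverse square of the target error.
    def time_ratio(err_from_pct, err_to_pct):
        return (err_from_pct / err_to_pct) ** 2

    print(time_ratio(5.0, 1.0))   # 25.0 : going from 5% to 1% costs 25x the time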
p.3: 'For a single-dish telescope, the last term plays no role.'
This is not true for large single dishes. The LMT, for instance,
will have to worry about this. For a single dish the effect is
much smaller, of course, but it is just decorrelation on a size
scale equal to the diameter of the antenna. The effect may be
very small, and might be negligible for 12-m antennas on the
Chajnantor site, but it is not correct to say that it 'plays no
role.'
begin SG:
- Decorrelation due to electronic phase noise plays no role in
single-dish
- Atmospheric effects related to pathlength fluctuations within the
antenna diameter are usually called "anomalous refraction", and as
such I agree they do play a role in single-dish data. It is just
usually assigned to the pointing error budget, and should not be
counted twice.
end SG:
Here is the first elaboration of what the authors perceive as
the necessary 'calibrations' - what will be discussed further in
the memo. I agree with most of it, and will make some comments
later on specific disagreements, but let me make one comment here.
'decorrelation estimates' are included in item 6. If we have
data that is significantly decorrelated on the timescale of
fast switching or WVR calibration, then we are in real trouble.
That is the point of having those calibrations - to avoid the
decorrelation, i.e., make it possible to track accurately the
phase variations on short timescales and correct them in the data.
p.4: I find the description of an observing session extremely
confusing. What is 'nf'? 'na'? etc...
begin SG:
- Arbitrary repeat counts. We can add a short explanatory sentence.
end SG:
p.5&6: There are several problems with the discussion on the emission
from planets, satellites, and asteroids:
- The uncertainties on Uranus and Neptune are better than 5%,
if you believe Gene Serabyn's most recent results.
begin SG:
- The text already says Uranus is known to better than 5 %. For
Neptune, it is yet unclear: it may be true well away from the CO lines,
some of which are as strong as 30 %... We all agree that so far
Uranus is the best "ABSOLUTE" calibrator (if not the only one...).
end SG:
- Titan's mm/submm continuum is *not* known to 5% accuracy.
==> Rafael
- Polarization at mm/submm wavelengths is *not* negligible
for Titan (in the long-mm, surface emission contributes a
significant amount, in the submm, there may be polarization
from large haze/cloud particles).
==> Rafael
- Modelling the planets and satellites as a smooth isothermal
dielectric sphere is a mistake. There is no reason to do this,
either - especially the isothermal part, but also the smooth
part (and, in addition, the spatially homogenous assumption
which is implicit here). If such assumptions are made, there
is no way we will reach 1%. We should use proper models which
have temperatures calculated properly, and use all available
information on surface and subsurface properties for planets,
satellites, and asteroids.
begin SG:
- Absolutely correct. The more complete the model, the better.
The smooth isothermal dielectric sphere is a first order
approximation only.
end SG:
- The polarized emission from the asteroid 'edges' is no more
than that from the satellites or solid body planets, at least
for the larger (spheroidal) ones [which are the only ones
useful in this context]. The physical mechanism is the same,
being caused by the different Fresnel transmissivities in the
2 different linear polarizations as the emission passes through
the surface-to-atmosphere interface (see the sketch after this
discussion). The only difference is
in the 'order' (or regularity, if you will) of the polarized
response. Asteroids will have a somewhat more disordered
polarization response because their topography/roughness is a
larger fraction of their size. However, for the larger
asteroids (> 150 km diameter or so - the only ones useful in
this respect), this should be a small effect, and good shape
models can be used to ameliorate the problem. See Lagerros'
papers on this.
==> Rafael
- There is a major problem with using the giant planet
satellites in this way - we do not understand physically why
the brightness temperature is depressed as it is at mm
wavelengths, so we have trouble modelling it successfully.
The authors quote Muhleman & Berge (1991), but leave out
this important finding from that paper - I quote from it:
"Much work remains to be done to explain the anomalous
behavior of the Galilean Satellites in both microwave emission
and radar backscattering."
==> Rafael
- Another problem with using the large icy satellites is that
they are not distributed on the sky very uniformly. Asteroids
do not suffer from this problem, of course.
I think that because of these arguments, asteroids might be much
more useful than the authors conclude. They are relatively bright,
relatively small, have relatively easily modeled emission (including
the light curves - see recent papers/theses by Lagerros and Muller),
and are well distributed across the sky (one is observable at most
times).
begin SG:
- The last (and most convincing) Lagerros papers were not out at the
time we wrote the memo. Lagerros claims an accuracy "within 5 %" for
Ceres, Pallas and Vesta, and considerably worse for the others
(10 -- 15 %), and at IR wavelengths only. But all these asteroids
have low emissivities at mm wavelengths which are not yet fully
understood (Redman et al 1998). This is exactly like the "anomalous"
behavior of the Galilean Satellites at mm wavelengths. Our understanding
is limited here.
end SG:
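Coming back to the 'asteroid edges' item above: here is a minimal
sketch of the limb polarization produced by the differing Fresnel
transmissivities of the two linear polarizations at a smooth
dielectric surface. The refractive index n is an illustrative value
only, not a measured asteroid property:

    import numpy as np

    # Kirchhoff's law: the emissivity in each linear polarization is
    # 1 minus the Fresnel reflectivity for a wave incident from vacuum
    # at the emission angle theta (0 = disk centre, ~90 deg = limb).
    def limb_polarization(theta_deg, n=1.7):     # n is illustrative only
        th = np.radians(theta_deg)
        ct, st = np.cos(th), np.sin(th)
        ctt = np.sqrt(1.0 - (st / n) ** 2)       # cosine of refracted angle
        Rs = ((ct - n * ctt) / (ct + n * ctt)) ** 2
        Rp = ((n * ct - ctt) / (n * ct + ctt)) ** 2
        es, ep = 1.0 - Rs, 1.0 - Rp              # per-polarization emissivities
        return (ep - es) / (ep + es)             # fractional linear polarization

    for th in (0, 30, 60, 80):
        print(th, round(float(limb_polarization(th)), 3))

The mechanism is identical for asteroids, satellites and solid
planets; what differs is how the locally polarized emission averages
over the disk, which is where shape and roughness enter.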
p.7: In discussion on quasars as primary calibrators: 'At the
Chajnantor site, given the variety of hour angle and declination,
one of them will be available at anytime for bandpass calibration.'
We may find that we want the bandpass calibrator near the source
of interest, not just anywhere in the sky. This is certainly the
case at the VLA. It will depend on the stability of the bandpass
with time, temperature, antenna motion, etc...
begin SG:
- That is a key issue and a key design problem. We have to build the
instrument so that the bandpass IS stable in time. Otherwise, there
is little hope to be able to calibrate it out in any way. It would be
interesting for ALMA to understand the main cause of the VLA bandpass
instability. The fact that a nearby calibrator gives better results on
the VLA suggests some possible major origins:
- delay change due to insufficiently accurate cable-length
compensation
- differential delay changes due to different mechanical
behavior of the antennas
- receiver gain changes as a function of antenna elevation
end SG:
In addition, I think that the quasars might actually be useful
at the long-mm wavelengths (we use 3C286 at the VLA at 7mm,
and the accuracy is limited mostly by the uncertainty on the
brightness temperature of Mars).
begin SG:
- 3C286 can be an excellent SECONDARY calibrator, since its flux
appears stable in time. But it is impossible to predict its flux
from theory, so it cannot be a PRIMARY calibrator. Also, no theory
ever tells you it will remain stable...
end SG:
section 4: I find this section nearly useless, since the phase
fluctuation statistics from the STI were not used. I'm not sure
how the authors attempted to represent the correlation between the
various parameters, since it is not discussed, but I view it with
skepticism. A proper treatment of this should use mostly the phase
from the STI and opacity from the 225 GHz tipper (scaled properly
with frequency) [a possible addition is the change of temperature
with time, i.e., dT/dt, since during dawn or dusk the temperature
gradient will be large and you probably won't want to observe in
the submm then because of antenna thermal deformations].
begin SG:
- Accounting for the correlation (or lack thereof) between phase noise and
transparency should be done at some point. However, for the amplitude
calibration, the first order effect remains the atmospheric
absorption, and as such Section 4 is not "useless".
end SG:
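On scaling the 225 GHz tipper opacity to the observing frequency: a
minimal sketch of the kind of first-order scaling meant above. The
dry-opacity terms and the wet scaling ratio below are placeholders;
a real implementation would take them from an atmospheric model for
the actual band and conditions:

    # Split the 225 GHz zenith opacity into a dry part and a wet
    # (water vapour) part, then scale the wet part to the target band
    # with a model-derived ratio.  All coefficients are placeholders.
    def tau_at_band(tau_225, tau_dry_225=0.01, wet_ratio=20.0,
                    tau_dry_band=0.05):
        tau_wet_225 = max(tau_225 - tau_dry_225, 0.0)
        return tau_dry_band + wet_ratio * tau_wet_225

    print(tau_at_band(0.04))   # rough zenith opacity at the target band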
p.12,13: Why use 'several "simple" approximations' to Tsys? Why not
simply calculate it, or use what has been published before in ALMA
memos?
begin SG:
- Because an exact calculation is not useful here: the "exact" Tsys
depends too much on the exact observing frequency and atmospheric
conditions. A representative, easily verifiable approximation is
better for our purpose.
end SG:
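For reference, the 'exact' calculation itself is not the difficulty;
it is the dependence on frequency and weather. A minimal sketch of
the usual single-sideband system temperature referred outside the
atmosphere, with illustrative numbers only for the temperatures and
forward efficiency:

    import math

    # Tsys(SSB, above atmosphere) =
    #   exp(tau*A) * [Trx + eta_f*Tatm*(1 - exp(-tau*A)) + (1-eta_f)*Tamb]
    # with A the airmass and eta_f the forward efficiency.
    def tsys_ssb(trx, tau, airmass=1.0, tatm=260.0, tamb=270.0, eta_f=0.95):
        loss = math.exp(tau * airmass)
        tsky = eta_f * tatm * (1.0 - math.exp(-tau * airmass))
        return loss * (trx + tsky + (1.0 - eta_f) * tamb)

    # e.g. Trx = 50 K, tau = 0.06, 45 deg elevation (airmass ~ 1.41)
    print(round(tsys_ssb(50.0, 0.06, airmass=1.41), 1))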
section 6: I did not go through these sections much at all because
memos 422 and 423 supersede this section.
section 7.1: The discussion here is correct, but the authors seem to
be operating under the assumption that every observing program will
need to calibrate delays. I disagree. The delay only needs to be
determined once (each time an antenna is moved and reconnected) per
antenna. It may need to be done once per feed/Rx system, but even
that remains to be seen (there may simply be a stable difference
between each band).
begin SG:
- That is overly optimistic. In a system with cable length compensation
active, tracking the instrumental delay when the receiver is retuned
is not a trivial task. Delays can change for many reasons: thermal
dilatation is one of the most obvious, but switching an attenuator
can have an effect also. Continuous changes can be tracked by a
round-trip phase correction, but jumps cannot. Re-calibrating the
delay is an important step for final accuracy.
end SG:
section 7.2: This is an important point, and one that has not received
enough attention, I think. Is this calibratable a priori? How
strong a function of frequency within a given band is the effect?
We need some interaction with the engineers on this topic.
sections 7.3 and 7.4: I do not understand the distinction between
'fine scale' and 'large scale' bandpass. The bandpass is the
bandpass. It might be different on source (could be narrow) and
secondary calibrator [*not* the 'bandpass calibrator'] (always wide
bandwidth), so will have to be measured in 2 correlator
modes/frequencies, but I fail to see the reason to separate it into
fine scale and large scale bandpass. The discussion of the
coherent source in the subreflector to calibrate the bandpass is
good, and we need to visit that topic in more detail.
begin SG:
- The bandpass is the product of the contributions from many
components. To mention only 3: atmosphere, receiver, bandpass filter.
The bandpass filter response does not depend on the receiver. The
receiver bandpass response should have no narrow feature. Hence it
may be interesting to calibrate the overall bandpass in two steps.
Sections 7.3 and 7.4 just attempt to quantify the required
calibration time in such a mode.
end SG:
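The S/N argument behind the split can be stated in one line: the
per-channel signal-to-noise goes as sqrt(bandwidth x time), so
reaching a given fractional accuracy in a narrow channel takes longer
than in a wide chunk by the ratio of the bandwidths. A minimal sketch
with illustrative channel widths only:

    # Per-channel S/N ~ sqrt(dnu * t): the same fractional accuracy in
    # a narrow channel costs (dnu_wide / dnu_narrow) times more time.
    def time_factor(dnu_wide_hz, dnu_narrow_hz):
        return dnu_wide_hz / dnu_narrow_hz

    # e.g. a 2 GHz 'large scale' chunk vs a 30 kHz 'fine scale' channel
    print('%.0fx longer per channel' % time_factor(2e9, 30e3))

This is one way of reading the two-step proposal above: the smooth,
receiver-dependent part can be measured quickly and often, while the
stable, filter-dependent fine structure needs the long integrations.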
section 7.5: It is appropriate to bring this up, and, again, we need
more discussion on this (especially in combination with the
coherent device in the subreflector). I'm not sure I agree with
the statement that it will take 100 times longer to calibrate it
than the bandpass or sideband gain ratio. I'm also not sure what
they mean by a 'half-wavelength modulation scheme to reduce any
standing wave pattern'. Maybe this is well known in some circles,
but I am not sure what they are referring to.
begin SG:
- Spending half of the time with the nominal focus and half of it with
the focus displaced by a quarter wavelength would add the baseline
ripples in opposition, thereby minimizing them. This is a standard
procedure on single-dish telescopes. The whole point of Section 7.5
is to point out that it may be better to "suppress" the ripples as
much as possible than to expect to be able to calibrate them, given
the long integration times required for this.
end SG:
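A worked illustration of why the quarter-wavelength displacement
works: a standing wave between feed and subreflector produces a
spectral ripple going as cos(4*pi*d*nu/c); moving the subreflector by
lambda/4 adds lambda/2 to the round trip, i.e. pi to the ripple
phase, so the average of the two focus positions cancels the ripple
to first order. The ripple amplitude and the feed-subreflector
distance below are arbitrary illustrations:

    import numpy as np

    c = 299792458.0
    nu = np.linspace(230e9, 232e9, 2001)   # 2 GHz of spectrum (illustrative)
    d = 6.0                                # feed-subreflector distance, m
    lam = c / 231e9                        # wavelength at band centre
    A, phi = 0.02, 0.3                     # 2% ripple, arbitrary phase

    ripple = lambda dd: A * np.cos(4 * np.pi * dd * nu / c + phi)
    averaged = 0.5 * (ripple(d) + ripple(d + lam / 4))

    print(np.abs(ripple(d)).max(), np.abs(averaged).max())
    # ~0.02 before, ~1e-4 after averaging the two focus positions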
section 8: I don't know why they use a 'semi-optimized five point
method' when scanning circles or continuous triangles have been
shown to be more efficient (see e.g., Steve Scott's OVRO memo on
how they do it there).
begin SG:
- It does not significantly change the time estimate.
end SG:
I would argue against using 'major
satellites' for pointing. The variable emission from the primary
(variable because it's moving around in the beam) will probably
confuse things more than we want.
begin SG:
- In interferometry, this is far less of a nuisance than in single-dish
observations.
end SG:
Asteroids, on the other hand, might be quite useful for this.
begin SG:
- Yes, provided the ephemerides are known to 0.1" accuracy. I believe this
is not the case: the current ephemerides only have 1" accuracy,
which makes them useless for pointing.
end SG:
The calculation shown in Figure 3
is a nice one, but despite all of this observers may wish to
determine pointing *at the observing frequency*. Theoretically,
it only needs to be determined at one frequency and then have
(presumably) well-known collimation offsets applied for the other
frequencies. In practice, this does not work as well as one would
hope, and our experience at the VLA is that if you want to do the
highest dynamic range/fidelity/sensitivity mapping observations
you want to determine the pointing at the observing frequency
rather than determining it at, say, 3.5 cm and then applying the
collimation offsets.
begin SG:
- I agree: collimation offsets may change because of thermal
deformations of the main dish which affect differently each
frequency... It will depend on the antenna quality, and cannot be
decided before we test them...
end SG:
I like very much the discussion on looking
at a number of nearby sources to get a 'local pointing model'.
This is a good concept and one we should adopt as the working
model for pointing calibration, I think.
section 9: The discussion here is good and I think correct, but I would
make a similar argument here as for the delay determination. The
focus needs to be determined only once per antenna per receiver
cartridge, and perhaps as a function of elevation (although the
model for this deflection should be pretty good and might be good
enough to use from scratch - only experience will tell). Thermal
deformations can probably be modeled as well. The focus only needs
to be redetermined if something changes mechanically on the antenna.
It does not need to be determined by every observing program.
begin SG:
- That is not true: the antenna is always changing due to thermal
deformations. The whole point of Section 9 is to show that, with the
current antenna specification, the focus may need to be re-determined
every 10 minutes at high frequencies. This is a serious issue.
end SG:
The statement 'Frequencies around 90-100 GHz are optimal.' does
not make any sense - the focus needs to be determined for each
band (on each antenna) independently.
begin SG:
- This is like collimation offsets: focus offsets should be constant to
first order, but only to first order.
end SG:
section 10: I think the arguments here are superseded by memos 403 &
404 for fast switching. In fact, I'm not sure why this section is
included at all, except in a 'completeness' sense.
begin SG:
- Memo 372 is older than 403 or 404... Also, we thought that the
required integration time for atmospheric transparency calibration
was an important issue. Observing strategies will depend very much on
whether this time is short or not.
end SG:
section 11.1: 'Since the accuracy of opacity correction allows to take
a source 10 deg away, we can use a Q7 quasar of typical flux
S0 = 1.5 Jy...". This isn't right, since your flux density
calibrator can't be just any old Q7 quasar, but rather has to be
one of a small subset, *unless* you plan on monitoring every Q7
quasar regularly (and by this I mean every day, since flux density
variations of several percent or larger occur daily).
begin SG:
- No. It can be any Q7 quasar, but you have to bootstrap the flux of
this quasar to some SECONDARY calibrator for each observation.
end SG:
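A minimal sketch of that bootstrapping chain, with all fluxes and
amplitudes invented purely for illustration; the only point is the
order of operations:

    # The amplitude calibrator near the target can be any quasar; its
    # flux is tied, within the observation, to a SECONDARY calibrator
    # whose flux is itself monitored against a PRIMARY calibrator.
    secondary_flux_jy = 2.8        # from regular monitoring vs a PRIMARY

    amp_on_secondary = 0.140       # raw correlated amplitude, secondary
    amp_on_q7 = 0.075              # raw correlated amplitude, Q7 quasar

    gain = amp_on_secondary / secondary_flux_jy
    q7_flux_jy = amp_on_q7 / gain  # bootstrapped flux of the Q7 quasar

    print(round(q7_flux_jy, 2))    # now usable to calibrate the target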
I think that
given our current thinking on breaking observing runs into small
chunks, the use of quasars for absolute flux density won't work.
If we had long runs (similar to what is currently done at mm arrays
[or cm arrays, for that matter]), then you could catch one of the
small number of prime flux density calibrators at some point in your
run, but given small chunks, this becomes unrealistic.
I think we'll want primary calibrators closer than 10-15 deg from
secondary calibrators. But, this might be a problem with short
observing blocks.
begin SG:
- Yes, and the conclusion is that "SMALL CHUNKS ARE UNREALISTIC".
Although not explicitly written in the memo conclusions, this is an
important one which derives from the calibration time estimate given
in the Tables.
end SG:
section 11.4: 'Using a source model and the actual layout...'. Which
source model?
begin SG:
- Table 9 was built using a 1" disk, but the order of magnitude will
remain the same whatever the exact source model and array layout
are, for the same angular size.
end SG:
I disagree with 'It is thus sufficient to design the largest
configuration with 1 short baseline to provide the same sensitivity.'
begin SG:
- The 4 km array at 850 GHz has 2016 x 0.00016 = 0.32 "effective"
baselines for a 1" source. So just 1 baseline in the 14 km array will
provide a better result...
end SG:
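Spelling out the arithmetic: with 64 antennas there are 64*63/2 =
2016 baselines, and the 0.00016 above is the fraction of them that
still sees significant correlated flux from a 1" source in the 4 km
configuration at 850 GHz (from the memo's Table 9):

    n_ant = 64
    n_baselines = n_ant * (n_ant - 1) // 2    # 2016
    fraction_sensitive = 0.00016              # 1" source, 4 km, 850 GHz
    print(n_baselines, n_baselines * fraction_sensitive)   # 2016, ~0.32

So even a single short baseline added to the largest configuration
already does better on such a source.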
section 12.1: As I pointed out above, I don't think it should be
necessary to calibrate delays and focus for every observation.
begin SG:
- This is absolutely required, see before.
end SG:
Additionally, as pointed out above, I don't see the need to separate
the bandpass calibration into fine and large scale.
begin SG:
- If we had enough S/N at the observing frequency, there would be no
need. It is only one way to beat this S/N limitation.
end SG:
section 12.4: 'Monitoring of these secondary calibrators should be done
regularly to provide sufficient reference sources.' This is a huge
time commitment. To get 1% accuracy, you need to monitor them
every day, at all frequencies. If they are suggesting to monitor
every Q4 & Q7 quasar, this is *a lot* of time (1 Q4 quasar per 16
square degrees means several thousand of them, and even at 1 second
per observation, this is many hours!). They must be advocating some
subset of these quasars, but even if that subset is only 100
quasars (of order 1 per 100 square deg), then this is still a
serious time commitment.
begin SG:
- "secondary" calibrators do not mean the same thing for you and us.
For us,
- a PRIMARY (flux) calibrator is a source of ABSOLUTELY known flux
- a SECONDARY (flux) calibrator is a source whose flux is regularly
measured against one (or more) PRIMARY calibrator
- an AMPLITUDE calibrator is an intermediate object whose flux must
be determined against a PRIMARY or a SECONDARY calibrator at the
time of observations. The total number of SECONDARY calibrators
should be the smallest number that ensures one is always visible
at any given time within the elevation constraints.
end SG:
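A rough tally of the monitoring load raised in the comment above.
The surface density and the 1 s on-source time are the numbers
already quoted; the 20 s slew/setup overhead per source is an
assumption:

    sky_sq_deg = 41253                    # whole sky
    n_q4 = sky_sq_deg / 16                # ~2600 Q4 quasars, 1 per 16 deg^2
    per_source_s = 1 + 20                 # 1 s on source + assumed overhead
    print(n_q4, n_q4 * per_source_s / 3600.0)   # ~2600 sources, ~15 h/pass

    n_subset = 100                        # the smaller subset discussed above
    print(n_subset * per_source_s / 60.0) # ~35 min per pass, per band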
section 13: The recommendation to relax the submm spec to 3% is an
important one, and should be considered seriously. I guess the ASAC
should be queried on this.
begin SG:
- Done already.
end SG:
I agree with the directions for future research here, but would add
development of the dual-load calibration system.
begin SG:
- Agreed too.
end SG: