Basic Use Case for EVLA Observing, v3

Bryan Butler


This use case outlines the way that I foresee EVLA observing proceeding, 
from the very beginning (proposal preparation and submission) up to,
but not including, final data processing (that is a separate use 
case, to come later).  This use case will serve as the basis for all 
further use cases, which will consist of deviations from this one.
The format of this use case is simply a list of steps which might occur 
in the normal course of setting up and executing an observation with 
the EVLA.  It is broken up into several major subsections, but there 
is still information flow across these (in some cases arbitrary) 
divisions.  The use case is presented from the point of view of the 
astronomer, so things which impact operations but should be transparent 
to the astronomer are not mentioned (for instance, exactly how 
proposals are handled by the observatory).  The use case presented here 
is for continuum observing at an intermediate frequency (X- or C-band), 
so it is not necessary to do bandpass or polarization calibration (in 
detail, anyway).

  I. Proposal Preparation and Submission
     A. user logs into the "Proposal Preparation Tool", which has a GUI
        (web-based or downloadable or both)
        1. this requires a unique username and password, since users 
           don't want their proposals globally accessible
     B. user does a search on past proposals and observations for the 
        source of interest (presumably, this hasn't already been done!)
     C. user enters "cover page" information for a proposal, including:
        1. title
        2. author information
        3. abstract
        4. necessary resources, including configuration, frontend(s),
           backend (correlator) setup
        5. source information: position, flux density, size
        6. necessary observing conditions
        7. time needed
           a) in order to figure out how much time is needed, the user 
              may do one of:
              (1) specify an rms, from which the tool automatically 
                  calculates the necessary time
              (2) specify a dynamic range, then given the source flux 
                  density the tool figures out the needed rms and then 
                  the time
     D. user "saves" the information
     E. some time passes
     F. user comes back and edits some of the fields in I.C.1-7.
     G. user transmits this "proposal in preparation" to another user
        (this can be effected in any number of ways, but *some* method
        of passing proposals around as they are being worked on needs to
        be provided [or password access granted somehow]).
     H. more time passes
     I. this new user uploads a scientific justification to be attached
        to the proposal (allowable length and format(s) to be determined
        by NRAO management)
     J. user "submits" the proposal to be considered by the TAC
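
The time-needed calculation in I.C.7.a can be sketched with a simplified 
point-source radiometer equation.  This is only an illustration: the 
function names and the default SEFD, antenna count, bandwidth, and 
efficiency values below are assumptions for the sketch, not EVLA 
specifications.

```python
import math

def time_for_rms(target_rms_jy, sefd_jy=300.0, n_ant=27,
                 bandwidth_hz=2.0e9, efficiency=0.9):
    """Integration time (seconds) to reach a target image rms, from a
    simplified point-source radiometer equation for an N-antenna array:
        rms = SEFD / (eff * sqrt(N * (N - 1) * bandwidth * time))
    All sensitivity parameters here are illustrative placeholders."""
    return (sefd_jy / (efficiency * target_rms_jy)) ** 2 \
           / (n_ant * (n_ant - 1) * bandwidth_hz)

def time_for_dynamic_range(dynamic_range, source_flux_jy, **kwargs):
    """Convert a dynamic-range goal into an rms goal (flux / DR), then
    reuse the rms-based time estimate, as in I.C.7.a.(2)."""
    return time_for_rms(source_flux_jy / dynamic_range, **kwargs)
```

As I.C.7.a.(2) describes, the dynamic range specification reduces to an 
rms specification once the source flux density is known.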

 II. Observation Preparation
     A. after the proposal is granted observing time, the user logs in 
        to the "Observation Preparation Tool" to specify in detail how 
        the observations are to be set up.  This tool has a GUI
        (web-based or downloadable or both)
        1. this requires a unique username and password, since users 
           don't want others setting up their observing
        2. any user listed as an author on the proposal should be 
           granted access to the setup information for this particular 
           observation
     B. the basic information entered into the proposal preparation tool
        should be passed on to the observation preparation tool, and 
        used to set initial values of all quantities possible (this is 
        transparent to the user, but is so vitally important that I list
        it here)
     C. the user then specifies the following:
        1. configuration
        2. a source list, containing all of the sources of interest
           a) the position of each source
           b) the order in which the sources are to be observed
           c) constraints on observing each source (time, elevation, 
              interrelations between sources, etc...)
           d) a hardware setup for each source (including frontend and 
              backend details), including integration time, correlator 
              setup, etc...
           e) associated flux density scale calibrators for each 
              source (might be the same for many sources)
           f) associated calibrators for complex gain as a function 
              of time (often called the "phase calibrator") for each 
              source
           g) a sequence of observations - including the cycle time 
              between source and calibrators, and when to observe the 
              flux density scale calibrators.  Note that sensible 
              defaults should be provided, of course.
           h) pipeline reduction parameters
              (1) Quick Look Pipeline
              (2) Default Archive Image Pipeline (if there are any
                  user-selectable parameters for this pipeline)
        3. for each calibrator, the user should be able to select it:
           a) by specifying it directly
           b) by using a "calibrator selection subtool" to do so,
              which should have a GUI if desired by the user
           c) by allowing the tool to pick the "best" one
        4. the minimum allowable length of a single contiguous 
           observing session
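
The "pick the best one" option in II.C.3.c implies some selection 
metric.  A minimal sketch, assuming a hypothetical catalog of (name, 
position, flux density) entries and using angular separation as the 
sole criterion (the flux cutoff and "nearest is best" rule are 
placeholder assumptions, not the actual selection algorithm):

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation between two sky positions, all in degrees."""
    r = math.radians
    cos_sep = (math.sin(r(dec1)) * math.sin(r(dec2)) +
               math.cos(r(dec1)) * math.cos(r(dec2)) * math.cos(r(ra1 - ra2)))
    # clamp against roundoff before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def best_calibrator(src_ra, src_dec, catalog, min_flux_jy=0.3):
    """Return the name of the nearest sufficiently bright calibrator.
    catalog: list of (name, ra_deg, dec_deg, flux_jy) tuples (hypothetical)."""
    candidates = [(angular_separation_deg(src_ra, src_dec, ra, dec), name)
                  for name, ra, dec, flux in catalog if flux >= min_flux_jy]
    return min(candidates)[1] if candidates else None
```

A real subtool would fold in calibrator quality codes per band and 
configuration, but the interface - target position in, suggested 
calibrator out - would look much the same.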

III. During Observing
     A. the user should be notified when any SB is close to getting 
        scheduled on the queue
     B. a web page should be set up to show progress of the queue, and
        the actual observing (this only needs to occur when the first
        bit of observing is actually done for a project)
     C. to access that web page, the user must enter a username and
        password which has access to that page (one of the co-authors, 
        or somebody else granted specific access by the observatory or
        one of the co-authors - see II.A.2 above)
     D. the user is notified when an SB is actually queued up for 
        execution
     E. when observing begins, the user opens up an application which 
        allows for tracking of conditions during the observation - the
        "Observing Status Tool" (note that we have called this the
        "What's Up Screen" in the past, which I'd prefer to get away 
        from)
        1. for general access, this tool should not require a username 
           and password.  However, if the user wants to have some 
           control of the observations ("manual" or "interactive" 
           mode), or the ability to interact with the Quick Look 
           Pipeline (see results from automatic executions of it or run
           it on command), then a username and password which are 
           appropriate to the observations currently underway must be 
           entered.
        2. The user will want to see what the current observing 
           parameters are, including date/time, source, sky position 
           (both RA/DEC and az/el), band/frequency, bandwidth, 
           correlator setup, etc... - these are the quantities displayed
           on the default AOI screen for the current VLA.  The user will
           also want to display what the current meteorological 
           conditions are, including temperature, dew point, wind speed,
           wind direction, and pressure.  Additionally, the time 
           histories of these quantities should be displayed, for a 
           specified past interval (probably OK to default to 1 day).  
           Quantities supplied by TelCal should also be displayed here, 
           including derived phases and amplitudes per antenna, source 
           flux densities, structure function, etc...  Time histories of
           these quantities must be accessible.  Current visibility 
           quantities should be displayed if desired, including 
           amplitude and phase and their time history.  In the continuum
           case, this should probably be averaged over the central 75% 
           or so of the passband.  API and WVR quantities should be 
           displayed, if available, as should any other future 
           ancillary device measurements of this type (RFI or GPS are 
           other possibilities).  It should be possible to view the 
           results from the Quick Look Pipeline, and to request that it
           be run (with a check to see if it is feasible).
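
The averaging over the central 75% or so of the passband mentioned in 
III.E.2 could look like the following sketch, assuming a per-channel 
quantity (e.g. visibility amplitude) is already in hand; the function 
name and interface are hypothetical:

```python
def central_band_average(channel_values, fraction=0.75):
    """Average a per-channel quantity over the central `fraction` of
    the band, discarding the band edges symmetrically."""
    n = len(channel_values)
    keep = max(1, round(n * fraction))
    start = (n - keep) // 2
    window = channel_values[start:start + keep]
    return sum(window) / len(window)
```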

 IV. After Observing (Data Access)
     A. after the observations are completed, the user logs in to the 
        "Archive Tool" to access the data.  This tool has a GUI 
        (web-based or downloadable or both)
        1. this requires a unique username and password, since users 
           don't want others having access to their data (note that this
           is not necessary after the "proprietary period" has elapsed).
        2. any user listed as an author on the proposal should be 
           granted access to the data for this particular observation
     B. the following information is accessible from the archive tool:
        1. results from the Default Archive Image Pipeline
        2. actual visibility data
        3. ancillary data, including operator log, TelCal results, and
           ancillary device data (API, WVR, RFI, GPS, etc...).
        4. all other M&C data acquired during the observation