NRAO AIPS++ Users Group Meeting

Date: 2003-1-8 (Wednesday)
Time: 1300 MST
Video Hub: SOC-conf
Rooms: SOC317/CV311/GB241/TUCN505

Agenda:

1. NAUG organizational issues (Steve)
   o Recap of the December 19 meeting with Fred
   o December 2002 AIPS++ Newsletter available:
     http://aips2.nrao.edu/weekly/docs/newsletters/dec02/dec02.html

2. VLA Requirements, Audit and Evaluation (Steve and Debra)
   o Latest versions:
     http://www.aoc.nrao.edu/~dshepher/aips++/naug_vla_eval.ps
     http://www.aoc.nrao.edu/~dshepher/aips++/naug_vla_eval.tex
   o Due (grades and all) Jan 15, 2003
   o Note: if a requirement does not make sense, or is unclear or
     inappropriate, propose a new version or delete the requirement in
     question.
   o Current assignments for sections:
     1. Operations - Fomalont (benchmarking), Chandler (interface, etc.)
     2. Data Handling - Myers/Butler (format, import/export),
        Claussen (data examination)
     3. Cal & Edit - Taylor (gain calibration, polarimetry),
        Hibbard (spectral line), Frail (flagging), Chandler (atmosphere)
     4. Imaging - Shepherd, Fomalont, Butler, Brogan
     5. Analysis & Vis - Brogan, Brisken, Claussen
     6. Doc & Help - Hibbard, Claussen, Shepherd
     7. Special - Butler (solar system), Brisken (astrometry),
        Claussen (simulation), Ulvestad (pipeline)
   o How to do the auditing (revised plan included below)
   o Send comments on requirements, and grades, to Debra and Steve.
     For areas not on your "assigned" list, also send them to the leads
     for those sections. "Ad-hoc" comments (particularly from those not
     assigned sections or new to the NAUG) are welcome.
   o Re-use of the ALMA audit (not much - it did not involve extensive
     testing):
     http://www.aoc.nrao.edu/~smyers/alma/offline-req/ALMAoffline_audit.pdf
   o Frazer's wide-field requirements:
     http://www.aoc.nrao.edu/~smyers/aips++/vla/fowen-20030107-widef.txt

3. GBT (Steve and Joe)
   o "gbtssr" list is available on mailman
   o "NAUG" GBT requirements/evaluation draft progress report (Steve)
   o NAUG issues:
     - handling multi-beam/IF/bank data
     - time (elevation, HA) dependent feed offsets
     - msconcat issues, use of multiple MeasurementSets
     - multi-feed imaging requirements, modes and algorithms
   o Volunteers for GBT requirements and testing: Hibbard, Dyer, Mangum
     From CV: Hogg, Liszt, Turner, and AIPS++ (Garwood)
     From GB: Lockman, Langston, Mason, Minter, and AIPS++ (Braatz)

4. AIPS++ news
   o New stable "snapshot" release v1.8, build 391 (20021221)
   o Completed targets
   o Defects resolved
     See: http://aips2.nrao.edu/daily/docs/reference/updates.html

5. Upcoming meetings and deadlines:
   o Jan 5-9, 2003 - Seattle AAS; no AIPS++ booth this time (but if you
     are going, spread the word about the positive changes!)
   o Jan 15 - VLA Audit due
   o Jan 22 - NAUG meeting
   o Jan 24 - NRAO Internal DM Review
   o Jan 25 - Feb 1 - ALMA Offline/Pipeline meeting at ESO Garching
   o Feb 5 - AIPS++ Tutorial Video Conference (monthly)
   o Feb? - AIPS++ Technical Review
   o Mar/Apr - ALMA Computing PDR

-------------------------------------------------------------------------------

Revised VLA Auditing Plan, Jan 6 2003 (previous draft 8 Nov 2002)

Here is an updated sketch of what we envision for the VLA auditing. Note
that the ALMA audit was based mostly on the documentation, not testing,
since its main purpose was to identify missing functionality, not to
identify buggy tools. However, looking it over might be useful.
It can be found at:
  http://www.aoc.nrao.edu/~smyers/alma/offline-req/ALMAoffline_audit.pdf

For the latest scheme, look at the introductory section of the draft at:
  http://www.aoc.nrao.edu/~dshepher/aips++/naug_vla_eval.ps
with
  http://www.aoc.nrao.edu/~dshepher/aips++/naug_vla_eval.tex
containing the LaTeX source. Some of this is summarized below.

1. It is best to work in a group with the others assigned to your section,
   and feel free to draft others in the observatory to help - or at least
   pass comments back and forth by email. I found it very helpful to have
   overlap with other auditors (especially Crystal) when doing the ALMA
   audit, to catch things I had missed.

2. The group should make a pass through the requirements drafted for the
   given section and make changes, additions, or deletions. I marginally
   prefer to keep, or only slightly modify, requirements that fit rather
   than substantially rewrite them, as that makes it clearer which
   requirements are in common with the other documents (ALMA, EVLA, GBT).
   Delete ones that do not make sense, and add ones that are missing (we
   should propagate good changes to the EVLA document as the EVLA SSR
   writes it).

3. Each requirement is graded on:

     Functionality  = is the feature available, and does it do what is needed
     Usability      = ease of use, speed, efficiency, look and feel
     Documentation  = clarity, completeness, useful examples

   The grading scheme is now:

     A = adequate
     E = adequate, but enhancements desired
     I = inadequate, needs further development or defect repair
     N = missing, not available in AIPS++
     U = unable to grade (missing info, etc.) or category not applicable
         (e.g. Functionality for a documentation requirement)

   For items graded I or N, a severity level should be indicated, e.g.
   I/low, where the severities are:

     low  = low priority feature and/or minor defects
     med  = medium priority feature and/or moderate defects
     high = critical feature and/or severe defects

   These severity levels are needed to prioritize the AIPS++ targets.

4. For each requirement and subrequirement, identify whether its
   functionality is fulfilled at Level 1 or Level 2 or both, as described
   in the Introduction. Preferably, a requirement should be only a Level 1
   or a Level 2 requirement for ease of evaluation, but if a given
   requirement has both Level 1 and Level 2 aspects, explain which
   features are at which level in the comments. Grades for Functionality
   should have the Level attached, e.g. A1 or I2/low. Usability and
   Documentation grades can have Levels attached if needed; otherwise
   they pertain to all levels.

5. Identify the AIPS++ tools or functions that pertain to the given
   requirement. Note these in the comments (there are macros for this at
   the top of the .tex file).

6. Look over the documentation and assign a grade for Documentation
   (refining it as needed throughout the testing).

7. For a dataset that is appropriate to the requirement, test the mode(s)
   of the tool that apply. This will require that we have a suite of
   appropriate test data handy; we will have to communicate among
   ourselves on this. There are some AIPS++ assays that might help in
   certain areas - for example, Kumar's improved imager assay generates
   simulated data with point sources on the fly and then runs tests on
   them. We should check into this. Note the performance of the tool, and
   assign grades for Functionality (at the relevant Level at least) and
   for Usability.

8. For problem areas, try to isolate the cause of the inadequacy.
   In particular, if there is a fundamental problem in one area, do not
   grade a whole set of other areas I because of it - just grade the
   specific requirement that exposes the problem as I, and note the
   dependencies elsewhere in the comments. For example, if GUI speed is
   slow across the board, grade this in the appropriate requirement in
   the User Interface section; do not grade all the other tools I because
   of it. The main purpose of this exercise is to identify what needs to
   be fixed in order to fulfill the VLA user requirements (and secondarily
   to see where we stand for general synthesis reduction, for ALMA and
   EVLA). Thus, it is important that AIPS++ be able to map requirement
   failures to work to be done. Note that this may require rewording of
   the requirements to make them clearer and to make isolation of
   problems easier. I cannot stress enough the need for the requirements
   themselves to be clear and precise!

9. If you are testing on your own, make sure different people in the
   group overlap with the things you have tested for items you deem
   inadequate - or just test them together to reach agreement.

10. Iterate on improving unclear, inappropriate, or missing requirements
    during the testing.

In a sane procedure we would set the requirements down first, then audit
(as in ALMA), then test. Time pressure makes us collapse the process.

One could imagine a model where each person takes a dataset (L-band
spectral line, to use John's example), pushes it through the narrow path,
and adds "grades" to the requirements hit along the way. I am not sure
that I see how this would work in practice. I could see taking that
dataset, getting it to the point where imaging can be done, and then
trying each of the relevant Imaging requirements on it. Maybe it makes
sense to have the same people testing Cal & Edit, Imaging, and Analysis,
with each taking a different dataset and pushing it along the "narrow"
path through the three sections (a rough sketch of such a run is appended
below). Would people prefer this?

Note that steps 1-6 can be done without the data to some extent. But we
do need to identify the fiducial data. We should use data that has
already been checked into AIPS++ (and used in its assays), and add
whatever is missing. Any ideas?
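To make the "narrow path" idea above a little more concrete, here is a
minimal Glish sketch of pushing a single, already flagged and calibrated
MeasurementSet through one imaging requirement. The dataset name, field
and spectral-window selection, image size, and cell size are all
placeholders (not part of the plan); flagging and calibration would be
done beforehand with the flagger and calibrater tools, and the actual
values should come from whatever fiducial data we settle on.

  # Minimal sketch only - file names and parameter values are placeholders.
  include 'imager.g'

  myim := imager('fiducial_lband.ms')      # hypothetical fiducial dataset

  # Select the spectral-line data to be imaged (illustrative values).
  myim.setdata(mode='channel', nchan=63, start=1, step=1,
               fieldid=1, spwid=1)

  # Define the image geometry.
  myim.setimage(nx=512, ny=512, cellx='2arcsec', celly='2arcsec',
                stokes='I', mode='channel', nchan=63, start=1, step=1)

  myim.weight('natural')

  # Deconvolve, inspect the result, and note grades for the relevant
  # Imaging requirements (Functionality at the appropriate Level, and
  # Usability) as you go.
  myim.clean(algorithm='clark', niter=1000, gain=0.1,
             model='fiducial_lband.model',
             image='fiducial_lband.image',
             residual='fiducial_lband.residual')

  myim.done()

The same pattern would repeat for each Imaging requirement hit along the
path, with the grades and comments recorded in naug_vla_eval.tex as they
accumulate.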