NRAO AIPS++ Users Group Meeting

Date: 2003-1-8 (Wednesday)
Time: 1300 MST
Video Hub: SOC-conf (other sites call in)
Rooms: SOC317/CV311/GB241/TUCN505

Agenda and Minutes: (send corrections & additions to smyers)

1. NAUG organizational issues (Steve)

   o Recap of December 19 meeting with Fred
     - ALMA top priority (miss no deadlines!)
     - GBT high priority (try to shield GBT effort from delays)
     - VLA Audit important to ALMA also (promised as input to PDR)
     - we can tailor ALMA development to cover much of what will be
       wanted for the VLA early on

   o December 2002 AIPS++ Newsletter available:
       http://aips2.nrao.edu/weekly/docs/newsletters/dec02/dec02.html
     - NAUG help might be wanted for future newsletters (volunteers?)
     - should the newsletters be regular (e.g. every 6 months or 1 year),
       or should there be a HotNewsletter page where material is posted
       each month or so, with each Newsletter then compiling those for
       sending out bundled?
     - My 2 cents: I will probably take on some of this as part of my
       AIPS++ PR and outreach duties as Project Scientist :-(  But I
       will be asking for help!

2. VLA Requirements, Audit and Evaluation (Steve and Debra)

   o Latest versions (Dec 18):
       http://www.aoc.nrao.edu/~dshepher/aips++/naug_vla_eval.ps
       http://www.aoc.nrao.edu/~dshepher/aips++/naug_vla_eval.tex
     - new versions will be generated tomorrow containing the latest
       submissions by the groups, so there will be an up-to-date version
       people can look at while grading

   o Due (grades and all) Jan 15, 2003
     - let's try to have a first (reasonably complete) draft by then

   o Note - if requirements do not make sense, or are unclear or
     inappropriate, you should propose new versions or delete the
     requirement in question.
     - the goal should be clear and quantifiable requirements that read
       as a punch-list for AIPS++ targeting

   o Current list of assignments for sections (as updated during meeting):
     1. Operations - Fomalont (benchmarking), Chandler (interface, etc.)
     2. Data Handling - Myers/Butler (format, import/export), Claussen
        (data examination)
     3. Cal & Edit - Taylor (gain calibration, polarimetry), Hibbard
        (spectral line), Frail (flagging), Chandler (atmosphere)
     4. Imaging - Shepherd, Fomalont, Butler, Brogan, Owen (wide-field)
     5. Analysis & Vis - Brogan, Brisken, Claussen
     6. Doc & Help - Hibbard, Claussen, Shepherd
     7. Special - Butler and Bastian (solar system, including the Sun),
        Brisken (astrometry), Claussen (simulation), (deleted pipeline
        part)

   o How to do the auditing (revised, included below) - please look
     this over!

   o Send comments on requirements, and grades, to Debra and Steve.
     For areas not in your "assigned" list, send them also to the leads
     for those sections. "Ad-hoc" comments (particularly from those not
     assigned sections or new to the NAUG) are welcome.

   o Re-use of ALMA audit (not much - it did not involve extensive
     testing):
       http://www.aoc.nrao.edu/~smyers/alma/offline-req/ALMAoffline_audit.pdf

   o Frazer's wide-field requirements:
       http://www.aoc.nrao.edu/~smyers/aips++/vla/fowen-20030107-widef.txt
     - will include as an Appendix to the document for now

   o Use Cases (brought up during meeting):
     - it is hard in the current document to see whether a given path
       (e.g. Frazer's wide-field case) is supported
     - there should be a set of paths or use-cases, written as lists of
       steps (which can be linked to requirements in the main document),
       each of which defines a mode
     - put these as appendices to the requirements (like in the ALMA
       SW-11 document, see
       http://www.alma.nrao.edu/development/computing/docs/joint/0011/ssranduc.pdf)
     - do the use cases after this audit
     - in the short term, it would be very useful to try to define the
       modes (Frazer estimates about 2 dozen) for the VLA. Ideally,
       these could be identified by keywords, e.g. SCAN: SINGLE, MOSAIC;
       FIELD: NARROW, WIDE; POLARIZATION: SINGLE, DUAL, FULL;
       SPECTRAL_MODE: CONTINUUM, SPECTROSCOPIC - note that these combos
       give 24, plus of course SPECIAL_MODES: PLANETARY, SOLAR, PULSAR
       (a sketch of enumerating these combinations follows below)
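   As an illustration of how the keyword scheme spans the estimated two
   dozen modes, here is a minimal Python sketch. The keyword names and
   values are taken from the bullet above; the function and data
   structure are hypothetical, not part of any AIPS++ tooling.

     # Enumerate the proposed VLA mode keyword combinations.
     from itertools import product

     MODE_AXES = {
         "SCAN": ["SINGLE", "MOSAIC"],
         "FIELD": ["NARROW", "WIDE"],
         "POLARIZATION": ["SINGLE", "DUAL", "FULL"],
         "SPECTRAL_MODE": ["CONTINUUM", "SPECTROSCOPIC"],
     }

     SPECIAL_MODES = ["PLANETARY", "SOLAR", "PULSAR"]

     def enumerate_modes():
         """Yield each keyword combination as a dict: 2*2*3*2 = 24 in all."""
         axes = list(MODE_AXES)
         for values in product(*(MODE_AXES[a] for a in axes)):
             yield dict(zip(axes, values))

     if __name__ == "__main__":
         modes = list(enumerate_modes())
         print(len(modes), "keyword combinations, plus",
               len(SPECIAL_MODES), "special modes")
         print(modes[0])  # {'SCAN': 'SINGLE', 'FIELD': 'NARROW', ...}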
3. GBT (Steve and Joe)

   o The "gbtssr" list is available on mailman - if interested, join at:
       http://listmgr.cv.nrao.edu/mailman/listinfo/gbtssr

   o "NAUG" GBT requirements/evaluation draft progress report (Steve)
     - a draft will be produced on Thu 9 Jan, and can be found at:
         http://www.aoc.nrao.edu/~smyers/aips++/gbt/naug_gbt_eval.tex
         http://www.aoc.nrao.edu/~smyers/aips++/gbt/naug_gbt_eval.pdf
       (a message will go out to NAUG and gbtssr when ready)

   o NAUG issues for GBT multi-X stuff (Joe):
     - handling multi-beam/IF/bank data
     - time (elevation, HA) dependent feed offsets
     - msconcat issues, use of multiple measurement sets
     - multi-feed imaging requirements, modes and algorithms
     - Joe will send out his document on this soon

   o Volunteers for GBT requirements and testing:
       From CV:  Hibbard, Hogg, Liszt, Turner and AIPS++ (Garwood)
       From GB:  Lockman, Langston, Mason, Minter and AIPS++ (Braatz)
       From TUC: Mangum
       From SOC: Dyer, Chandler
     - Dave commented that it will be important to incorporate as much
       of the 12-m expertise as we can (agreed!)

   o GBT modes should be incorporated as appendices; eventually work up
     the use cases

   o Will discuss these issues more next meeting, after people have had
     a chance to read the draft

4. AIPS++ news

   o We have moved to a sub-cycle "snapshot" stable release (every 1
     month to 6 weeks) to allow clearer targeting and planned
     functionality availability on a faster turnaround. See the webpage
     set up at:
       http://aips2.nrao.edu/daily/docs/reference/updates.html
     This currently consists of release v1.8 build 391 (2002-12-21).
     The update page now contains the list of:
     - completed (and incomplete) targets
     - defects resolved
     We will refer to this for tracking testing. It was suggested we
     build our testing around these builds. The next snapshot will also
     be linked so we can track progress.

   o Project office: we are in the process of putting together an
     AIPS++ project office like the one at GBT that Nicole put
     together, see:
       http://projectoffice.gb.nrao.edu
     (Note that this is still under construction.)

   o Benchmarking: Sanjay is writing up a memo on the VLA procedures
     and results.

5. Upcoming meetings and deadlines:

   o Jan 5-9, 2003  Seattle AAS, no AIPS++ booth this time (but if you
                    are going, spread the word about the positive changes!)
   o Jan 15         VLA Audit due
   o Jan 20         ALMA AMAC meeting (also cancelled)
   o Jan 22         NAUG meeting
   o Jan 24?        NRAO Internal DM Review (probably deferred to February)
   o Jan 25 - Feb 1 ALMA Offline/Pipeline meeting at ESO Garching
   o Feb 5          AIPS++ Tutorial Video Conf. (monthly)
   o Feb?           AIPS++ Technical Review
   o Mar 18-19      ALMA Computing PDR (Tucson)

The agendas for past NAUG meetings are archived at:
  http://www.aoc.nrao.edu/~smyers/aips++/agenda/
The minutes for past NAUG meetings are archived at:
  http://www.aoc.nrao.edu/~smyers/aips++/minutes/

-------------------------------------------------------------------------------

Revised VLA Auditing Plan, Jan 6 2003 (previous draft 8 Nov 2002)

Here is an updated sketch of what we envision for the VLA Auditing.

Note that the ALMA audit was based mostly on the documentation, not on
testing, since its main purpose was to identify missing functionality,
not to identify buggy tools. However, looking it over might be useful.
It can be found at:
  http://www.aoc.nrao.edu/~smyers/alma/offline-req/ALMAoffline_audit.pdf

For the latest scheme, see the introductory section of the draft at:
  http://www.aoc.nrao.edu/~dshepher/aips++/naug_vla_eval.ps
with
  http://www.aoc.nrao.edu/~dshepher/aips++/naug_vla_eval.tex
containing the LaTeX source. Some of this is summarized below.

1. It is best to work in a group with the others assigned to your
   section, and feel free to draft others in the observatory to help.
   Or at least pass comments back and forth by email. I found it very
   helpful to have overlap with other auditors (especially Crystal)
   when doing the ALMA audit, to catch stuff I missed.

2. The group should make a pass through the requirements drafted for
   the given section, and make changes, additions or deletions. I
   marginally prefer keeping (or only slightly modifying) requirements
   that fit, rather than substantially rewriting them, as this makes it
   clearer which requirements are in common with the other documents
   (ALMA, EVLA, GBT). Delete ones that do not make sense, and add ones
   that are missing (and we should propagate good changes to the EVLA
   document as the EVLA SSR writes it).

3. Each requirement is graded on:
     Functionality = is the feature available, does it do what is needed
     Usability     = ease of use, speed, efficiency, look and feel
     Documentation = clarity, completeness, useful examples
   The grading scheme is now:
     A = adequate
     E = adequate, but enhancements desired
     I = inadequate, needs further development or defect repair
     N = missing, not available in AIPS++
     U = unable to grade (missing info, etc.) or category not
         applicable (e.g. Functionality for a documentation requirement)
   For items graded I or N, a severity level should be indicated, e.g.
   I/low, where the severities are:
     low  = low priority feature and/or minor defects
     med  = medium priority feature and/or moderate defects
     high = critical feature and/or severe defects
   These severity levels are needed to prioritize the AIPS++ targets.

4. For each requirement and subrequirement, identify whether its
   functionality is fulfilled at Level 1 or Level 2 or both, as
   described in the Intro. Preferably, a requirement should be only a
   Level 1 or a Level 2 requirement, for ease of evaluation; but if a
   given requirement has both Level 1 and Level 2 features, then you
   should explain which features are at which level in the comments.
   Grades for Functionality should have the Level attached, e.g. A1 or
   I2/low. Usability and Documentation grades can have Levels attached
   if needed; otherwise they pertain to all levels. (A sketch of
   parsing such grade strings follows below.)
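   To make the compound grade notation concrete, here is a minimal
   Python sketch of parsing strings like "A1", "I2/low" or "U" into
   their grade, Level, and severity parts. This is purely illustrative;
   the function name and behavior are assumptions, not part of the
   audit tooling.

     # Parse compound audit grades: GRADE[LEVEL][/SEVERITY].
     import re

     GRADE_RE = re.compile(r"^([AEINU])([12])?(?:/(low|med|high))?$")

     def parse_grade(text):
         """Return (grade, level, severity) from a string like 'I2/low'.

         level is None when the grade pertains to all levels; severity
         should only be present for I or N grades.
         """
         m = GRADE_RE.match(text.strip())
         if not m:
             raise ValueError(f"unrecognized grade string: {text!r}")
         grade, level, severity = m.groups()
         if severity and grade not in ("I", "N"):
             raise ValueError(f"severity only applies to I/N: {text!r}")
         return grade, int(level) if level else None, severity

     # Examples from the text above:
     assert parse_grade("A1") == ("A", 1, None)
     assert parse_grade("I2/low") == ("I", 2, "low")
     assert parse_grade("U") == ("U", None, None)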
5. Identify the AIPS++ tools or functions that pertain to the given
   requirement. Note these in comments (there are macros at the top of
   the .tex file).

6. Look the documentation over and assign a grade to Documentation
   (refine it as need be throughout the testing as well).

7. For a dataset that is appropriate to the requirement, test the
   mode(s) of the tool that apply. This will require that we have a
   suite of appropriate test data handy; we will have to communicate
   among ourselves on this. There are some AIPS++ assays that might
   help in certain areas - for example, Kumar's improved imager assay
   generates simulated data with point sources on the fly and then
   tests them. We should check into this. Note the performance of the
   tool, and assign grades for Functionality (at least at the relevant
   Level) and for Usability.

8. For problem areas, try to isolate the cause of the inadequacy. In
   particular, if there is a fundamental problem in one area, do not
   grade a whole bunch of other areas I because of it - just grade the
   specific requirement that points this out as I, and note the
   dependencies elsewhere in comments. For example, if the GUI speed is
   slow across the board, grade this in the appropriate requirement in
   the User Interface section; do not grade all other tools as I
   because of it. The main purpose of this exercise is to identify what
   needs to be fixed in order to fulfill VLA user requirements (and
   secondarily to see where we are for general synthesis reduction, for
   ALMA and EVLA). Thus, it is important that AIPS++ be able to map
   requirement failures to work to be done (a sketch of such a mapping
   appears at the end of this plan). Note that this may require
   rewording of the requirements to make them clearer and to make
   isolation of problems easier. I cannot stress enough the need for
   the requirements themselves to be clear and precise!

9. If you are testing on your own, make sure different people in the
   group overlap with things you've tested for items you deem
   inadequate. Or just test them together to get agreement.

10. Iterate on improving unclear, inappropriate, or missing
    requirements during the testing.

In a sane procedure, we would set the requirements down first, then
audit (like in ALMA), then test. Time pressure makes us collapse the
process.

One could think of a model where each person takes a dataset (L-band
spectral line, to use John's example) and pushes it through the narrow
path, adding "grades" to the requirements hit along the way. I'm not
sure that I see how this would work in practice. I could see taking
that dataset, getting it to the point where imaging can be done, and
then trying each of the relevant Imaging requirements on it. Maybe it
makes sense to have the same people testing Cal & Edit, Imaging, and
Analysis, each taking a different dataset and pushing it along the
"narrow" path through the 3 sections. Would people prefer this?

Note that steps 1-6 can be done without the data to some extent. But we
do need to identify the fiducial data. We should use data that has
already been checked into AIPS++ (and used in its assays), and add
whatever is missing. Any ideas?
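As an illustration of how the severity levels from step 3 could drive
the failure-to-work mapping of step 8, here is a minimal, hypothetical
Python sketch of collecting graded requirements and sorting the I/N
items into a prioritized punch-list. The requirement IDs, grades, and
tool names below are made up for illustration, not actual audit
results.

  # Turn audit grades into a prioritized punch-list (steps 3 and 8).
  SEVERITY_RANK = {"high": 0, "med": 1, "low": 2}

  # (requirement id, functionality grade, severity, related AIPS++ tool)
  graded = [
      ("IMG-3.1", "A1", None, "imager"),
      ("CAL-2.4", "I1", "high", "calibrater"),
      ("DOC-1.2", "N", "med", None),
      ("VIS-4.7", "I2", "low", "viewer"),
  ]

  def punch_list(entries):
      """Keep only I/N grades and sort them by severity, high first."""
      failures = [e for e in entries if e[1][0] in ("I", "N")]
      return sorted(failures, key=lambda e: SEVERITY_RANK[e[2]])

  for req, grade, sev, tool in punch_list(graded):
      print(f"{sev:>4}: {req} ({grade}) -> {tool or 'documentation'}")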