NAUG Testers Meeting

Date: 2002-11-20 (Wednesday)
Time: 1400 MST
Video Hub: SOC
Audio Hub: CV (for Tucson, call 434-296-7082)

Agenda:

1. NAUG organizational issues (Steve)
   o additions to NAUG: Holdaway, Mangum (Tucson) - welcome!
   o AIPS++ night out Thursday!
   o rebranding - should AIPS++ be renamed?

2. GBT (Steve)
   o Commissioning Review recap
   o "NAUG" GBT requirements/evaluation draft (I will produce draft next week)
   o Volunteers for GBT reqs and testing: Hibbard, Dyer, Mangum?
     From CV: Hogg, Liszt and AIPS++ (Garwood)
     From GB: Lockman, Minter?, Langston?, Mason? and AIPS++ (Braatz)
   o Hold GBT-centric NAUG meetings every 4th meeting (1 per 2 months)

3. VLA Requirements, Audit and Evaluation (Debra or Steve)
   o Due Jan 1, 2003
   o Current list of assignments for sections:
     1. Operations - Fomalont (benchmarking), Chandler (interface, etc.)
     2. Data Handling - Myers/Butler (format, in/ex), Claussen (data examination)
     3. Cal & Edit - Taylor (gain calibration, polarimetry), Hibbard (spectral
        line), Frail (flagging), Chandler (atmosphere)
     4. Imaging - Shepherd, Fomalont, Butler, Brogan
     5. Analysis & Vis - Brogan, Brisken, Claussen
     6. Doc & Help - Hibbard, Claussen, Shepherd
     7. Special - Butler (solar sys), Brisken (astrometry), Claussen (simulation),
        Ulvestad (pipeline)
   o How to do the auditing (see message of 08 Nov, included below)
   o Re-use of ALMA audit (not much - it did not involve extensive testing)

4. AIPS++ Development Plan and progress report (Joe, Steve)
   o will have to descope given loss of Kemball and Sessoms (but Sanjay
     transferred), plus AIPS++ and DM reviews (>6 man-months?)
   o focus on VLA completeness, robustness?
   o try to finalize this next week

5. Calling Fortran from glish (Joe)
   o cool demo!

6. AIPS++ Tutorial Series (Joe)
   o proposal for the next 6 lectures (first lecture 4 Dec):
     3 x VLA, 1 x GBT, 1 x Engineering, 1 x other (pipeline?)
   o which parts do we think are worth showing, e.g. how robust should things
     be before we advertise them?

7. AIPS++ news (from last week, no discussion)
   o Public talks in system followup
     - daily/docs/presentations/public
     - Link from Main page to tour
   o Completed targets:
     - new imager test in assay - very thorough
     - msconcat accepts input frequency and position tolerance
     - uv-plane continuum subtraction - coming very soon
     - fields function in misc.g (gives field names in glish records in a more
       readable format)
     - listed version numbers within a build have been made more sensible
   o Defects (most of these were not bugs or were unreproducible):
     - AOCso04074 when using a circular mask image, mem seemed to deconvolve
     - AOCso03849 weight function does not work as shown in documentation
     - AOCso03793 link to imager is wrong
     - AOCso03169 msselect argument in setdata doesn't appear to work
     - AOCso00159 Names should not have embedded underscores
   o Updated lists (Joe)?

8. Upcoming meetings:
   o Wed Dec 4 - Tutorial series (Joe)
   o Wed Dec 11 - NAUG meeting (back on the normal 2nd/4th Wednesday schedule)
   o Jan 5-9, 2003 - Seattle AAS; no AIPS++ booth this time (but if you are
     going, spread the word about the positive changes!)
   o Sometime (early?) January - NRAO Internal DM Review
   o Sometime late January (early Feb?) - AIPS++ Technical Review

-------------------------------------------------------------------------------

Date: Fri, 8 Nov 2002 16:27:53 -0700 (MST)
From: Steven T. Myers
To: aips2-naug@aoc.nrao.edu
Subject: Clarification of the VLA Audit and Evaluation process

Since some of you asked, here is what I envisioned for the auditing process.
Note that the ALMA audit was based mostly on the documentation, not on testing,
since its main purpose was to identify missing functionality, not to identify
buggy tools.

1. It is best to work in a group with the others assigned to your section, and
   feel free to draft others in the observatory to help - or at least pass
   comments back and forth by email. I found it very helpful to have overlap
   with other auditors (especially Crystal) when doing the ALMA audit, to catch
   things I had missed.

2. The group should make a pass through the requirements drafted for the given
   section and make changes, additions, or deletions. I marginally prefer to
   keep, or only slightly modify, requirements that fit rather than
   substantially rewrite them, as this makes it clearer which requirements are
   in common with the other documents (ALMA, EVLA, GBT). Delete ones that don't
   make sense, and add ones that are missing (and we should propagate good
   changes to the EVLA doc as we, or at least the EVLA SSR, write that).

3. For each requirement and subrequirement, identify whether its functionality
   is fulfilled at Level 1, Level 2, or both, as described in the Intro. These
   are currently blank.

4. Identify the AIPS++ tools or functions that pertain to the given
   requirement. Note these in comments (there are macros at the top of the
   .tex file).

5. Look over the documentation and assign a grade for Documentation (refine it
   as needed throughout the testing as well).

6. For a dataset appropriate to the requirement, test the mode(s) of the tool
   that apply. This will require that we have a suite of appropriate test data
   handy; we will have to communicate among ourselves on this. There are some
   AIPS++ assays that might help in certain areas - for example, Kumar's
   improved imager assay generates simulated data with point sources on the fly
   and then tests them (a toy sketch of this simulate-then-verify pattern is
   appended at the end of this message). We should check into this. Note the
   performance of the tool, and assign grades for Functionality (at least at
   the relevant Level) and for Usability.

7. If you are testing on your own, make sure different people in the group
   overlap with the things you have tested, at least for items you deem
   inadequate - or just test them together to reach agreement.

8. Iterate on improving unclear, inappropriate, or missing requirements during
   the testing.

In a sane procedure, we would set the requirements down first, then audit (as
in ALMA), then test. Time pressure makes us collapse the process.

One could imagine a model where each person takes a dataset (L-band spectral
line, to use John's example) and pushes it through the narrow path, adding
"grades" to the requirements hit along the way. I'm not sure I see how this
would work in practice. I could see taking that dataset, getting it to the
point where imaging can be done, and then trying each of the relevant Imaging
requirements on it. Maybe it makes sense to have the same people test
Cal+Edit, Imaging, and Analysis, with each taking a different data set and
pushing it along the "narrow" path through the 3 sections. Would people prefer
this?

Note that steps 1-5 can be done without the data, to some extent. But we do
need to identify the fiducial data. We should use data that has already been
checked into AIPS++ (and used in their assays), and add whatever is missing.
Any ideas?
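
For concreteness, here is a rough sketch of the simulate-then-verify pattern
that assay-style tests follow: generate data with known source properties on
the fly, run the tool, and check the recovered values against a tolerance.
It is written in Python rather than glish purely for illustration; the
function names, flux, noise level, and tolerance are all invented and this is
not the actual AIPS++ assay code.

    # Illustrative only: the real AIPS++ assays are glish scripts driving the
    # actual tools; all names and numbers below are made up.
    import random

    def simulate_amplitudes(true_flux, n=1000, noise_rms=0.05):
        # Fake "observed" visibility amplitudes for a single point source at
        # the phase center, where every amplitude should equal the source flux.
        return [true_flux + random.gauss(0.0, noise_rms) for _ in range(n)]

    def recovered_flux(amplitudes):
        # Stand-in for the tool under test: estimate the flux by averaging.
        return sum(amplitudes) / len(amplitudes)

    def assay_point_source(true_flux=1.0, tolerance=0.01):
        # Simulate data on the fly, run the "tool", and check the result.
        data = simulate_amplitudes(true_flux)
        estimate = recovered_flux(data)
        error = abs(estimate - true_flux)
        passed = error < tolerance
        print("true=%.3f  recovered=%.3f  |err|=%.4f  -> %s (tolerance %.3f)"
              % (true_flux, estimate, error,
                 "PASS" if passed else "FAIL", tolerance))
        return passed

    if __name__ == "__main__":
        random.seed(42)
        assay_point_source()

In a real test, recovered_flux() would be replaced by a call to the tool
under evaluation (e.g. imaging the simulated data and fitting the source),
and the PASS/FAIL outcome would feed into the Functionality grade for the
corresponding requirement.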