User Interface Working Group Topics and Questions
-------------------------------------------------
STM v1.0 2006-03-27 1400MST
    v2.0 2006-03-28 1000MST
    v2.1 2006-03-28 1730MST
    v3.0 2006-03-30 0930MST
    v3.1 2006-03-30 1730MST
    v3.2 2006-03-30 2200MST
    v4.0 2006-03-31 1700MST

Materials available at: http://www.aoc.nrao.edu/~smyers/aips++/uiwg/

===============================================================================
Summary for Day 1 2006-03-27

1. Agreed: Documentation - inline "info", "help", and "explain" are the
   highest priority, along with an accurate online User Reference Manual.

2. There were several proposals for the parameter-setting look and feel
   (e.g. IRAF epar, sticky variables). The group wanted to see a demo of
   IRAF / Pyraf.

3. Agreed: Some rationalization of the toolkit (parameter naming,
   consolidation of "set" methods) should be done as soon as possible, as
   this has implications for the tasks.

4. Agreed: A good method for parameter save/get (tput/tget) is essential
   within the command interface.

5. Joe wanted:
   a) a preliminary high-priority task list + parameter list + defaults
   b) us to look over the current "tasks" (clean, feather, invert, mosaic)

===============================================================================
Summary for Day 2 2006-03-28

1. There was considerable discussion of just what a "Task" is and whether
   the differentiation between Tasks and Procedures is warranted. I propose
   that the definitions be amended to:

   "Task" - a series of operations, beyond those covered by use of the
   toolkit, carried out in the package through the standard tool interface
   (CLI and/or GUI). A Task may be constructed using Python or C++ (or any
   other language) but must obey the standard interface and must include
   standard parameter and error handling, plus full documentation. It is
   expected that Tasks will be higher-level functional bundlings of toolkit
   operations, but that they will be indistinguishable from tools at the
   user level. This means they must have the XML description to go along
   with the interface.

   "Procedure" - a Python script obeying the (minimal) standard parameter
   interface, but not requiring the same level of error handling or
   documentation as a Task. Similar to AIPS procedures. This means they
   must have some minimal XML description to go along with the interface.

   "Script" - a Python script that strings together toolkit or task
   commands, and which may or may not have an interface or functional
   call. Basically used as examples for processing, but may have more
   general utility.

2. There is no clear need for "Tasks" at this time. Given the pending
   toolkit rationalization, plus the complications of writing down full
   use cases for calibration or imaging, the group could not think of a
   high-priority "Task" that is a must-have right now. We propose to await
   the outcome of the toolkit rationalization (in particular selection and
   general argument uniformity) before reassessing the need for
   programmer-provided Tasks beyond the toolkit. It was also remarked that
   the projects have no clear definition of tasks.

3. A mechanism (as in Pyraf) for setting arguments by setting attributes
   was thought desirable, e.g.
      cb.modelfit(niter=100,...)  ==>  cb.modelfit.niter = 100
   These settings would persist and be used at execution:
      cb.modelfit()
   Note that cb.modelfit(niter=200) would use 200 instead but keep the
   current sticky value, which could perhaps be queried via
   cb.modelfit.niter. Note that this would really make scripting better!
   (See the sketch at the end of this day's summary.)

4. If there were Tasks, then it was strongly felt that the opening/closing
   of tools and the selection of datasets should be hidden from the user
   (e.g. open only if not already open, select only if the selection has
   changed).
5. It was strongly felt that uniform and efficient ms selection is very
   important, and should include:
   a) uniformity across tools (as for all parameters)
   b) the ability to transfer selection params between tools (through .par
      files, or objects)
   c) the ability to associate an ms selection with a particular ms in the
      case of multi-ms inputs (e.g. to imager)
   d) select once until changed (particularly in Tasks)
   e) the desirability of selection objects
   f) the desirability of an ms-server
   g) efficient selection implementation (e.g. only select when needed,
      create a new or sub-ms when warranted)
   h) selection for data restriction and selection for data transformation
      (e.g. mapping ms channels to image channels) should be clear and
      distinct

6. Ways of better manipulating the weighting selection in imager were
   desired (e.g. "and"-ing robust + taper + range weights), maybe through
   objects or a special param set (e.g. im.weight.type,
   im.weight.briggs.robust, im.weight.uvmin - which is distinct from the
   selection uvmin, by the way). More efficient weighting application (not
   creating the weight column until actually needed) might be good.

7. Agreed: Calibrating data weights/errors (as in DOCAL=2) is essential.

8. A better way to specify the mapping of ms channels to image channels is
   desirable, e.g. im.inchannels.bchan, .echan, .nchav and/or
   im.outchannels.nchan, .fstart (or .vstart), .fwidth (or .vwidth).

9. A discussion commenced on what we thought MUST be implemented before
   any CASA v1.0 release. A rough list:
   a) toolkit rationalization (at some level)
   b) data weight calibration
   c) inline documentation uniform and correct across the package
   d) some viewer capability (qtviewer) in CASA
   e) a set of basic scripts (preferably but not necessarily Procedures)
      covering fundamental processing use cases
   f) some key functionality (filler, fitsio, ...)
   We will work on this further...

10. We liked what we saw in Pyraf (in the tutorial) but wanted to
    investigate it further.
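   As an aside to item 3 above: a minimal, purely illustrative Python
   sketch of how such sticky attribute-style parameters might behave. The
   tool and method names (cb.modelfit, niter) follow the example in the
   notes; the wrapper class itself is made up and is not a design.

      class StickyMethod:
          """Wrap a tool method so its keyword parameters persist between calls."""

          def __init__(self, func, **defaults):
              self._func = func
              self._params = dict(defaults)   # sticky parameter values

          def __setattr__(self, name, value):
              if name.startswith('_'):
                  object.__setattr__(self, name, value)
              else:
                  self._params[name] = value  # e.g. cb.modelfit.niter = 100

          def __getattr__(self, name):
              try:
                  return self._params[name]   # query, e.g. cb.modelfit.niter
              except KeyError:
                  raise AttributeError(name)

          def __call__(self, **overrides):
              # Call-time keywords are used for this run only; the sticky
              # values themselves are left untouched.
              args = dict(self._params)
              args.update(overrides)
              return self._func(**args)

      # Illustrative use, with a stand-in for the real implementation:
      def _modelfit_impl(niter=10, sources=1):
          print("modelfit: niter=%d, sources=%d" % (niter, sources))

      class cb:                       # stand-in for the calibrater tool
          modelfit = StickyMethod(_modelfit_impl)

      cb.modelfit.niter = 100         # set and remember
      cb.modelfit()                   # runs with niter=100
      cb.modelfit(niter=200)          # runs with 200; sticky value is still 100
      print(cb.modelfit.niter)        # -> 100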
===============================================================================
Summary for Day 3 2006-03-29

1. After further thought, it was decided that there is a role for tasks.
   Crystal and Walter drafted a trial CLEAN task plus some others. These
   included:
      SELECT
      MAKEMASK
      CLEAN
      MEM  (separate, unless MEM has all the options of CLEAN)
   (I put Tasks in capitals merely to differentiate them from tool
   methods.)

   Tasks are invoked the same as tool methods, using the same interface,
   e.g.
      clean(['wf','mosaic','sd'],...)
   or
      clean(alg=['wf','mosaic','sd'],...)
   or equivalently in ipython
      clean ['wf','mosaic','sd'],...
   (I don't know if we can make this equivalent to 'wf,mosaic,sd', which
   would be nice shorthand.)
   ----
   CLEAN has a number of parameters (arguments) as inputs. The difference
   is that the choice of algorithm (alg = first argument) changes the list
   of parameters shown in the interactive param-setting environment. Note
   that this first parameter can be reset at any time during the
   interactive setting session, which will then readjust the parameter
   list accordingly (restoring any previously set values, or hiding but
   not erasing ones that are no longer relevant). Furthermore, the
   parameters are grouped by function (the groupings below are only
   examples of what is possible).

   Note that, per Joe's prototypes, the parameter interface will be
   invoked using the "input" or "inp" command, e.g.
      inp clean
   You can prime the task by selecting the first argument, e.g.
      inp clean ['wf','mosaic','sd']
   which should bring up:

   [CLEAN]
   _____________________________________________________________________
   Task to deconvolve (multiple) ms using CLEAN.
   NOTE: input ms(s) must have been selected using the SELECT task.
   _____________________________________________________________________
   alg         = 'wf','mosaic','sd'  | Choice of algorithm(s)
   _____________________________________________________________________
   imagenames :: Names of input/output images
   model       =                     | model image (I/O)
   complist    =                     | component list (I/O)
   mask        =                     | mask image (I/O)
   image       =                     | restored image (O)
   residual    =                     | residual image (O)
   psf         =                     | psf (beam) image (O)
   _____________________________________________________________________
   setimage :: Set position and stokes shape of output images
   nx          =
   ny          =
   cellx       =
   celly       =
   stokes      =
   doshift     =
   phasecenter =
   shiftx      =
   shifty      =
   _____________________________________________________________________
   setchannels :: Set frequency axis shape of output images
   mode        =                     | Chan mode (mfs,channel,velocity)
   nspw        =                     | Number of spectral windows
   nchan       =                     | Number of channels for each window
   chwid       =                     | Channel width (GHz,kms) per window
   _____________________________________________________________________
   weighting :: Set controls for visibility gridding weights
   type        =                     | Weight mode (briggs,uniform...
   rmode       =                     | Mode for Briggs weighting
   robust      =                     | Factor for Briggs weighting
   noise       =                     | Noise for Briggs weighting
   ...
   uvmin       =                     | Min uvradius for nonzero weight
   uvmax       =                     | Max uvradius for nonzero weight
   taper       =                     | Taper type (none,Gaussian)
   tapbmaj     =                     | Gaussian taper bmaj
   ...
   _____________________________________________________________________
   clean :: Set common controls for cleaning
   algorithm   =                     | 'clark'|'hogbom'
   niter       =                     | Max number of iterations
   ...
   _____________________________________________________________________
   wf :: Set controls for wide-field imaging
   wplanes     = 1                   | if > 1 then wprojection
   nfacets     = 1                   | if > 1 then uv faceting
   _____________________________________________________________________
   mosaic :: Set controls for mosaicing
   gridtype    =                     | 'image','uv'
   ...
   _____________________________________________________________________
   sd :: Set controls for single-dish data inclusion
   useac       = F                   | Use interferometer autocorrelations
   ...
   _____________________________________________________________________
   interaction :: Set controls for interaction
   interactive = F                   | Interactive clean?
   async       = F                   | Run asynchronously?
   _____________________________________________________________________
   commands: GO, SAVE, GET, ESC, EXIT, SCRIPT, TOOLSCRIPT, HELP, EXPLAIN

   Now if the user were to reset the alg parameter, say to only alg = 'wf',
   then the interface would become:

   [CLEAN]
   _____________________________________________________________________
   Task to deconvolve (multiple) ms using CLEAN.
   NOTE: input ms(s) must have been selected using the SELECT task.
   _____________________________________________________________________
   alg         = 'wf'                | Choice of algorithm(s)
   _____________________________________________________________________
   imagenames :: Names of input/output images
   model       =                     | model image (I/O)
   complist    =                     | component list (I/O)
   mask        =                     | mask image (I/O)
   image       =                     | restored image (O)
   residual    =                     | residual image (O)
   psf         =                     | psf (beam) image (O)
   _____________________________________________________________________
   setimage :: Set position and stokes shape of output images
   nx          =
   ny          =
   cellx       =
   celly       =
   stokes      =
   doshift     =
   phasecenter =
   shiftx      =
   shifty      =
   _____________________________________________________________________
   setchannels :: Set frequency axis shape of output images
   mode        =                     | Chan mode (mfs,channel,velocity)
   nspw        =                     | Number of spectral windows
   nchan       =                     | Number of channels for each window
   chwid       =                     | Channel width (GHz,kms) per window
   _____________________________________________________________________
   weighting :: Set controls for visibility gridding weights
   type        =                     | Weight mode (briggs,uniform...
   rmode       =                     | Mode for Briggs weighting
   robust      =                     | Factor for Briggs weighting
   noise       =                     | Noise for Briggs weighting
   ...
   uvmin       =                     | Min uvradius for nonzero weight
   uvmax       =                     | Max uvradius for nonzero weight
   taper       =                     | Taper type (none,Gaussian)
   tapbmaj     =                     | Gaussian taper bmaj
   ...
   _____________________________________________________________________
   clean :: Set common controls for cleaning
   algorithm   =                     | 'clark'|'hogbom'
   niter       =                     | Max number of iterations
   ...
   _____________________________________________________________________
   wf :: Set controls for wide-field imaging
   wplanes     = 1                   | if > 1 then wprojection
   nfacets     = 1                   | if > 1 then uv faceting
   _____________________________________________________________________
   interaction :: Set controls for interaction
   interactive = F                   | Interactive clean?
   async       = F                   | Run asynchronously?
   _____________________________________________________________________
   commands: GO, SAVE, GET, ESC, EXIT, SCRIPT, TOOLSCRIPT, HELP, EXPLAIN

   Note that the parameters for 'mosaic' and 'sd' have disappeared. They
   would return if you reset (again) alg = ['wf','mosaic','sd'], etc. Thus
   the user has control over what they see at any time without restarting
   the task.

   Some comments:
   - There needs to be a column with the parameter type; I didn't have
     room for it here.
   - The only param whose choice changes the further params shown is the
     first one (alg), for simplicity. One could think of making this
     possible for others (e.g. weight.type='briggs').
   - Since you can input multiple ms, which may have different channel or
     velocity mappings, I wanted to divorce the output frequency-axis
     shape choice from the input (which probably belongs in SELECT). You
     could also choose to map from the first ms and then use standard
     bchan,echan,nchav style params. I made up some params; there's
     probably a better way.
   - It would be best if the names here corresponded directly to method
     arguments in the toolkit, e.g. weighting::type ==> im.weight.type,
     which might guide a toolkit reorganization.
   - It would be good to restrict paging of the input list.
   - At its most basic, this is just a delineated set of AIPS INPuts.
   - It would be nice to have command-line IRAF epar-style scrolling
     inputs.
   - Also have Pyraf epar form-window inputs (where commands are buttons).
   - The "script" command will write the current task call (w/ arguments)
     to a specified .py file. The "toolscript" command will write the
     series of toolkit calls to a .py file.
   - There are some "hidden" parameters/arguments not visible in the
     interactive interface to handle return variables (error, rms,
     cleanflux, and such) plus to allow writing of .par files when used in
     a script (parinit, parsave).
   - Some "return" variables might appear in the interface if they can be
     set in a previous run of the task and are relevant for subsequent
     runs.

   Clarifications:
   - In this model, it is what is set in the param "alg" that determines
     what is shown. In the example above, you see sections for wf, mosaic,
     and sd because those were set in alg=['wf','mosaic','sd']. Note that
     these sections will be missing if you use the default (single-field)
     mode alg=''.
   - The second comment above says that we COULD take this further.
     Obvious tree branches include weight.type='briggs', which then shows
     the rmode, robust, and noise params. My gut feeling is that when we
     graph out this list even for single-field clean we will probably have
     to include these branches to keep the param list manageable.
   (A rough sketch of this alg-driven show/hide behavior is given below.)
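   As an illustration only of the alg-driven show/hide behavior above (the
   data structure and the little inp() printer here are made up, and the
   group and parameter lists are heavily abbreviated):

      CLEAN_GROUPS = [
          # (group name, algorithms that enable it, example params)
          ('imagenames',  None,       ['model', 'mask', 'image']),
          ('setimage',    None,       ['nx', 'ny', 'cellx', 'celly']),
          ('weighting',   None,       ['type', 'robust', 'taper']),
          ('clean',       None,       ['algorithm', 'niter']),
          ('wf',          ['wf'],     ['wplanes', 'nfacets']),
          ('mosaic',      ['mosaic'], ['gridtype']),
          ('sd',          ['sd'],     ['useac']),
          ('interaction', None,       ['interactive', 'async']),
      ]

      def visible_groups(alg):
          """Yield only the parameter groups relevant to the chosen algorithm(s)."""
          chosen = set(alg if isinstance(alg, (list, tuple)) else [alg])
          for name, enabled_by, params in CLEAN_GROUPS:
              if enabled_by is None or chosen & set(enabled_by):
                  yield name, params

      def inp(alg):
          """Print a minimal AIPS-INPuts-like listing for the current alg choice."""
          print('[CLEAN]  alg =', alg)
          for name, params in visible_groups(alg):
              print('  ' + name + ' ::')
              for p in params:
                  print('    %-12s =' % p)

      inp(['wf', 'mosaic', 'sd'])   # shows all of the sections
      inp('wf')                     # the mosaic and sd sections disappear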
2. The SELECT task is special. It essentially (in toolkit parlance) does
   an .open and a .setdata in all tools using the given params. It needs
   to allow multiple-ms assembly for imager (maybe via an addms=T
   parameter to accumulate ms in a virtual list). Since doing the setdata
   in multiple tools (im,cb,ms) might be inefficient, maybe we should
   revisit the idea of a single meta-tool (bigDO) with all the default
   im,cb,ms sharing a single 'workspace'. SELECT would also include the
   option to output a new ms. (A rough sketch of what SELECT might do is
   given below.)

   The MAKEMASK task bundles up the current image and imager stuff that
   lets you make masks (maskfromimage, boxmask, ...) plus interactivemask.
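   A rough sketch (an assumption for illustration, not a design) of what
   SELECT could do underneath: apply one selection to every tool that
   needs it, opening each tool only if necessary and re-running the
   selection only if it has actually changed. The tool methods open() and
   setdata() are the ones referred to above; the Selection class and the
   _select_state caching are hypothetical.

      class Selection:
          """Hypothetical container for a dataset plus selection parameters."""
          def __init__(self, msname, **selectargs):   # e.g. spwid=[0], fieldid=[1]
              self.msname = msname
              self.selectargs = selectargs

      def select(selection, tools):
          """Push one selection into several tools (e.g. im, cb, ms)."""
          for tool in tools:
              state = getattr(tool, '_select_state', None)
              if state is None or state[0] != selection.msname:
                  tool.open(selection.msname)            # open only if not open
              if state != (selection.msname, selection.selectargs):
                  tool.setdata(**selection.selectargs)   # select only if changed
                  tool._select_state = (selection.msname, selection.selectargs)

      # Usage (im and cb standing for the default imager and calibrater tools):
      #   sel = Selection('3C273XC1.ms', spwid=[0], fieldid=[0])
      #   select(sel, [im, cb])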
3. After CLEAN we tried to see how this might work for calibration. An
   example CALIB task would look similar, maybe like:

   [CALIB]
   _____________________________________________________________________
   Task to solve for calibration tables.
   NOTE: input ms(s) must have been selected using the SELECT task.
   _____________________________________________________________________
   what        = 'g'                 | Type: 'g'|'t'|'d'|'b'
   _____________________________________________________________________
   setapply :: Calibration to apply before solving
   ...

   Note that this task really is equivalent to an AIPS CALIB or BPASS or
   PCAL (though writing a table rather than putting it in the AN table),
   and it only does one type, outputting a table. There needs to be
   another task, maybe CALCORRECT, to apply the calibration if desired (or
   do it on-the-fly in imaging?).

4. Since we envision that tasks have the same interface as tool methods,
   tasks can also be built from tasks as well as from the toolkit (mix and
   match). For example, multiple CALIB calls can be assembled into a
   guided full calibration task, maybe AUTOCAL. But this gets dangerously
   close to a pipeline and maybe is best left to the pipeline or its
   emulator in the offline package. NOTE: Since the pipeline is being
   developed (and paid for), it seems inefficient to recreate it in our
   tasks. The finite programmer time might be best spent on the levels
   distinctly between pipeline and toolkit (not too close to either).

5. Though not discussed, I would personally consider moving autoflag up to
   the task level as AUTOFLAG and keeping flag or flagger (including the
   auto options) at the toolkit level. Just MHO. (The toolkit-level
   flagger would then also be largely invisible to the user.)

6. While talking about calibration, it was apparent that AUTOFLAG and
   AUTOCAL plus voluminous plot outputs are insufficient, and there needs
   to be some sort of SNIFFER (using VLBA terminology) to summarize the
   relevant info for diagnostics. An example suggested was some sort of
   VERIFY task that attempts to check the calibration using some basic
   heuristics and notify the user if there are obvious trouble areas. This
   could be combined (in a procedure) with a separate calibration-based
   flagging task.

7. It seems that the task/tool interface parameters will be saved in XML
   files. It was desired that there be some "editor" for users to deal
   with this (there must be some free and simple-to-use XML editor out
   there).

8. Is there a better way to organize calibration tables, which are now
   standalone tables with user-supplied names? Associate them in/with the
   ms? Should we use a default naming (e.g. keyed off of the ms name)?

Non-tasking issues discussed:

9. The need / strong desire for a simple non-root installation of CASA was
   stressed. This must be possible and would greatly improve the release
   and the ability to grab and install this while on the road or visiting
   other institutions. I hope the projects require this :)

10. I had mistakenly stated that it was hard (or impossible) to invoke the
    toolkit constructors to instantiate new tools, e.g. you have im. but
    might like to also have an im1. This can be done and could be made
    easier.

11. Any GUI forms should be navigable via "clickless" means, using arrow
    keys and/or page up/down and tabs.

12. Next - look at the viewer qtviewer...

13. It was pointed out that this is the second chance for aips++ to get
    this (the interface? the package?) "right" and there would not be a
    third.

===============================================================================
Summary for Day 4 2006-03-30

Recess!

===============================================================================
Summary for Day 5 2006-03-31

Morning Session

1. D'oh! No time yet for qtviewer; we will have to talk with David
   offline.

2. We had some issues in the examples with not enough room for the desired
   information (particularly the choices available with defaults noted,
   plus some description of purpose) along with the param and value
   fields. It was pointed out that, like AIPS, this will have to be
   multi-line, making the ability to efficiently page through the
   interface doubly important.

3. There was loud support for the need for some sort of "boxfile" approach
   for input of boxes (regions) for masking or for imaging field locations
   (or facets). These need to be simple ascii files that are easily
   editable. You should be able to point the task (e.g. CLEAN) at these,
   e.g. fields = @boxfile or fieldfile = boxfile. This needs to obey some
   standard format (AIPS STARS files?). There was some discussion of
   whether masks (as images) were the way to go or whether everything
   could be boxfiles. It was pointed out that mask images are useful (they
   can be made easily from other images) and thus should be supported.
   There were questions as to whether masks made from boxes could be
   hidden from the user, say by taking box lists or files as input and
   doing the conversion to masks (or virtual masks) underneath only if
   needed. (A sketch of reading such a boxfile is given below.)
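   A sketch of the kind of simple ascii boxfile handling meant above. The
   file format used here (one box per line as blc_x blc_y trc_x trc_y,
   with '#' comments allowed) is only an assumption for illustration; the
   actual standard (e.g. AIPS STARS) is left open in the discussion.

      def read_boxfile(filename):
          """Return a list of (blc_x, blc_y, trc_x, trc_y) boxes from an ascii file."""
          boxes = []
          with open(filename) as f:
              for line in f:
                  line = line.split('#')[0].strip()    # drop comments and blanks
                  if not line:
                      continue
                  x0, y0, x1, y1 = [int(v) for v in line.split()[:4]]
                  boxes.append((x0, y0, x1, y1))
          return boxes

      def boxes_to_mask(boxes, nx, ny):
          """Convert boxes to a 0/1 mask (list of ny rows by nx columns)."""
          mask = [[0] * nx for _ in range(ny)]
          for x0, y0, x1, y1 in boxes:
              for y in range(max(y0, 0), min(y1, ny - 1) + 1):
                  for x in range(max(x0, 0), min(x1, nx - 1) + 1):
                      mask[y][x] = 1
          return mask

      # A task could then accept fieldfile='my.box', call read_boxfile('my.box'),
      # and convert to a mask image underneath only when the algorithm needs one.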
4. FITS image support (and presumably UV FITS as well), in the viewer and
   in the package in general (images), needs to be robust. If we find
   flavors of files that the viewer or image barfs on (but ds9 swallows),
   then we should send these to the programmers so they can fix things.

5. Once again, sticky parameters were stressed.

6. The key thing to help users seems to be the grouping of tasks and tools
   into operational classes. For example:

      imaging        = uv(+sd) data to images via FT (FFT or DFT)
      calibration    = calibration of data, resulting in improved data
      uv analysis    = analysis of uv data, not via images
      image analysis = analysis of images, not involving uv data
      data handling  = manipulation of data (uv,sd) including editing &
                       flagging
      image handling = manipulation of images including transforms and
                       masking
      visualization  = viewer, general plotting (rest included in other
                       classes)

   also other stuff like

      utilities      = misc things that are needed (measures,quanta,math)
      simulation     = where the instrument simulators live
      ...

   We felt the tasks should be organized by these classes, but accessible
   from a flat system (e.g. from the casapy environment) without loading
   "packages" (a la IRAF), including modules (a la glish aips++), or other
   shenanigans. These are just groupings by functionality in an
   operation-centric view. It was desired that the toolkit (after
   rationalization and reorganization) also be mapped onto these classes,
   e.g. (old names):

      imager     ==> imaging
      calibrater ==> calibration (& uv analysis, like modelfitting)
      images     ==> image analysis, image handling
      viewer     ==> visualization
      ms         ==> data handling

7. Again it was stressed how annoying constructors and the opening/closing
   of data are. It was suggested that these could be handled by also
   organizing the toolkit under a data-centric model:

      im.  (all the methods that expect an image to be opened)
      uv.  (all the methods that expect a uv ms to be opened)

   and then you just need two methods, im.open and uv.open (plus sd.open
   if we want to add single-dish separately). Or have the ms selection go
   through single ms.open calls (plus an extension to multi-ms opens) and
   then have all ms-opening tools follow from that, and similarly with
   im.open. You could then have the toolkit hierarchy be something like
   tool.class.method (or toolclass.method), where tool = im|uv and the
   classes are as above, e.g.

      im.analysis.method    (or imanalysis.method)
      uv.calibration.method (or uvcal.method)

   Note that separate constructors beyond the default im,uv could be made
   for parallel processing, e.g. im1 and uv1. (A toy sketch of this
   data-centric/op-centric layout is given below.)
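   A toy sketch (an illustration only; no such layout exists in the
   package) of how the data-centric tools of item 7 could be crossed with
   the operation-centric classes of item 6, giving tool.class.method
   names:

      class _OpClass:
          """One operation-centric grouping (calibration, analysis, ...) of a tool."""
          def __init__(self, owner, name):
              self._owner, self._name = owner, name
          def placeholder(self):
              # stands in for real methods such as solve(), statistics(), ...
              return '%s.%s acting on %s' % (
                  type(self._owner).__name__, self._name, self._owner._msname)

      class UVTool:
          """Data-centric 'uv' tool: methods here expect an open uv ms."""
          def __init__(self):
              self._msname = None
              self.calibration = _OpClass(self, 'calibration')  # op-centric classes
              self.analysis    = _OpClass(self, 'analysis')
              self.handling    = _OpClass(self, 'handling')
          def open(self, msname):
              self._msname = msname    # the single place a dataset gets attached
              return self

      uv  = UVTool()                   # default tool made at casapy startup
      uv1 = UVTool()                   # extra constructor, e.g. for parallel work
      uv.open('3C273XC1.ms')
      print(uv.calibration.placeholder())   # UVTool.calibration acting on 3C273XC1.ms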
8. The question came up as to whether we should have only tasks, with the
   toolkit hidden or optional. This would require wrapping the necessary
   bits of the toolkit to look like tasks (a fair bit of work unless it
   can be totally automated). A number of the panel members were wary of
   this approach because of the large number of tasks that would then be
   available (it will look like the canonical number of AIPS tasks), and
   because it breaks the idea that tasks were to bundle up common or
   special operations. It could also distract the programmers from working
   on the important stuff. See the afternoon session for further
   discussion of this...

Afternoon "Summary" Session

1. The data-centric (im. uv.) plus op-centric (classes) approach was
   discussed, with mixed reactions. In particular, it was questioned
   whether the users should see the data-centric stuff at all by default.
   It was thought that the toolkit and task hierarchy should only reflect
   the classes. In this case, having single-point open/close through
   simple im.open and ms.open and/or a SELECT task (maybe MSSELECT,
   IMSELECT) is all that is necessary, as long as all tasks and tool
   methods follow along transparently.

2. Brian Glendenning made the provocative statement that the toolkit
   approach has apparently been a failure and that what is needed, in his
   mind, is a task-only look-and-feel (where the toolkit is totally
   hidden). This would mean that all needed capabilities must be there in
   the task list. This seemed daunting, though some sort of auto-wrapping
   process might port much of the toolkit. It was stated that the ONLY
   models that have worked in the past were task-based (AIPS, Miriad,
   IRAF, GILDAS) and thus we should follow. He also reiterated that there
   will likely be no third chance if we fail now (with the casa release,
   particularly to ALMA).

   This approach has a number of implications, of course, for example
   which task list to use (AIPS, Miriad, GILDAS, completely new, just map
   the toolkit). Would it be confusing to have it similar to, but not
   replicating, other packages? Is this too much work? Who gets time
   budgeted to define and oversee the task definitions? It is definitely
   worth assessing this approach. Would it solve all our problems (e.g.
   are the real issues with the whole toolkit approach, or with our
   implementation)? Is it do-able in time? Note that it might have been
   helpful to hear any such project directives (if that's what they are)
   up front rather than letting us pursue unacceptable options (if that's
   what they are).

3. It was agreed that Steve would write the first draft of an official
   summary document. It will be difficult to distill the week's debates
   into a consensus position (plus maybe some options to explore), but it
   must be done.

4. I would like to thank all the panelists who participated in this week's
   focus group, in particular the CV group (John, Crystal, Ed) who
   traveled out for the meeting, and Eric (who has much better things to
   do)! This will have to be an ongoing process to some extent (following
   up on issues, looking at prototypes as they become available, following
   up the viewer stuff), but this was an important kickoff to the process.

===============================================================================
HERE IS THE LIST OF QUESTIONS I POSED AT THE START OF THE CHARETTE
AND THE ANSWERS FROM THE FOCUS GROUP
===============================================================================

Questions and Answers:

1. Command Interface

Background: the ALMA and EVLA SSR docs require us to provide interactive
command-line interfaces (CLI) and optionally some graphical user interfaces
(GUI) to the package. We want to get the look-and-feel right to let the user
(of any level) process their data most efficiently.

- What part(s) of other package interfaces should we import for ours?
  (e.g. AIPS, Miriad, difmap, IRAF, Pyraf, IDL, ?)
  __________________________________________________________________
  A: We liked bits of IRAF (command-line epar), PYRAF (form epar), AIPS
  (simple INPuts), and MIRIAD (short param lists, OS command-line
  execution of tasks).
  __________________________________________________________________

- Is our basic CLI model right? e.g.
  a) enter interface for tool/task
  b) query/set parameter values
  c) save/get parameter values (from/to file)
  d) write out script
  e) get some help (see #4 below)
  f) execute task
  g) escape (quit) without doing anything or saving state
  h) exit (quit) saving parameters
  __________________________________________________________________
  A: Yes
  __________________________________________________________________
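  Regarding item (c) above (and the tput/tget capability called essential
  on Day 1): a minimal sketch of saving and restoring a task's parameter
  values to and from a file. The function names echo the AIPS tput/tget
  commands; the plain "param = value" file format is just an assumption
  for illustration.

      import ast

      def tput(taskname, params, parfile=None):
          """Save a dict of parameter values to <taskname>.par (or parfile)."""
          parfile = parfile or taskname + '.par'
          with open(parfile, 'w') as f:
              for name, value in sorted(params.items()):
                  f.write('%s = %r\n' % (name, value))

      def tget(taskname, parfile=None):
          """Read parameter values back from <taskname>.par (or parfile)."""
          parfile = parfile or taskname + '.par'
          params = {}
          with open(parfile) as f:
              for line in f:
                  if '=' in line:
                      name, value = line.split('=', 1)
                      params[name.strip()] = ast.literal_eval(value.strip())
          return params

      # Usage:
      #   tput('clean', {'alg': ['wf'], 'niter': 500, 'cellx': '0.1arcsec'})
      #   inputs = tget('clean')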
- What method of parameter setting is desired? e.g.
  a) param = value
  b) scrollable menu
  c) tool.method.param = value
  __________________________________________________________________
  A: Yes, (a) at the minimum, (b) if possible, plus a form GUI (see
  below). Also (c) would be very useful.
  __________________________________________________________________

- Should the tool and task interfaces be the same? See #3 below...
  __________________________________________________________________
  A: Yes
  __________________________________________________________________

- Should hierarchical parameter lists be supported? See #3 below...
  __________________________________________________________________
  A: Yes, at least with the first choice (or a few choices) limiting the
  shown parameters for subsequent processing (see our example tasks).
  __________________________________________________________________

- How do we best accommodate users of differing expertise levels (novice,
  intermediate, expert) and needs (casual user, frequent user, heavy
  user)? Do we need different look/feel and if so what should these be?
  __________________________________________________________________
  A: Make the tool, task, and param names as intuitive as possible, order
  and organize them sensibly, and provide good inline help. But don't have
  special super-simple tasks etc. (we can have procedures like VLAPROCS).
  Have the same look and feel across the interfaces.
  __________________________________________________________________

- Does the CASApy interface previewed here and in the latest ALMA test
  look to be on the right track? If not, what should we be doing instead?
  If so, what is right and what can we do better?
  __________________________________________________________________
  A: Yes, if we get simple param=value setting, and modulo the epar
  interfaces that are being worked on. And be sure to get all the
  inp/help/explain level inline documentation into it uniformly! We are
  also curious how the paging of long parameter lists will work
  (command-line IRAF epar might help there).
  __________________________________________________________________

2. Graphical Interface

- Some GUIs are necessary, e.g.
  a) image/data display (viewer)
  b) data plotting (msplot, plotcal)
  c) graphical parameter setting (clean boxes/regions)
  d) data flagging (in viewer and msplot)
  Are the current tools for these on the right track? What would the users
  like to see? Are there better models out there? Should these be simple
  or have extensive functionality (e.g. viewer)?
  __________________________________________________________________
  A: The viewer (in spite of some painful features) is well-liked (even
  loved) and we look forward to getting a full qtviewer. Some thought that
  limited use of external tools like ds9 could be good, as long as format
  (FITS image) conversion is supported.
  __________________________________________________________________

- Is there a role for GUI parameter setting (forms, tool managers)? Should
  the package provide a uniform GUI interface equivalent to the CLI?
  __________________________________________________________________
  A: Yes. The Pyraf epar-like form was thought to be desirable. It was
  strongly felt that "clickless" navigation through up/down arrows, page
  up/down, and tab is necessary.
  __________________________________________________________________

- Should we provide extensive capabilities for custom GUI development by
  the user or is Python enough?
  __________________________________________________________________
  A: No opinion.
  __________________________________________________________________

- What bits can or should be farmed off to standalone apps (e.g. the
  Miriad model)?
  __________________________________________________________________
  A: We need our own viewer and x-y plotting. Can use ds9 for some images.
  __________________________________________________________________
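  A small sketch of the "limited use of external tools like ds9" idea in
  the answers above: export an image to FITS and hand the file to ds9 as a
  separate process. The export step is shown only schematically (an
  image-tool tofits call is assumed here, not quoted from the package).

      import subprocess

      def show_in_ds9(fitsfile, ds9cmd='ds9'):
          """Launch (or attempt to launch) ds9 on an existing FITS file."""
          try:
              return subprocess.Popen([ds9cmd, fitsfile])
          except OSError:
              print('Could not start %s - is it on your PATH?' % ds9cmd)
              return None

      # Schematic usage: convert a package image to FITS first, then display it.
      #   ia.open('mymap.im')        # hypothetical image-tool calls
      #   ia.tofits('mymap.fits')
      #   show_in_ds9('mymap.fits')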
3. Tools and Tasks

- Do we have the models and definitions right? e.g.

  "tool.method" - bottom (fine-grained) level of functionality, important
  for scripting and tool/pipeline development. In C++.

  "task" - a more coarse-grained bundling of functionality akin to AIPS
  and Miriad tasks, used to carry out basic or commonly used sequences of
  data processing operations with "astronomical knowledge" built in. Most
  likely in C++ or Python. Obeys the standard interface, with error
  handling and help/docs.

  "procedure" - a Python script that runs a sequence of tool and/or task
  calls. Can be provided by the project or user-developed. Does have a
  common interface.
  __________________________________________________________________
  A: See above. It was thought that the toolkit was the toolkit, and that
  anything else that had the same interface and support level was a task.
  Procedures would also have some minimal interfacing and documentation,
  and probably are the sort of thing users might write (besides just
  Python scripts). Some also asked about IRAF-style "packages" (which in
  aips++ were categories of tools).
  __________________________________________________________________

- What should be done to rationalize the toolkit (the fundamental level
  beneath the tasking interfaces)? These include:
  a) make sure parameter names and usage are consistent across the toolkit
  b) consolidate "set" methods, particularly to contain common params
  c) setdata => selectdata uniformity (perhaps through an ms server?)
  __________________________________________________________________
  A: YES YES YES! Maybe go further to make sensible method and argument
  names and organization to make task-building easier.
  __________________________________________________________________

- What is the list of key tasks to work on first? What level of
  combination is needed (e.g. imager+calibrater for selfcal)? What are the
  parameters for these? What are the defaults?
  __________________________________________________________________
  A: See above. SELECT, MAKEMASK, CLEAN, MEM, CALIB so far. It will take
  more work to get an exhaustive param list and defaults, and to some
  extent this requires toolkit rationalization first.
  __________________________________________________________________

- Is extensive feedback from tasks (e.g. return variables for pipelines)
  needed?
  __________________________________________________________________
  A: Some feedback is necessary (error codes) plus some possible return
  vars, but the pipeline will probably use special toolkit methods etc. So
  what?
  __________________________________________________________________
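  One possible shape (purely illustrative, not a defined interface) for
  the limited task feedback described above: every task returns a small
  record with an error code plus any result values. The field names
  (error, message, rms, cleanflux) echo the "hidden" return variables
  mentioned in the CLEAN discussion on Day 3.

      def clean_task(alg='', niter=500):
          """Toy task body standing in for a real CLEAN."""
          result = {'error': 0, 'message': '', 'rms': None, 'cleanflux': None}
          try:
              if niter <= 0:
                  raise ValueError('niter must be positive')
              # ... the real deconvolution would happen here ...
              result['rms'] = 0.0          # placeholders, filled by the real task
              result['cleanflux'] = 0.0
          except Exception as exc:
              result['error'] = 1
              result['message'] = str(exc)
          return result

      out = clean_task(alg='wf', niter=500)
      if out['error']:
          print('clean failed: ' + out['message'])
      else:
          print('clean rms = %g' % out['rms'])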
- Do we have the interface models right (see #1 above)? Are there reasons
  to have task interfaces distinct from tools or procedures?
  __________________________________________________________________
  A: Yes. Tasks should look like tool.method calls to the user, who
  doesn't care what's beneath the hood. And no, use the same interface.
  __________________________________________________________________

- Should the tasking model be more "data-centric" (e.g. you choose a
  dataset to work on up top and the tools/tasks follow, perhaps using an
  "attach" command)?
  __________________________________________________________________
  A: We don't like the open/close of the ms in similar tools, so a SELECT
  task looks to be very helpful. No one liked "attach", BTW.
  __________________________________________________________________

- Are global variables desirable? What variables should be pre-defined as
  global (if any)?
  __________________________________________________________________
  A: This was controversial among the panel. Some thought this might be an
  easy way to pass params between tasks and to set working ms lists etc.,
  but others wanted to avoid the danger by using other ways of setting
  these (e.g. the SELECT task). Could be offered as a set of pre-defined
  globals (e.g. $MS_LIST or the moral equivalent).
  __________________________________________________________________

- Should parameter lists for tasks and tools be hierarchical (e.g. setting
  a particular param=value implies a new set becomes available) or flat
  (like AIPS and Miriad)?
  __________________________________________________________________
  A: Yes, at least at one level (the first argument). There was concern
  that too many levels of hierarchy would make it too hard to use. Some
  wanted deeper trees.
  __________________________________________________________________

- What levels of "stickiness" for parameters local to tasks are desirable?
  Are all params sticky (until reset) or does the user specify which ones
  are sticky?
  __________________________________________________________________
  A: Sticky within subsequent runs of a task. Ability to save/get params.
  Some "global" state setting (SELECT task). It was hoped we could get
  command-line access to task or tool.method params via attributes, e.g.
  im.clean.niter = 100, and these could be sticky. Command-line
  tool.method(param1=value1, param2=value2) would not be sticky.
  __________________________________________________________________

- Can the open/close (e.g. of the ms in tools) be hidden from the user
  sensibly, yet still not hurt efficiency and flexibility for large
  datasets?
  __________________________________________________________________
  A: Yes, via a SELECT task, maybe in combination with a metatool/bigDO
  (im+cb+ia+ms+...) for the default tools in the casapy session.
  __________________________________________________________________

4. Documentation

- What are the levels of documentation needed? e.g.
  a) quick tool/task info (e.g. INP)
  b) more extensive help (e.g. HELP)
  c) detailed explanations (EXPLAIN)
  d) Cookbooks
  e) Getting-started docs (or merge with Cookbook?)
  f) user programming docs
  g) detailed programmer docs
  __________________________________________________________________
  A: (a), (b), (c) are absolutely critical for users. A good Cookbook is
  important and the current one is looking good. Merge (e) with (d). The
  (f) as a User Reference Manual could be useful, but if (a)-(c) are good
  it is not as necessary. And (g) is for the project, not users.
  __________________________________________________________________

- What (inline) help is needed at the CLI?
  __________________________________________________________________
  A: INP level in the interface or default query (?). HELP, EXPLAIN via
  command or button. Possibly the option to drive a browser to a webpage,
  but I'm not fond of that...
  __________________________________________________________________

- What should the (online) documentation contain?
  __________________________________________________________________
  A: The Cookbook is very important. A good User Reference Manual,
  consistent with and including the inline help (a)+(b)+(c), should be
  available (maybe built from the inline help). We want good examples
  also.
  __________________________________________________________________
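  A toy sketch of the three levels of inline documentation rated critical
  above - (a) info/INP, (b) help, (c) explain - keyed off a single
  per-task record. Nothing here is a real interface; the registry and the
  lookup functions are made up, and such a record could presumably be
  generated from the task XML.

      TASK_DOCS = {
          'clean': {
              'info':    'Deconvolve (multiple) ms using CLEAN.',
              'help':    'clean(alg, ...): per-parameter descriptions would '
                         'go here, matching the inp listing.',
              'explain': 'A longer discussion of the CLEAN variants (clark, '
                         'hogbom, wf, mosaic, sd) and when to use each.',
          },
      }

      def info(task):
          print(TASK_DOCS[task]['info'])       # one-liner, shown with inp

      def help_task(task):                     # named to avoid Python's help()
          print(TASK_DOCS[task]['help'])       # per-parameter help

      def explain(task):
          print(TASK_DOCS[task]['explain'])    # detailed explanation

      info('clean')
      explain('clean')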
- Who needs to create and maintain what documentation? (Note - there is
  currently no budget provided to the aips++/casa group that would allow
  extensive user documentation beyond basic interface info.)
  __________________________________________________________________
  A: A dedicated Documentation Librarian / Editor will be required. It
  will need dedicated scistaff input, but the help will need to be written
  by someone very familiar with the package, likely at the programming
  level. It will be a challenge to the projects to provide the necessary
  resources... NOTE: John jotted down 0.1 FTE (1/2 day/wk year 'round)
  from the NAASC for this. I would hazard to guess this is probably more
  than a factor of 2 too low, presuming new code is being written and new
  observing modes are being implemented at a reasonable rate even through
  2016.
  __________________________________________________________________

- Is there a role for consensual community documentation (e.g. wikipedia)?
  __________________________________________________________________
  A: Possibly, but this was controversial. Good inline documentation is
  most critical and it's not apparent how that could be done via wikifu.
  However, it was recognized that there MUST be a way to get the
  documentation out there in a timely manner, and a good wiki (not twiki),
  if carefully controlled (group access only), could help.
  __________________________________________________________________

X. Other comments on stuff like:

- Distribution (is this working for people?)
  __________________________________________________________________
  A: rpm's are good for now and seem to work mostly (some hiccups). We
  really want a non-root install!
  __________________________________________________________________

- Known Issues (existing stuff that doesn't work right and had better be
  fixed)
  __________________________________________________________________
  A: Needs more thought... stay tuned.
  __________________________________________________________________

- Existing Schedule/Plan (Query to the Projects: is this schedule ok?)
  __________________________________________________________________
  A: Not our call.
  __________________________________________________________________