
DTIC ADA260606: Measures of User-System Interface Effectiveness: An Encoding Scheme and Indicators for Assessing the Usability of Graphical, Direct-Manipulation Style User Interfaces PDF

105 Pages·4.8 MB·English


DTIC AD-A260 606

Measures of User-System Interface Effectiveness: An Encoding Scheme and Indicators for Assessing the Usability of Graphical, Direct-Manipulation Style User Interfaces

MTR 92B0000047V3
January 1993

Donna L. Cuomo
Charles D. Bowen

The MITRE Corporation, Bedford, Massachusetts

REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

2. REPORT DATE: January 1993
4. TITLE AND SUBTITLE: Measures of User-System Interface Effectiveness: An Encoding Scheme and Indicators for Assessing the Usability of Graphical, Direct-Manipulation Style User Interfaces
6. AUTHOR(S): Donna L. Cuomo, Charles D. Bowen
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): The MITRE Corporation, 202 Burlington Road, Bedford, MA 01730-1420
8. PERFORMING ORGANIZATION REPORT NUMBER: MTR 92B0000047V3
9./10. SPONSORING/MONITORING AGENCY: same as above
12a. DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution unlimited
13. ABSTRACT: See attached.
14. SUBJECT TERMS: User-system interface, Human-computer interaction
15. NUMBER OF PAGES: 92
16. PRICE CODE: (blank)
17.-19. SECURITY CLASSIFICATION (OF REPORT / OF THIS PAGE / OF ABSTRACT): Unclassified
20. LIMITATION OF ABSTRACT: Unlimited

Contract Sponsor: MSR. Contract No.: N/A. Project No.: 9162A. Dept.: D047.

Approved for public release; distribution unlimited.

Department Approval: Nancy C. Goodwin
MITRE Project Approval: Donna L. Cuomo

ABSTRACT

The purpose of this MITRE Sponsored Research project was to develop methods and measures for evaluating user-system interface effectiveness for command and control systems with graphical, direct-manipulation style interfaces. Due to the increased use of user interface prototyping during the concept definition and demonstration/validation phases, the opportunity exists for human factors engineers to apply evaluation methodologies early enough in the life cycle to make an impact on system design. Understanding and improving user-system interface (USI) evaluation techniques is critical to this process. In 1986, Norman proposed a descriptive "stages of user activity" model of human-computer interaction (HCI). Hutchins, Hollan, and Norman (1986) proposed concepts of measures, based on that model, which would assess the directness of the engagements between the user and the interface at each stage. We created operational definitions of these concepts of directness and derived observable indicators that certain types of indirectness may exist in an interface design.
This phase of our research program used these concepts as the basis for a methodology for analyzing data collected during usability studies. A usability study was performed on the Military Airspace Management System (MAMS) prototype; four participants' and one user-interface expert's data were used for further analysis. We first demonstrated that assessing concepts such as the directness of user-system interface engagements requires knowing both what the user intended to do and what the user actually did. This involves integrating data collected via different media: computer-collected keystrokes, transcribed user protocols, and video of the display output. A model-based, two-level encoding scheme was then created and applied to the usability data to aid in extracting and quantifying measures of USI effectiveness. The first level provides a high-level description of user activity, depicting users' task intentions, intentions to execute, errors by stage, and the success of their endeavors. The second level provides detailed information on the users' input activities at the user-interface-object level. The two levels combined provide a complete description of what the users wanted to do, how they did it, and how directly the system allowed them to do it. We then manually extracted our derived indicators of indirectness from each user's data and were able to perform a much more complete and quantifiable analysis of the user-system interface than would have been possible with more traditional evaluation methods. Examples of usability problems identified with this method are provided, and we discuss the need for a computer tool to make application of the method more efficient.

EXECUTIVE SUMMARY

INTRODUCTION

The focus of the project Measures of User-System Interface Effectiveness is to study and validate methodologies and measures for analyzing the overall effectiveness of user-system interfaces (USI) for task performance.
There is an increased emphasis on user-centered system design, which involves designing a system from the user's perspective: the concepts, objects, and actions embodied in the system closely match the user's task concepts, objects, and actions, allowing users to interact with the computer task domain in a direct way. This report, the third in a series of MSR reports, documents the evaluation methodology we developed for analyzing data collected in usability studies, and provides examples of the method applied to a prototyped system.

MEASURING GRAPHICAL, DIRECT-MANIPULATION STYLE INTERFACES

The class of interfaces we were interested in evaluating were graphical, direct-manipulation style interfaces supporting ill-defined tasks. Ill-defined tasks are tasks with more than one correct solution and alternative methods for performing them. This class of applications includes scheduling tasks, mission planning tasks, and computer-aided architectural design tasks. These can be contrasted with well-defined tasks, such as some data entry tasks, where there is one correct solution, e.g., a document is entered into the system and edited until error free. The attributes of the interface (direct manipulation and graphical) as well as the ill-defined nature of the tasks make traditional USI evaluation measures less useful in terms of the feedback they provide. Traditional USI evaluation measures tend to be summary measures such as time to complete a task, percent of task completed, time spent in errors, percent or number of errors, command frequency, etc. (Whiteside et al., 1988). These are gross measures, and while various aspects of the interface will undoubtedly affect them, this type of measure alone does not provide enough granularity and diagnostic information on each user interaction with the system.
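The summary measures listed above are cheap to compute from a session log, which is part of why they dominate. A minimal sketch, assuming a hypothetical log format of `(timestamp, event, is_error)` tuples (not a format from the report):

```python
# Sketch: classic summary usability measures from a session log.
# The log format -- a list of (timestamp, event, is_error) tuples,
# sorted by time -- is an assumption for illustration.

def summary_measures(log, task_done_fraction):
    """Compute coarse, traditional usability measures for one session."""
    start, end = log[0][0], log[-1][0]
    errors = [entry for entry in log if entry[2]]
    return {
        "time_to_complete": end - start,              # seconds
        "percent_complete": 100.0 * task_done_fraction,
        "error_count": len(errors),
        "percent_errors": 100.0 * len(errors) / len(log),
    }

log = [
    (0.0, "open-menu", False),
    (2.0, "wrong-command", True),
    (5.0, "schedule", False),
]
```

Note how little diagnostic information survives: such measures record that an error occurred, but not at which stage of the interaction or why, which is exactly the gap an indicator-based method is meant to fill.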
Additionally, the concepts of direct manipulation raise a virtually unexplored question: how to define and measure directness precisely enough that the concepts can be applied in practice. In summary, an assessment method for this class of interfaces needs to be defined.

CONCEPTS OF SEMANTIC AND ARTICULATORY DISTANCE

Norman (1986), and Hutchins, Hollan, and Norman (1986), provide a good treatment of concepts of directness in user-system engagements. In their conceptual model of human-computer interaction they describe seven stages a user could traverse while accomplishing a goal with a computer: goal formation, intention formation, action specification, execution, perception, interpretation, and evaluation. They then define four concepts of distance which are critical to making a design user-centered: semantic and articulatory distance of execution, and semantic and articulatory distance of evaluation. Semantic distance of execution spans the intention formation stage and concerns whether the user can say what he/she wants to say directly with the computer system or whether a complex expression is required. Articulatory distance of execution spans the action specification stage and reflects the closeness of the form of the action to be executed to the meaning of the input expression. These are followed by the stages of execution and perception, the stages spanning the translation from mental state to physical activity and back again. Articulatory distance of evaluation spans the interpretation stage and concerns how easily the meaning of the output expression can be extracted from its form. Semantic distance of evaluation concerns the ease with which users can determine whether they accomplished their goal. These concepts are complex and intriguing but still rather high-level. Characterizing a system by how well it supports the different stages, however, would provide the right level of information needed to successfully iterate a design.
We derived indicators, or behaviors, of indirectness for each stage, based on Hutchins et al.'s concepts of directness; one set of indicators is shown below. Identifying the indicators involves collecting and evaluating user-system performance at an interaction-by-interaction level, where the sequencing of engagements is important. We derived a model-based methodology which allows us to do this.

Causes of semantic indirectness of execution and the corresponding observable indicators:

  User intention not supported
    - Protocol stating desired function
    - Attempting to execute unsupported function, forced to abort
  Missing high-level object
    - Same step or set of actions repeated on lower-level objects
  Complex expression required to accomplish intention
    - Many steps/actions required to complete intention
    - Errors in step order
    - Incomplete/aborted intentions

Causes of semantic indirectness of evaluation and the corresponding observable indicators:

  Extra step(s) required to perform an evaluation
    - Number and purpose of steps performed (e.g., to get information, or to "check" something)
  Difficult or user unable to perform an evaluation
    - Frequency and types of evaluation errors
  Evaluation not made

THE METHODOLOGY

The methodology consisted of four major steps. The first step was to conduct a usability study: real users exercise a system or prototype while evaluators collect data on the process. We determined that both verbal protocol data (where users are asked to voice their thoughts aloud) and time-stamped, computer-collected history logs (recording all of the users' input actions) are required to assess the four directness-of-engagement concepts. Protocols provide information about what a user intends to do, while the history log provides information about how the user did it. The latter is easier to collect and analyze but is ambiguous and insufficient if used alone.
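The pairing of protocol and history-log data described above amounts to a timestamp merge: each input action is matched with the utterance in effect at that moment. A minimal sketch, assuming hypothetical record formats (the report's actual log and transcript formats are not specified here):

```python
# Sketch: merge a time-stamped keystroke history log with transcribed
# verbal-protocol segments. Record formats are assumptions:
#   history:  [(time, action)], sorted by time
#   protocol: [(start_time, utterance)], sorted by time

def merge_logs(history, protocol):
    """Return [(time, action, utterance_in_effect)] for each input action."""
    merged = []
    i = 0  # index of the protocol segment currently "in effect"
    for t, action in history:
        # advance to the last utterance that began at or before time t
        while i + 1 < len(protocol) and protocol[i + 1][0] <= t:
            i += 1
        in_effect = protocol[i][1] if protocol and protocol[i][0] <= t else None
        merged.append((t, action, in_effect))
    return merged

history = [(1.0, "open-menu"), (2.5, "select Schedule"), (4.0, "click OK")]
protocol = [(0.5, "I want to schedule the mission"), (3.5, "now confirm it")]
```

The merged record is what makes intent visible: the same "click OK" action means something different under "now confirm it" than under an earlier utterance, which is why the history log alone is ambiguous.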
A usability study was conducted using a prototyped airspace management scheduling system. Data were collected on seven participants, and the method was applied to five of the participants' data. One of the participants was the USI design engineer for the project and served as our "user-interface expert" participant. The second step involved integrating the collected data by combining the transcribed user protocols with the appropriate portions of the user's history file; this was done manually. The third step involved developing and applying a two-level encoding scheme, based on Norman's model, to the data. The first level of the encoding scheme provides a high-level description of user activity, depicting users' task intentions, intentions to execute, errors by stage, and the success of each endeavor. The second level provides detailed information on the users' input activities at the user-interface-object level. The two levels combined provide a complete description of what the users wanted to do, how they did it, and how well they did it. The codes and their descriptions are shown in the tables below.

Semantic-Level Encodings

  Goal                                      Scenario step.
  Task intention (Int.task)                 An intention to complete one task contributing to the completion of a goal.
  Perception intention (Int.per)            An intention to improve the perceptibility of a display.
  Intention to execute (Int.exe)            One computer step (may be comprised of multiple actions) leading to the completion of a task intention. Several steps may be required per task intention.
  Evaluate (Eval)                           The success with which the intention was accomplished.
  Error in intention (Err.int)              The intention was incorrect and will not accomplish the goal.
  Error in action specification (Err.acsp)  Wrong sequence of actions to accomplish the intention to execute.
  Error in execution (Err.exec)             Manual, motor error in executing.
  Error in perception (Err.per)             Breakdown in human perceptual processing of information on a display.
  Error in interpretation (Err.inter)       User fails to interpret system state correctly.
  Error in evaluation (Err.eval)            User mistakenly thinks he/she has or has not moved closer to the goal.
  Recovered error (Rec.err)                 Error was detected and recovered from.

Articulatory-Level Encodings

  Menu                          A menu was opened.
  Command                       A command was selected.
  List-select                   An item was selected from a list.
  Button                        A button was selected.
  Field                         An action was taken in a field.
  Scroll                        A scroll bar action was performed.
  Window                        A window action was performed.
  Application-specific objects  Encodings to track the manipulation of application-specific objects.

The encoding of the data was done with the aid of a tool called SHAPA, developed at the University of Illinois at Urbana-Champaign. The fourth step in the evaluation methodology involved extracting the indicators of interest from the encoded data files and comparing them across users. For ease of recording the extracted information, we created a data summarization table: for each user task intention, the critical information is summarized in a manner which allows for easy comparison across subjects. The report shows an excerpt from one participant's summary table for the task "schedule missions," with columns for the task intention (Int.task), its frequency, each intention to execute (Int.exec), its frequency, the number of actions per Int.exec, the evaluation of each Int.exec, the evaluation of the Int.task, errors, and comments. (The excerpt's row data is not legible in this scan.)
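Once sessions are encoded, the extraction in step four becomes a mechanical tally over the code sequence. The record format and summary fields below are assumptions for the sketch, not SHAPA's actual output format:

```python
from collections import Counter

# Sketch: tally indicator-relevant statistics from first-level encodings.
# Each record is an assumed (code, detail) pair, where codes follow the
# Semantic-Level Encodings table (Int.task, Int.exe, Err.*, Eval).

def summarize(records):
    """Summarize one user's encoded session for cross-user comparison."""
    errors_by_stage = Counter(code for code, _ in records if code.startswith("Err."))
    intents = sum(1 for code, _ in records if code == "Int.task")
    executes = sum(1 for code, _ in records if code == "Int.exe")
    return {
        "task_intentions": intents,
        "intents_to_execute": executes,
        # many Int.exe steps per Int.task is one indicator that a
        # "complex expression" is required to accomplish the intention
        "steps_per_intention": executes / intents if intents else 0.0,
        "errors_by_stage": dict(errors_by_stage),
    }

session = [
    ("Int.task", "schedule mission"),
    ("Int.exe", "open schedule window"),
    ("Err.acsp", "wrong menu opened first"),
    ("Int.exe", "drag mission to slot"),
    ("Eval", "OK"),
]
```

Grouping error counts by stage, rather than reporting a single error total, is what lets the analyst trace an indirectness indicator back to a specific stage of Norman's model.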
