15.6 DESIGN EVALUATION
Once an operational user interface prototype has been created, it must be evaluated to determine whether it meets the needs of the user. Evaluation can span a formality spectrum that ranges from an informal "test drive," in which a user provides impromptu feedback, to a formally designed study that uses statistical methods to evaluate questionnaires completed by a population of end-users.
The user interface evaluation cycle takes the form shown in Figure 15.3. After the design model has been completed, a first-level prototype is created. The prototype is evaluated by the user, who provides the designer with direct comments about the efficacy of the interface. In addition, if formal evaluation techniques are used (e.g., questionnaires, rating sheets), the designer may extract information from these data (e.g., 80 percent of all users did not like the mechanism for saving data files). Design modifications are made based on user input, and the next-level prototype is created.

[Figure 15.3: The interface evaluation cycle — interface design; build prototype #1 interface; evaluation is studied by designer; design modifications are made; build prototype #n interface; interface design is complete]
The evaluation cycle continues until no further modifications to the interface design are necessary.

"A computer terminal is not some clunky old television with a typewriter in front of it. It is an interface where the mind and body can connect with the universe and move bits of it about."
—Douglas Adams

The prototyping approach is effective, but is it possible to evaluate the quality of a user interface before a prototype is built? If potential problems can be uncovered and corrected early, the number of loops through the evaluation cycle will be reduced and development time will shorten. If a design model of the interface has been created, a number of evaluation criteria [MOR81] can be applied during early design reviews:
1. The length and complexity of the written specification of the system and its interface provide an indication of the amount of learning required by users of the system.
2. The number of user tasks specified and the average number of actions per task provide an indication of interaction time and the overall efficiency of the system.
3. The number of actions, tasks, and system states indicated by the design model implies the memory load on users of the system.
4. Interface style, help facilities, and error handling protocol provide a general indication of the complexity of the interface and the degree to which it will be accepted by the user.
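As a rough sketch, the first three criteria reduce to simple counts over a design model. The `DesignModel` structure and its field names below are invented for illustration, not part of any established notation; the weighting of each indicator would in practice be calibrated against comparable systems:

```python
from dataclasses import dataclass

@dataclass
class DesignModel:
    """Hypothetical summary of an interface design model (illustrative only)."""
    spec_word_count: int          # length of the written specification (criterion 1)
    actions_per_task: list[int]   # actions required by each specified user task
    system_states: int            # distinct system states in the design model

def early_review_indicators(model: DesignModel) -> dict[str, float]:
    """Derive the three countable indicators described in the criteria above."""
    tasks = len(model.actions_per_task)
    total_actions = sum(model.actions_per_task)
    return {
        "learning_load": model.spec_word_count,                       # criterion 1
        "avg_actions_per_task": total_actions / tasks,                # criterion 2
        "memory_load": total_actions + tasks + model.system_states,   # criterion 3
    }

model = DesignModel(spec_word_count=4200,
                    actions_per_task=[3, 5, 2, 7],
                    system_states=12)
print(early_review_indicators(model))
```

Comparing these numbers across two candidate designs during a review can flag the heavier design before any prototype exists.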
Once the first prototype is built, the designer can collect a variety of qualitative and quantitative data that will assist in evaluating the interface. To collect qualitative data, questionnaires can be distributed to users of the prototype. Questions can be (1) simple yes/no response, (2) numeric response, (3) scaled (subjective) response, or (4) percentage (subjective) response. Examples are
1. Were the icons self-explanatory? If not, which icons were unclear?
2. Were the actions easy to remember and to invoke?
3. How many different actions did you use?
4. How easy was it to learn basic system operations (scale 1 to 5)?
5. Compared to other interfaces you've used, how would this rate—top 1%, top 10%, top 25%, top 50%, bottom 50%?
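Tabulating such responses is straightforward. The sketch below assumes one list of answers per question type; the sample data and variable names are invented for illustration:

```python
from statistics import mean

# Hypothetical raw responses from five users of the prototype:
yes_no = [True, True, False, True, True]   # Q1: were the icons self-explanatory?
numeric = [6, 4, 9, 5, 7]                  # Q3: how many different actions did you use?
scaled = [4, 5, 3, 4, 5]                   # Q4: ease of learning, scale 1 to 5

summary = {
    "pct_yes": 100 * sum(yes_no) / len(yes_no),   # yes/no questions -> percentage
    "mean_actions_used": mean(numeric),           # numeric questions -> average
    "mean_learnability": mean(scaled),            # scaled questions -> average rating
}
print(summary)
```

A result such as an 80 percent "yes" rate on the icon question, or a low mean learnability score, points directly at the part of the interface to modify in the next prototype.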
If quantitative data are desired, a form of time study analysis can be conducted. Users are observed during interaction, and data—such as number of tasks correctly completed over a standard time period, frequency of actions, sequence of actions, time spent "looking" at the display, number and types of errors, error recovery time, time spent using help, and number of help references per standard time period—are collected and used as a guide for interface modification.
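Because observation sessions rarely have equal length, the observed counts must be normalized to the standard time period before they can be compared. A minimal sketch of that normalization, with a hypothetical `Session` record and invented sample data:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One observed user session (fields chosen from the metrics listed above)."""
    minutes: float         # length of the observation
    tasks_completed: int   # tasks correctly completed
    errors: int            # number of errors of any type
    help_references: int   # number of references to help

def per_period_rates(sessions: list[Session], period_minutes: float = 60.0) -> dict[str, float]:
    """Scale raw counts to a standard observation period for comparison."""
    total_minutes = sum(s.minutes for s in sessions)
    scale = period_minutes / total_minutes
    return {
        "tasks_per_period": scale * sum(s.tasks_completed for s in sessions),
        "errors_per_period": scale * sum(s.errors for s in sessions),
        "help_refs_per_period": scale * sum(s.help_references for s in sessions),
    }

sessions = [Session(30, 6, 2, 1), Session(45, 10, 1, 3), Session(45, 8, 4, 2)]
print(per_period_rates(sessions))
```

Tracking these rates from prototype #1 through prototype #n shows whether each round of modification is actually improving the interface.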
A complete discussion of user interface evaluation methods is beyond the scope of this book. For further information, see [LEA88] and [MAN97].