Measuring the User Experience
Collecting, Analyzing, and Presenting Usability Metrics

Second Edition

Tom Tullis
Bill Albert

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK
OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO
SINGAPORE • SYDNEY • TOKYO

Morgan Kaufmann is an imprint of Elsevier

Acquiring Editor: Meg Dunkerley
Editorial Project Manager: Heather Scherer
Project Manager: Priya Kumaraguruparan
Cover Designers: Greg Harris, Cheryl Tullis

Morgan Kaufmann is an imprint of Elsevier
225 Wyman Street, Waltham, MA 02451, USA

© 2013 Published by Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies, and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods or professional practices may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information or methods described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data
Tullis, Tom (Thomas)
  Measuring the user experience : collecting, analyzing, and presenting usability metrics / William Albert, Thomas Tullis.
  pages cm
  Revised edition of: Measuring the user experience / Tom Tullis, Bill Albert. 2008.
  Includes bibliographical references and index.
  ISBN 978-0-12-415781-1
  1. User interfaces (Computer systems)  2. User interfaces (Computer systems)—Measurement.  3. Measurement.  4. Technology assessment.  I. Albert, Bill (William)  II. Title.
  QA76.9.U83T95 2013
  005.4'37—dc23
  2013005748

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

Printed in the United States of America
13 14 15 16 17   10 9 8 7 6 5 4 3 2 1

For information on all MK publications visit our website at www.mkp.com

Dedication

Tom: To my wife, Susan, and my daughters, Cheryl and Virginia
Bill: To my late father, Lee Albert, and late mother-in-law, Gita Mitra

Contents

PREFACE TO THE SECOND EDITION xiii
ACKNOWLEDGMENTS xv
BIOGRAPHIES xvii

CHAPTER 1 Introduction 1
  1.1 What Is User Experience? 4
  1.2 What Are User Experience Metrics? 6
  1.3 The Value of UX Metrics 8
  1.4 Metrics for Everyone 9
  1.5 New Technologies in UX Metrics 10
  1.6 Ten Myths about UX Metrics 11
    Myth 1: Metrics Take Too Much Time to Collect 11
    Myth 2: UX Metrics Cost Too Much Money 12
    Myth 3: UX Metrics Are Not Useful When Focusing on Small Improvements 12
    Myth 4: UX Metrics Don’t Help Us Understand Causes 12
    Myth 5: UX Metrics Are Too Noisy 12
    Myth 6: You Can Just Trust Your Gut 13
    Myth 7: Metrics Don’t Apply to New Products 13
    Myth 8: No Metrics Exist for the Type of Issues We Are Dealing with 13
    Myth 9: Metrics Are Not Understood or Appreciated by Management 14
    Myth 10: It’s Difficult to Collect Reliable Data with a Small Sample Size 14

CHAPTER 2 Background 15
  2.1 Independent and Dependent Variables 16
  2.2 Types of Data 16
    2.2.1 Nominal Data 16
    2.2.2 Ordinal Data 17
    2.2.3 Interval Data 18
    2.2.4 Ratio Data 19
  2.3 Descriptive Statistics 19
    2.3.1 Measures of Central Tendency 19
    2.3.2 Measures of Variability 21
    2.3.3 Confidence Intervals 22
    2.3.4 Displaying Confidence Intervals as Error Bars 24
  2.4 Comparing Means 25
    2.4.1 Independent Samples 26
    2.4.2 Paired Samples 27
    2.4.3 Comparing More Than Two Samples 29
  2.5 Relationships Between Variables 30
    2.5.1 Correlations 30
  2.6 Nonparametric Tests 31
    2.6.1 The χ2 Test 31
  2.7 Presenting Your Data Graphically 32
    2.7.1 Column or Bar Graphs 33
    2.7.2 Line Graphs 35
    2.7.3 Scatterplots 36
    2.7.4 Pie or Donut Charts 38
    2.7.5 Stacked Bar or Column Graphs 39
  2.8 Summary 40

CHAPTER 3 Planning 41
  3.1 Study Goals 42
    3.1.1 Formative Usability 42
    3.1.2 Summative Usability 43
  3.2 User Goals 44
    3.2.1 Performance 44
    3.2.2 Satisfaction 44
  3.3 Choosing the Right Metrics: Ten Types of Usability Studies 45
    3.3.1 Completing a Transaction 45
    3.3.2 Comparing Products 47
    3.3.3 Evaluating Frequent Use of the Same Product 47
    3.3.4 Evaluating Navigation and/or Information Architecture 48
    3.3.5 Increasing Awareness 48
    3.3.6 Problem Discovery 49
    3.3.7 Maximizing Usability for a Critical Product 50
    3.3.8 Creating an Overall Positive User Experience 51
    3.3.9 Evaluating the Impact of Subtle Changes 51
    3.3.10 Comparing Alternative Designs 52
  3.4 Evaluation Methods 52
    3.4.1 Traditional (Moderated) Usability Tests 53
    3.4.2 Online (Unmoderated) Usability Tests 54
    3.4.3 Online Surveys 56
  3.5 Other Study Details 57
    3.5.1 Budgets and Timelines 57
    3.5.2 Participants 58
    3.5.3 Data Collection 60
    3.5.4 Data Cleanup 60
  3.6 Summary 61

CHAPTER 4 Performance Metrics 63
  4.1 Task Success 65
    4.1.1 Binary Success 66
    4.1.2 Levels of Success 70
    4.1.3 Issues in Measuring Success 73
  4.2 Time on Task 74
    4.2.1 Importance of Measuring Time on Task 75
    4.2.2 How to Collect and Measure Time on Task 75
    4.2.3 Analyzing and Presenting Time-on-Task Data 78
    4.2.4 Issues to Consider When Using Time Data 81
  4.3 Errors 82
    4.3.1 When to Measure Errors 82
    4.3.2 What Constitutes an Error? 83
    4.3.3 Collecting and Measuring Errors 84
    4.3.4 Analyzing and Presenting Errors 84
    4.3.5 Issues to Consider When Using Error Metrics 86
  4.4 Efficiency 86
    4.4.1 Collecting and Measuring Efficiency 87
    4.4.2 Analyzing and Presenting Efficiency Data 88
    4.4.3 Efficiency as a Combination of Task Success and Time 90
  4.5 Learnability 92
    4.5.1 Collecting and Measuring Learnability Data 94
    4.5.2 Analyzing and Presenting Learnability Data 94
    4.5.3 Issues to Consider When Measuring Learnability 96
  4.6 Summary 96

CHAPTER 5 Issue-Based Metrics 99
  5.1 What Is a Usability Issue? 100
    5.1.1 Real Issues versus False Issues 101
  5.2 How to Identify an Issue 102
    5.2.1 In-Person Studies 102
    5.2.2 Automated Studies 103
  5.3 Severity Ratings 103
    5.3.1 Severity Ratings Based on the User Experience 104
    5.3.2 Severity Ratings Based on a Combination of Factors 105
    5.3.3 Using a Severity Rating System 106
    5.3.4 Some Caveats about Rating Systems 107
  5.4 Analyzing and Reporting Metrics for Usability Issues 107
    5.4.1 Frequency of Unique Issues 108
    5.4.2 Frequency of Issues Per Participant 109
    5.4.3 Frequency of Participants 109
    5.4.4 Issues by Category 110
    5.4.5 Issues by Task 111
  5.5 Consistency in Identifying Usability Issues 111
  5.6 Bias in Identifying Usability Issues 113
  5.7 Number of Participants 115
    5.7.1 Five Participants Is Enough 115
    5.7.2 Five Participants Is Not Enough 117
    5.7.3 Our Recommendation 118
  5.8 Summary 119