Chapter 16: Evaluation: Inspections, Analytics & Models
Evaluation methods that do not require direct interaction with or observation of users are crucial for assessing usability when access to the target population is limited; they rely instead on codified knowledge, remote data logging, or performance-estimation formulas. These methods fall into three broad categories: inspections, analytics, and predictive models.

Inspections, such as heuristic evaluation and walk-throughs, involve expert researchers role-playing typical users to identify potential usability problems by comparing interface elements against established principles. Heuristic evaluation, notably formalized by Jakob Nielsen, applies a set of usability standards (such as ensuring system status visibility, maximizing user control, maintaining consistency, and minimizing memory load) to assess dialog boxes, navigation structure, and overall design effectiveness. Specialized heuristics, including the POUR principles (Perceivable, Operable, Understandable, Robust) derived from the Web Content Accessibility Guidelines (WCAG), focus specifically on serving users with disabilities.

Cognitive walk-throughs, by contrast, are a more granular inspection method focused on ease of learning: they simulate the steps a user takes to solve a problem and analyze whether the correct action is evident, noticeable, and correctly interpreted based on feedback. A related technique, the semiotic engineering method SigniFYIng Message, evaluates how well a design communicates its intended meaning through static, dynamic, and metalinguistic interaction signs. Pluralistic walk-throughs involve a collaborative team of users, developers, and researchers stepping through task scenarios together.

The second major category, analytics, involves automatically logging user interactions remotely, such as keypresses or page flow, allowing researchers to explore massive datasets visually.
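As a sketch of what such remote interaction logging involves, the following minimal Python example (all class and event names are hypothetical illustrations, not from the text) records timestamped events such as keypresses and page views so that page flow can later be reconstructed and analyzed:

```python
import json
import time

class InteractionLogger:
    """Hypothetical analytics logger: records timestamped
    interaction events (keypresses, page views) for later analysis."""

    def __init__(self):
        self.events = []

    def log(self, event_type, detail):
        # Each event carries a timestamp so page flow and dwell
        # times can be reconstructed during analysis.
        self.events.append({
            "time": time.time(),
            "type": event_type,
            "detail": detail,
        })

    def export(self):
        # In a real deployment the events would be sent to a
        # collection server; here we simply serialize to JSON.
        return json.dumps(self.events)

logger = InteractionLogger()
logger.log("page_view", "/home")
logger.log("keypress", "Enter")
logger.log("page_view", "/search")
```

In practice, logs like this are aggregated across thousands of users and explored with visualization tools rather than inspected event by event.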
Web analytics, valued especially by market-driven businesses, track metrics such as user traffic, page views, and bounce rate (the percentage of visitors who view only one page) to optimize site performance; similarly, learning analytics are employed in educational settings such as massive open online courses (MOOCs).

Closely related is A/B testing: a large-scale, controlled experiment in which hundreds or thousands of participants are randomly allocated to different conditions so that the impact of minor or major design differences (e.g., Design A versus Design B) on outcomes such as click rates can be compared statistically. An A/B test is often preceded by an A/A test to confirm that the random allocation produces comparable populations.

Finally, predictive models use mathematical formulas to forecast user efficiency. The most significant example in interaction design is Fitts' law, which models the time required to point at a target. It predicts that larger targets, and targets constrained by screen edges (which allow a "pinning" action), are reached more quickly, informing the optimal placement and sizing of physical and digital buttons.
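To make the bounce-rate metric and the statistical comparison behind A/B testing concrete, here is a short Python sketch. The text does not prescribe a particular test; a pooled two-proportion z-test is one common way to compare click rates, and all numbers below are illustrative, not from the text:

```python
from math import sqrt, erf

def bounce_rate(single_page_sessions, total_sessions):
    # Bounce rate: fraction of visits that view only one page.
    return single_page_sessions / total_sessions

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    # Pooled two-proportion z-test comparing the click rates of
    # Design A and Design B in a randomized A/B experiment.
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

print(bounce_rate(420, 1000))  # 0.42
z, p = two_proportion_z(120, 2000, 165, 2000)
print(round(z, 2), p < 0.05)
```

With these illustrative counts, Design B's higher click rate is statistically significant at the 5% level; an A/A test would run the same comparison on two samples drawn from the identical design and should show no significant difference.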
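Fitts' law itself can be stated compactly. The chapter summary does not give the equation, but the commonly cited Shannon formulation is MT = a + b · log2(D/W + 1), where D is the distance to the target, W its width, and a and b are empirically fitted constants (the values below are illustrative assumptions):

```python
from math import log2

def fitts_mt(distance, width, a=0.05, b=0.1):
    """Shannon formulation of Fitts' law:
    MT = a + b * log2(D / W + 1).
    a and b are empirically fitted constants; the defaults
    here are illustrative, not measured values."""
    return a + b * log2(distance / width + 1)

# Doubling the target width lowers the index of difficulty,
# so the larger target is predicted to be faster to reach:
small = fitts_mt(distance=400, width=20)
large = fitts_mt(distance=400, width=40)
print(small > large)  # True
```

This is why oversized buttons, and targets pinned against a screen edge (which behave as if effectively very wide, since the pointer cannot overshoot them), are predicted to be the fastest to acquire.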