Chapter 21: Evaluating Architectures – Tradeoffs & Risks
ⓘ This audio and summary are simplified educational interpretations and are not a substitute for the original text.
This summary of software architecture evaluation, as detailed in Chapter 21 of Software Architecture in Practice, establishes that reviewing an architecture is essential for predicting quality attributes and reducing project risk before costly system construction begins. The cost of an evaluation must be less than the value it provides; it acts fundamentally as an insurance policy against potentially catastrophic failure, especially when the system costs millions of dollars or has significant safety-critical implications.

Regardless of the methodology or timing of the review, all evaluations are driven by architecturally significant requirements (ASRs), typically expressed as quality attribute scenarios. The core evaluation activities are for the reviewers first to ensure they understand the architecture, then to determine the drivers, check whether scenarios are satisfied through walkthroughs, and systematically capture exposed vulnerabilities as potential problems or risks. Analysis effort is guided by the importance of the architectural decision being examined and by the number of alternatives under consideration.

Evaluation can be carried out by the architect internally as part of the design process, through formal peer review, or by unbiased outsiders who may possess specialized knowledge and often influence management more effectively. One prominent formal evaluation process is the Architecture Tradeoff Analysis Method (ATAM), designed for large-scale systems and for external evaluation teams unfamiliar with the project. The ATAM requires the mutual cooperation of three groups: the external evaluation team, the project decision makers (including the architect, who must participate willingly), and various stakeholders. The ATAM consists of four phases, with the core analysis taking place in Phases 1 and 2, which together comprise nine steps.
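The quality attribute scenarios that drive an evaluation follow a standard six-part form (source, stimulus, artifact, environment, response, response measure). A minimal sketch of that form as a data structure, with hypothetical example values for an availability walkthrough:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class QualityAttributeScenario:
    """The six parts of a quality attribute scenario, the usual form of an ASR."""
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition arriving at the system
    artifact: str          # the part of the system being stimulated
    environment: str       # the operating conditions when the stimulus arrives
    response: str          # the activity the system undertakes
    response_measure: str  # how the response is judged satisfied or not


# Hypothetical availability scenario a reviewer might walk through
scenario = QualityAttributeScenario(
    source="internal monitor",
    stimulus="server process crashes",
    artifact="order-processing service",
    environment="normal operation",
    response="fail over to a warm spare",
    response_measure="service restored within 30 seconds",
)
```

During a walkthrough, the reviewers trace how the architecture achieves each scenario's response measure; a scenario the design cannot satisfy is recorded as a risk.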
Key steps involve presenting the ATAM process itself, detailing the business goals, presenting the architecture using technical views, and identifying the architectural approaches employed. Participants then collaboratively generate a quality attribute utility tree, which prioritizes quality goals by business importance and technical difficulty. The analysis steps involve checking high-ranked scenarios against the architecture, probing for design rationale, and documenting architectural findings such as risks, non-risks, sensitivity points, and tradeoff points. In Phase 2, stakeholders brainstorm additional scenarios, which are likewise prioritized and analyzed. The final ATAM output consolidates the risks into risk themes, linking systemic deficiencies directly back to the business goals they threaten.

A less formal, internal peer-review method is the Lightweight Architecture Evaluation (LAE), which is inexpensive, low-ceremony, and suitable for regular quality-assurance sanity checks; it shortens or omits many ATAM steps. The lightest form of evaluation uses tactics-based questionnaires to focus quickly on a single quality attribute and uncover buried risks related to specific design decisions.
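The utility-tree prioritization described above can be illustrated with a small sketch. The High/Medium/Low labels for business importance and technical difficulty follow ATAM convention; the example scenarios and the ordering function are illustrative assumptions, with (H, H) leaves analyzed first:

```python
# Rank order for the ATAM utility-tree labels: High before Medium before Low.
RANK = {"H": 0, "M": 1, "L": 2}


def prioritize(leaves):
    """Sort (scenario, importance, difficulty) leaves into analysis order.

    Leaves rated High on both business importance and technical
    difficulty sort first, since they deserve the most analysis effort.
    """
    return sorted(leaves, key=lambda leaf: (RANK[leaf[1]], RANK[leaf[2]]))


# Hypothetical utility-tree leaves: (scenario, importance, difficulty)
leaves = [
    ("Resume service within 30 s of a crash", "H", "M"),
    ("Deliver video frames in real time", "H", "H"),
    ("Add a new payment provider in under a month", "M", "L"),
]

for name, importance, difficulty in prioritize(leaves):
    print(f"({importance},{difficulty}) {name}")
```

Here the real-time video leaf, rated (H, H), is examined before the others, matching the ATAM practice of spending analysis effort where business importance and technical difficulty are both highest.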