
  System Requirements Analysis

  System Requirements Analysis Second Edition Jeffrey O. Grady JOG System Engineering San Diego, CA, USA

  AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD Elsevier

  32 Jamestown Road, London NW1 7BY 225 Wyman Street, Waltham, MA 02451, USA Copyright © 2014, 2006 Elsevier Inc. All rights reserved

No part of this publication may be reproduced or transmitted in any form or by any means,

electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangement with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website:

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

  Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

  Practitioners and researchers must always rely on their own experience and knowledge in

evaluating and using any information, methods, compounds, or experiments described herein.

In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

  British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

  Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress.

  ISBN: 978-0-12-417107-7 For information on all Elsevier publications visit our website at

This book has been manufactured using Print On Demand technology. Each copy is produced

to order and is limited to black ink. The online version of this book will show color figures

where appropriate.

  List of Illustrations

Figure 1.1 The ultimate system abstraction. (A) Traditional, (B) modern structured analysis, and (C) unified modeling language. 3
Figure 1.2 The fundamental system relation. 12
Figure 1.3 The enterprise and system life cycle. 16
Figure 1.4 Typical matrix organization. 27
Figure 1.5 System development transformations. 39
Figure 1.6 The analyst’s relationship to problem space. 41
Figure 1.7 Modeling set organization (A) first tier sets, (B) second tier sets, and (C) third tier sets. 44
Figure 1.8 Vision and need statement relationships. 47
Figure 1.9 Multiphase program structures. 50
Figure 1.10 Program phasing and generic process relationships. 51
Figure 1.11 Product and process system life cycle. 53
Figure 1.12 Grand systems definition. 57
Figure 1.13 Structured selection of the preferred concept. 59
Figure 1.14 Requirements quality assurance. 65
Figure 1.15 Synthesize system. 66
Figure 1.16 Preliminary design. 68
Figure 1.17 Detailed design. 69
Figure 1.18 DoD acquisition life-cycle model. 75
Figure 1.19 FAA acquisition life-cycle model. 76
Figure 1.20 NASA acquisition life-cycle model. 76
Figure 1.21 A successful prescription. 78
Figure 1.22 Development models. 82
Figure 1.23 Modeling history. 83
Figure 1.24 Systems definition. 88
Figure 1.25 Specification content linkage to modeling. 90
Figure 1.26 UADF work pattern. 91
Figure 2.1 Primitive requirement statement. 95
Figure 2.2 Typical CRL structure. 96
Figure 2.3 Parametric analysis example. 102
Figure 2.4 Three-dimensional traceability. 109
Figure 2.5 Single-tier traceability matrix. 111
Figure 2.6 Multiple document traceability matrix. 111
Figure 2.7 Requirements vertical traceability. 116
Figure 2.8 General interdocument traceability. 121
Figure 2.9 Program-wide requirements traceability. 122
Figure 2.10 Verification traceability string. 124
Figure 2.11 Manual RAS. 127
Figure 2.12 Lateral traceability using RAS complete. 130
Figure 2.13 Requirements design margin accounts. (A) Margins in an architecture context; (B) margin Venn diagram. 137
Figure 2.14 Freestyle is for experts and fools. 144
Figure 2.15 Three cloning methodologies. 145
Figure 3.1 Unprecedented system definition. 155
Figure 3.2 Requirements fusion and partition. 156
Figure 3.3 System context diagram. 162
Figure 3.4 General preferred concept selection. 163
Figure 3.5 Typical trade matrix. 165
Figure 3.6 Utility curve examples. 166
Figure 3.7 Precedented system definition. 169
Figure 3.8 System life-cycle model. 173
Figure 3.9 Grand system requirements analysis process function. 174
Figure 3.10 Three-faceted problem space. 177
Figure 3.11 Traditional entry facet and sequence. 178
Figure 3.12 The problem space entry perspective continuum. 180
Figure 3.13 SAR coordination with program processes and specification templates. 181
Figure 3.14 Traditional structured analysis. 186
Figure 3.15 Development planes. 189
Figure 3.16 Functional analysis. 190
Figure 3.17 Life-cycle master flow diagram. 196
Figure 3.18 FFD style sheet, blocks. 197
Figure 3.19 FFD style sheet, combinatorial symbols. 199
Figure 3.20 Alternative OR symbol usage example. 199
Figure 3.21 FFD levels. 200
Figure 3.22 Functional N-square diagram. 203
Figure 3.23 Recommended functional model specification structure. 206
Figure 3.24 Integration of user, acquisition agent, and contractor requirements work. 208
Figure 3.25 Generic master function flow diagram. 211
Figure 3.26 Diagramming comparison. 213
Figure 3.27 VPA and MRA process flow. 219
Figure 3.28 Maintenance analysis using process diagramming. 223
Figure 3.29 Functional and performance requirements analysis. 225
Figure 3.30 Team-oriented function allocation. 226
Figure 3.31 Typical RAS. 229
Figure 3.32 RAS—example 1. 234
Figure 3.33 RAS—example 2. 235
Figure 3.34 RAS—example 3. 236
Figure 3.35 RAS—example 4. 237
Figure 3.36 Specification template for performance requirements. 238
Figure 3.37 Functional analysis summary. 239
Figure 3.38 Product entity synthesis process. 241
Figure 3.39 Typical product entity diagram. 243
Figure 3.40 System hierarchy level names. 248
Figure 3.41 Centaur upper-stage product entity structure. 249
Figure 3.42 Warship packaging structure. 251
Figure 3.43 Existing product entity example. 252
Figure 3.44 Typical N-square diagram. 257
Figure 3.45 Extended RAS. 259
Figure 3.46 Compound N-square diagram example. 262
Figure 3.47 SBD symbols. 263
Figure 3.48 Universal ultimate SBD. 264
Figure 3.49 Typical system SBD. 265
Figure 3.50 Primitive SBD. 267
Figure 3.51 Finished SBD. 267
Figure 3.52 Triangular matrix SBD example. 268
Figure 3.53 Crossface SBD expansion. 269
Figure 3.54 Typical interface dictionary listing. 270
Figure 3.55 RAS capture of interface requirements. 271
Figure 3.56 Interface responsibility model. 274
Figure 3.57 Interface partitions. 275
Figure 3.58 Subsystem principal engineer views. 276
Figure 3.59 Cross-organizational interface through an SBD. 278
Figure 3.60 Functional hierarchy diagram. 280
Figure 3.61 Trigger construct. 281
Figure 3.62 Multiple exit construct. 282
Figure 3.63 Repetition constructs: (A) Iteration construct and (B) loop construct. 282
Figure 3.64 Kill branch construct. 283
Figure 3.65 Commodity flow in enhanced functional flow. 283
Figure 3.66 Behavioral diagramming. 284
Figure 3.67 Typical IDEF diagram. 285
Figure 3.68 FRAT sequencing. 286
Figure 3.69 State diagram. 288
Figure 3.70 Superconductor super collider state transition diagram. 290
Figure 3.71 Petri nets. 293
Figure 3.72 Example of a mathematically specified problem. 294
Figure 3.73 Scenario formed by icons. 296
Figure 3.74 System function depiction. 297
Figure 3.75 The QFD house of quality. 299
Figure 3.76 QFD augmented structured analysis. 301
Figure 3.77 Typical process flow diagram. (A) Computer graphics process diagram and (B) project planning software process diagram. 308
Figure 3.78 Process-product entity matrix. 310
Figure 3.79 A multiplicity of processes. 311
Figure 3.80 TPA and MRA process flow. 316
Figure 3.81 LSA process flow. 321
Figure 3.82 Typical system process diagram. 323
Figure 3.83 Postflight maintenance process flow. 324
Figure 3.84 LSA example. (A) First task analysis sheet, (B) second task analysis sheet, and (C) third task analysis sheet. 325
Figure 3.85 Operational sequence diagram. 328
Figure 3.86 System modification process. 329
Figure 3.87 Ultimate system diagram. 330
Figure 3.88 The system relationship. 330
Figure 3.89 Function sequence. 331
Figure 3.90 Function decomposition. 332
Figure 3.91 System life cycle. 333
Figure 3.92 Traditional RAS. 334
Figure 3.93 Function-product entity (architecture) matrix. 335
Figure 3.94 System product entity structure. 336
Figure 3.95 Traditional isolated N-square diagram. 338
Figure 3.96 Juxtaposition of RAS and N-square diagrams. 338
Figure 3.97 System environment. 339
Figure 3.98 System context diagram. 340
Figure 3.99 Environmental requirements RAS addition. 341
Figure 3.100 RAS-complete in graphical form. 344
Figure 3.101 RAS-complete in tabular form. 345
Figure 3.102 Verification extension. 345
Figure 3.103 Functional SAR structure. 354
Figure 3.104 Specification management matrix. 356
Figure 4.1 Flowchart example. (A) Level n flowchart and (B) level n+1 flowchart. 365
Figure 4.2 Higher-tier flowchart. 366
Figure 4.3 Context diagram. 367
Figure 4.4 Dataflow diagram. 368
Figure 4.5 Data dictionary fragment. 369
Figure 4.6 Processing specification (P-spec). 369
Figure 4.7 MIL-STD-498 SRS format. 370
Figure 4.8 DFD for discussion. 373
Figure 4.9 Entity relationship diagram. 375
Figure 4.10 IDEF-1X diagram. 377
Figure 4.11 IPO diagram. 380
Figure 4.12 SADT diagramming. 381
Figure 4.13 Class and object artifact according to Rumbaugh. 384
Figure 4.14 Class and object relationships. (A) Generalization, (B) aggregation, and (C) association. 384
Figure 4.15 State diagram notation. 385
Figure 4.16 Functional model notation example. 386
Figure 4.17 Hierarchical static structure relationships. 388
Figure 4.18 Use case diagram. 389
Figure 4.19 Statechart diagram. 390
Figure 4.20 Activity diagram. 391
Figure 4.21 Communication diagram. 392
Figure 4.22 Sequence diagram. 392
Figure 4.23 Component and deployment diagrams. 393
Figure 4.24 Framework product partitioning. 399
Figure 4.25 Logical data model example. 403
Figure 4.26 High-level operational concept graphic example. 404
Figure 4.27 Organizational relationships chart. 404
Figure 4.28 Activity model example. 405
Figure 4.29 Operational state transition description example. 406
Figure 4.30 Operational event/trace description example. 407
Figure 4.31 System interface description diagram example. 412
Figure 4.32 System–systems matrix. 413
Figure 4.33 System functionality description. 413
Figure 4.34 Operational activity to systems traceability matrix. 414
Figure 4.35 Systems evolution description. 415
Figure 5.1 Specialty engineering scoping matrix. 423
Figure 5.2 Design constraints identification form. 425
Figure 5.3 Typical reliability model. 435
Figure 5.4 Operator sequence diagram. 449
Figure 5.5 Safety hazard diagram. (A) Safety index and (B) program safety hazard index metric. 450
Figure 5.6 System relationship with its environment. 454
Figure 5.7 System environmental classes. 457
Figure 5.8 Time line diagram symbols and conventions. 461
Figure 5.9 Typical timeline diagram. 461
Figure 5.10 Time analysis sheet example. 465
Figure 5.11 System environmental requirements analysis. 469
Figure 5.12 Service use profile analysis. 471
Figure 5.13 AQM-91 firebolt operations and maintenance process diagram. 472
Figure 5.14 Three-dimensional end item environmental model. 473
Figure 5.15 Sample zoning diagram. 476
Figure 5.16 Environmental relationships. 480
Figure 5.17 Self-induced environmental stress. 481
Figure 5.18 EME class comparison. 483
Figure 5.19 A fragment of a RAS containing environmental requirements. 486
Figure 5.20 The system and its environment. 488
Figure 5.21 The ultimate system. 489
Figure 6.1 Federated database structures. 497
Figure 6.2 Program preparation process. 500
Figure 6.3 Modeling pathways. 501
Figure 6.4 Universal architecture Venn diagram. 502
Figure 6.5 Inter-model transfers in system definition. 503
Figure 6.6 Inter-model transfer possibilities. 506
Figure 6.7 Functional UADF inter-model transfers. 517
Figure 6.8 Software extended environmental categories. 520
Figure 6.9 PSARE analysis of the context bubble. 522
Figure 6.10 PSARE architecture template. 524
Figure 6.11 The UML–SysML modeling scheme. 528
Figure 6.12 UML/SysML inter-model transfer options. 529
Figure 6.13 Adjusted UPDM for specification modeling support. 534
Figure 6.14 Extended UPDM UADF modeling artifacts. 537
Figure 6.15 Function/MSA UADF inter-model transfer possibilities. 542
Figure 6.16 Model transfer matrix. 543
Figure 6.17 Example functional analysis data in the RAS. 544
Figure 6.18 Traceability evaluation. (A) Common RAS fragment, (B) cross-model traceability evaluation matrix, and (C) requirements traceability table fragment. 546
Figure 6.19 Requirements traceability across the gap. 546
Figure 6.20 Common product entity structure. 548
Figure 6.21 Functional relations to physical interfaces transform. 550
Figure 6.22 Equivalent schematic block diagram. 551
Figure 6.23 Universal specification structure. 553
Figure 6.24 Product entity structure crafted through observation. 554
Figure 7.1 Method of identifying two-part specifications. 580
Figure 7.2 Specification types of interest. 583
Figure 7.3 Work progression. 611
Figure 7.4 Modeling sequence. 612
Figure 7.5 Specification development timing. 614
Figure 7.6 Part 2 outline. 615
Figure 7.7 Typical summary status briefing viewgraph. 626
Figure 7.8 Applicable document assessment workflow. 627
Figure 7.9 ANSI/EIA 632 requirements work sequence. 636
Figure 8.1 Prepare program for structured analysis. 642
Figure 8.2 Coordinated specification responsibility and models. 644
Figure 8.3 Cost-sharing formula. 651
Figure 8.4 Typical specification tree. 656
Figure 8.5 Specification development environment. 658
Figure 8.6 Development schedule modularization. 660
Figure 8.7 The advancing wave. 661
Figure 8.8 Sample IPT meeting cycle. 665
Figure 8.9 DDP responsibility matrix. 670
Figure 8.10 Requirements validation is imbedded in the risk program. 672
Figure 8.11 Risk identification form. 674
Figure 8.12 Sample program risk list. 675
Figure 8.13 Risk index. 675
Figure 8.14 Program risk index profile. 676
Figure 8.15 Item requirements validation process. 680
Figure 8.16 Correlation of validation with the metrics and program risk universe. 682
Figure 8.17 Evaluate requirements activity. 683
Figure 8.18 Requirements validation intensity hierarchy. 685
Figure 8.19 Requirements Validation Tracking Matrix. 687
Figure 8.20 TPM parameter documentation: (A) technical performance measurement chart and (B) TPM action plan. 689
Figure 8.21 TBD/TBR closure matrix. 692
Figure 8.22 Database structure subset supporting TBD/TBR. 693
Figure 8.23 Parametric analysis of cost and reliability. 697
Figure 8.24 Validation traceability. 699
Figure 8.25 Synthesizability validation traceability record example. 700
Figure 8.26 Typical product entity block diagram. 707
Figure 8.27 Single-item view of the process. 718
Figure 8.28 Specialty engineering integration process. 723
Figure 8.29 Federated ICWT structure. 728
Figure 8.30 Interface integration categories. 729
Figure 8.31 The RAS-complete view of verification. 736
Figure 8.32 Verification matrices. 741
Figure 8.33 Typical graphical specification tree. 743
Figure 8.34 Specification development process. 745
Figure 8.35 Specification publishing. 746
Figure 8.36 Specification review and approval. 747
Figure 8.37 Specification change notice. 754
Figure 8.38 Interface definition. 760
Figure 8.39 ICD figure and text coordination. 761
Figure 8.40 Two-layer media-partitioned interface definition. 762
Figure 8.41 Hardware ICD outline. 765
Figure 8.42 Software ICD outline. 766
Figure 9.1 Evolution of system development. 770
Figure 9.2 Computer tool environment. 773
Figure 9.3 Verification tracking links. 779
Figure 9.4 Integrated specialty engineering tools. 781
Figure 10.1 Putting Humpty Dumpty back together again. (A) The designer’s knowledge base in earlier years and (B) the designer’s knowledge base today. 792
Figure 10.2 The movement toward model-driven development. 797
Figure 10.3 Will teams accept a new member? 799

  List of Tables

Table 1.1 Comparison of Models 87
Table 2.1 Rationale Traceability 115
Table 2.2 Specification Sections and Functions 119
Table 2.3 Specification Traceability to Responsibility and Method 129
Table 2.4 Sample Requirements in a Comprehensive RAS 131
Table 3.1 Four Three-Faceted Modeling Approaches 177
Table 3.2 Model Coverage References 184
Table 3.3 Principal Organization Responsibility Map 212
Table 3.4 Intermediate Allocation Character Codes 230
Table 3.5 Intermediate Allocation Class Codes 230
Table 3.6 State Dictionary 288
Table 3.7 State Transition Dictionary 289
Table 3.8 JOGSE Universal MID List 347
Table 5.1 Interface Requirements Specification Template 420
Table 5.2 RAS Specialty Engineering Entries Example 427
Table 5.3 System Reliability Data Table 436
Table 5.4 Reliability References 439
Table 5.5 Maintainability Parameters 442
Table 5.6 Corrective Maintenance Requirements List 443
Table 5.7 Maintainability References 446
Table 5.8 Specialty Engineering Specification Template 453
Table 5.9 Some Natural Environmental Parameter References 458
Table 5.10 Typical Serial Time Allocation Example 464
Table 5.11 Sample Environmental Subset Definition Table 473
Table 5.12 E3 Class Relationships 482
Table 5.13 System and Hardware Item Performance Specification Template for Environmental Requirements 487
Table 6.1 Universal Model Coupling 509
Table 6.2 JOGSE Universal MID List for Functional Analysis 514
Table 6.3 MSA-PSARE Modeling Artifact List 525
Table 6.4 System Specification Template Employing MSA-PSARE UADF 527
Table 6.5 JOGSE MID List for UML–SysML UADF 532
Table 6.6 Modeling Artifact Correlation 535
Table 6.7 Functional Relations To Physical Interfaces Transfer Map 549
Table 6.8 RAS Fragment Showing Interfaces 551
Table 7.1 Specification Types 560
Table 7.2 Specification References 567
Table 7.3 Specification Section Titles 573
Table 7.4 System Specification Template Employing Functional UADF 585
Table 7.5 System Specification Template Employing MSA-PSARE UADF 589
Table 7.6 System Specification Template Employing UML-SysML UADF 590
Table 7.7 Definitions 624
Table 7.8 Compliance Classes 626
Table 7.9 Principal Organizational Responsibilities 629
Table 8.1 SAR Structure for MSA-PSARE 646
Table 8.2 SAR Structure for UML-SysML 646
Table 8.3 Principal Engineer Levels 653
Table 8.4 Development Control Table 662
Table 8.5 DDP Data Destinations 669
Table 8.6 Sample Representations Identification Matrix 690
Table 8.7 Margin Accounts 715
Table 8.8 LCT Identification 729
Table 8.9 Typical Tabular Specification Tree 744
Table 8.10 Program Team Responsibilities 744
Table 9.1 Sample Database Structure and Data 776
Table 9.2 Field Types 777
Table 9.3 Type Codes 777
Table 9.4 Sample Traceability Data 778
Table 9.5 Management Data Fields 780

  List of Acronyms

ABD Architecture block diagram
ASCII American standard code for information interchange
ASIC Application specific integrated circuit
BIT Built-in test
CAD Computer-aided design
CAIV Cost as an independent variable
CAM Computer-aided manufacturing
CASE Computer-aided software engineering
CCB Configuration control board
CDR Critical design review
CDRL Contract data requirements list
CDROM Compact disk read-only memory
CEP Circular error of probability
CFD Control flow diagram
CI Configuration item
CM Configuration management
CMM Capability maturity model
CONOPS Concept of operations
COTS Commercial off the shelf
CPC Corrosion prevention and control
CPM Critical path method
CRL Concept requirements list
CSCI Computer software configuration item
CSCSC Cost schedule control system criteria
C4ISR Command, control, communications, computers, intelligence, surveillance, and reconnaissance
DBS Drawing breakdown structure
DCA Design constraints analysis
DCIF Design constraints identification form
DCSM Design constraints scoping matrix
DDP Development data package
DFD Data flow diagram
DID Data item description
DMA Database modeling analysis
DoD Department of Defense
DoDAF Department of Defense Architecture Framework
DPA Deployment planning analysis
DRA Deployment requirements analysis
DSMC Defense Systems Management College
DTC Design to cost
DTE Development test and evaluation
EADL Enterprise applicable document list
ECP Engineering change proposal
EDD Enterprise definition document
EFFBD Enhanced functional flow block diagram
EIA Electronics Industry Association
EID End item description
EIT Enterprise integration team
EMC Electromagnetic compatibility
EMD Engineering and manufacturing development
EME Electromagnetic environment
EMI Electromagnetic interference
EMP Electromagnetic pulse
ERB Engineering Review Board
ERD Entity relationship diagram
ESD Electrostatic discharge
EW Electromagnetic warfare
E3 Electromagnetic environmental effects
FA Functional analysis
FAA Federal Aviation Administration
FCA Functional configuration audit
FDA Food and Drug Administration
FFBD Functional flow block diagram
FFD Functional flow diagram
FMECA Failure modes effects and criticality analysis
FRACAS Failure reporting, analysis, and corrective action system
FRAT Functions requirements architecture test
FRB Failure Review Board
GFE Government furnished equipment
GFP Government furnished property
GIDEP Government Industry Data Exchange Program
GPS Global positioning system
GUI Graphical user interface
HAHST High-altitude high-speed target
HERF Hazards of electromagnetic radiation to fuel
HERO Hazards of electromagnetic radiation to ordnance
HERP Hazards of electromagnetic radiation to personnel
HOL Higher order language
HP Hatley Pirbhai
HPM High-power microwave
HW Hardware
ICAM Integrated computer-aided manufacturing
ICBM Intercontinental ballistic missile
ICD Interface control document
ICWG Interface control working group
ICWT Interface control working team
IDEF Integrated definition
IEEE Institute of Electrical and Electronics Engineers
IMP Integrated master plan
IMS Integrated master schedule
INCOSE International Council on Systems Engineering
IO Input–Output
IOC Initial operating capability
IPO Input process output
IPPT Integrated product and process team
IRD Interface requirements document
IRFNA Inhibited red fuming nitric acid
ITER International thermonuclear experimental reactor
JOGSE JOG System Engineering
LCC Life-cycle cost
LCT Lowest common team
LOX Liquid oxygen
LSA Logistics support analysis
MBS Manufacturing breakdown structure
MID Modeling ID
MoD Ministry of Defense
MoDAF Ministry of Defense Architecture Framework
MOE Measure of effectiveness
MRA Manufacturing requirements analysis
MSA Modern structured analysis
MTBF Mean time between failures
MTBM Mean time between maintenance
MTTR Mean time to repair
NASA National Aeronautics and Space Administration
NATO North Atlantic Treaty Organization
NCOSE National Council on Systems Engineering
OMG Object modeling group
OOA Object-oriented analysis
ORD Operational requirements document
OTE Operational test and evaluation
PCA Physical configuration audit
PCB Parts control board
PDR Preliminary design review
PERT Program evaluation and review technique
PIT Program integration team
PMP Parts materials and processes
PMP Parts materials and processes selection list
PPEM Process product entity matrix
PSARE Process for system architecture and requirements engineering
PSL Program specifications library
QFD Quality function deployment
RAS Requirements analysis sheet
RID Requirement ID
RS Raw score (in a trade study)
SAC Strategic air command
SADT Structured analysis definition tool
SAR System architecture report
SBD Schematic block diagram
SCA Sneak circuit analysis
SCD Specification change notice
SDR System design review
SEMP Systems engineering management plan
SEP System engineering plan
SESM Specialty engineering scoping matrix
SOW Statement of work
SRA System requirements analysis
SRD System requirements document
SRR System requirements review
SRS Software requirements specification
SW Software
SysML System modeling language
TAAF Test analyze and fix
TBD To be determined
TBR To be resolved
TLCSC Top-level computer software component
TPM Technical performance measurement
TQM Total quality management
TRA Teledyne Ryan Aeronautical
TSA Traditional structured analysis
TV Trade value (in a trade study)
UADF Universal architecture description framework
UML Unified modeling language
UPDM Unified process for DoDAF MoDAF
USAF United States Air Force
USA United States of America
USSR Union of Soviet Socialist Republics
UV Utility value (in a trade study)
VCRM Verification cross reference matrix
VDC Volts DC
VPA Verification planning analysis
WBS Work breakdown structure
WT Weight (in a trade study)

  Preface

  The serious study of how to determine the appropriate content of a specification is a seldom-appreciated pursuit. Those who have the responsibility to design a product would prefer a greater degree of freedom than the content of a specification permits. Many of those who manage the people who design a product would prefer to allocate all of the project funding and schedule to what they consider more productive labor. These attitudes, of course, doom a project to defeat, but they are hard to counter no matter how often design engineers and managers repeat them. A system engineer who has survived a few of these experiences over a long career may retire and forget the past, but we have an enduring obligation to work toward changing these attitudes while offering younger system engineers a pathway toward more certain success in requirements analysis and specification publishing.

  This is the third attempt I have made to capture the essence of an effective process for accomplishing requirements analysis to expose the proper content of a specification. The first attempt was a book published by McGraw-Hill in 1993 with the title “System Requirements Analysis.” It was based on work done at General Dynamics Space Systems Division, where I served as the Systems Development Department Manager. The method was dominated by functional analysis. Since that time I have been the owner of JOG System Engineering, a system engineering consulting and training enterprise, for which I have taught the subject of this book over 120 times at universities, in private and public courses for short course companies, and at companies through direct sale by my company. As a result I have been exposed to many theories about how to teach others to do this work and have tried to capture the good ideas and to ignore the bad ones as the book has progressed to another publisher and through two more recent editions.

  For a very long time I have been convinced that all requirements should be derived through modeling, but so many different models have been developed beyond the functional method I used exclusively as a younger man. In 2009 it finally dawned on me that all of the very smart people who have developed new models could be right. Each of these models seems to expose the same central ideas, but each has its own unique strengths. I was fortunate in having a paper titled “Universal Architecture Description Framework” published in “Systems Engineering, the Journal of the International Council on Systems Engineering,” Volume 12, Number 2, Summer 2009, not long after I had turned in the manuscript for the previous Elsevier book under this title. It has required the past several years to polish the edges of the ideas expressed in that paper through exposure to critical comment from students in tutorials and courses on this subject.

  The result is captured in this edition, integrated with the complete story. That story includes three immediately identifiable Universal Architecture Description Frameworks (UADFs): functional, MSA-PSARE, and UML-SysML, plus a fourth possible candidate in the form of an extended unified process for DoDAF MoDAF (UPDM). The book encourages an enterprise to select one of these UADFs, select a tool set compatible with it, educate personnel in the application of the model and tool set, and over time continue to improve through repetition of a common process on all programs.

  This book also improves upon an earlier proposition that all requirements derived through modeling be traceable to the modeling artifacts from which they were derived, which requires a unique means of identifying every artifact from which requirements could be derived, no matter the UADF selected. The suggestion is also advanced that a development program should capture the modeling artifacts in a fashion that allows them to be retained in a configuration-managed baseline in a set of paper documents or computer files.
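The traceability proposition above, where every requirement links back to the modeling artifact it was derived from, can be illustrated with a minimal sketch. The record fields, IDs, and sample entries here are hypothetical, not the book's actual RAS format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    rid: str   # requirement ID (RID)
    mid: str   # modeling artifact ID (MID) the requirement was derived from
    text: str

# A tiny configuration-managed baseline of modeling artifacts (hypothetical MIDs).
artifacts = {
    "F47": "Function 47: Transfer cargo",
    "I12": "Interface 12: Power bus",
}

requirements = [
    Requirement("R101", "F47", "The system shall transfer cargo at no less than 2 t/h."),
    Requirement("R102", "I12", "The power bus shall supply 28 VDC."),
]

def untraced(reqs, arts):
    """Return the IDs of requirements whose MID is not in the artifact baseline."""
    return [r.rid for r in reqs if r.mid not in arts]

print(untraced(requirements, artifacts))  # an empty list means full traceability
```

A check like this is the programmatic analog of auditing a RAS: any requirement that cannot name its source artifact is flagged for rework.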

  Each of the UADFs offered is formed by a set of problem-space models and solution-space models. The former deal with the problem one is trying to solve, expressed in functional or behavioral descriptions, while the latter deal with the physical perspective after the product entities and interfaces have been identified through application of the problem-space models. The solution-space models cover interface, specialty engineering, and environmental requirements, while problem-space models are employed in identifying product entities, interfaces, and performance requirements related to them.

  Recent consulting experiences have helped me clarify exactly why programs have so much difficulty accomplishing verification work affordably: the problem occurs at the handoff between requirements work and verification work, related to the assignment of principal engineers for each verification task. The story on solving this problem begins in this book and is carried forth in a new edition of my “System Verification” book.

  1 Introduction

  The book and its first chapter begin with an introduction to the work this volume focuses on. Many of the subject matter words the reader will find throughout the book are defined and explained. One of the key ideas that supports the use of the word “affordability” is that the book encourages the use of models and shows ways that all requirements appearing in all specifications can be derived from models; the idea of modeling is introduced in the second chapter. Requirements work is what a program begins with, so it is appropriate to discuss the management infrastructure within which it and later program phases will occur. Some variations on a central theme are also discussed. Closing out this chapter is an insight into a proposed prescription for achieving an effective system development process that is affordable to apply while producing good program results.

  The overall objective of the book is to show system engineers how to accomplish the early program work related to developing the program-peculiar specifications needed to define the problem that must be solved through design, procurement, and manufacture. A companion volume titled “System Verification” completes the story by covering how a program may determine to what extent the manufactured product complies with the content of the specifications.

1.1 Introduction to Systems Requirements

1.1.1 What Is a System?

  This book deals with man’s efforts to develop systems. Many definitions of the word system have been offered, but in the broadest and simplest sense, any two or more entities interacting cooperatively to achieve some common goal, function, or purpose constitutes a system. Thus systems are composed of entities that interact through relationships and with their environment. The content of this book applies most appropriately to man-made systems, which are organized collections of entities that interact synergistically via the interfaces connecting them to achieve preplanned goals in accordance with a predetermined plan or process. Generally, in these kinds of systems, no subset of the system resources operating independently can totally achieve the same system goal or purpose, because we tend to create them in a least-cost configuration. The key to system existence, and to the superior performance of a system over an unorganized collection of independent objects, is the purposeful cooperative interaction that occurs among the multiple system resources via the interfaces that connect them. So any one system must consist of entities interconnected by relationships, most often referred to as interfaces. A system also interacts with its environment through interfaces.
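The definition above, in which a system is two or more entities cooperating through interfaces toward a common goal, can be sketched as a simple data structure. This is purely illustrative: the class names (`Entity`, `Interface`, `System`) and the elevator example are assumptions of this sketch, not taken from the book.

```python
# A minimal sketch of "system = entities + interfaces + goal".
# All names here are hypothetical, chosen only to illustrate the definition.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    name: str

@dataclass(frozen=True)
class Interface:
    # A relationship connecting two entities, or an entity and the environment.
    source: str
    target: str

@dataclass
class System:
    goal: str
    entities: list = field(default_factory=list)
    interfaces: list = field(default_factory=list)

    def is_system(self) -> bool:
        # Per the definition: two or more entities interacting toward a goal.
        return len(self.entities) >= 2 and len(self.interfaces) >= 1

# Illustrative example: a small elevator "system".
s = System(goal="move people between floors")
s.entities += [Entity("car"), Entity("hoist"), Entity("controller")]
s.interfaces += [
    Interface("controller", "hoist"),
    Interface("hoist", "car"),
    Interface("ENVIRONMENT", "controller"),  # environment interface
]
print(s.is_system())  # True
```

Note that a lone entity with no interfaces fails the test, matching the point that an unorganized collection of independent objects is not a system.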


  There exist natural systems in our universe, such as the ultimate system, the universe itself, the climatic system on Earth, and the human circulatory system. These systems evolved through natural processes not requiring any human engineering activity, and a good thing, too, because there were no humans when many of these systems evolved. Some readers may prefer to recognize that natural systems were actually put in place by God. The author neither poses a counterargument to that premise nor recognizes that there would be any significant difference in the content of this book if that were the case. Natural systems can be characterized using techniques described in this book, but we must recognize one fundamental difference between natural and man-made systems. Natural systems are not designed by people; they must simply be understood and described by people, and this is perhaps more of a job for scientists than for engineers.

  This situation is changing, however. We are manipulating natural systems to an increasingly powerful extent, bordering on redesign on a small scale. As a result, there will likely be an increasing application in the future for organized system engineering methods in natural system fields like biotechnology, agronomy, weather, and aquifers. The requirements impact analysis approach discussed in this book in association with engineering change proposals and environmental requirements analysis may apply to these situations more than we would care to think about. A TRW (Thompson Ramo Wooldridge, after the three gentlemen who formed the company) systems engineer working on the Yucca Mountain nuclear waste storage site once told the author that the developer first had to identify the degree of isolation between the stored material and the local aquifer offered by government furnished property (GFP) before determining what man-made features were required. The author asked how GFP was involved, and the engineer replied, “No, God-furnished property.”

  A man-made system is developed to achieve a preplanned function, goal, or purpose. These systems require engineering development work to convert the preplanned function, goal, or purpose into a practical solution composed of physical hardware elements that can be manufactured and assembled from available materials. Most often, these systems will also include computer software and, commonly, human operators who interact with the hardware and software to guide the system toward its function, goal, or purpose. Finally, these systems will include relationships implemented through interfaces between the product entities composing the system.

Figure 1.1 shows three ways systems are illustrated at the very top level and related to their environment using models that we will discuss in this book.

  

  Figure 1.1A illustrates the ultimate abstraction for a system in the form of a single block that represents the complete system. It interacts with a system environment and internally within itself to achieve the system function, goal, or purpose. To simplify this phrase, let us agree simply to refer to the function of a system as being its goal or purpose. The system environment consists of everything in the Universe that influences the system, less the system itself. The system function, stated in a customer need statement, is the requirement that is assigned to the system block in this diagram. It is the ultimate requirement for the system, which can be mindlessly brought into imaginary existence by the allocation of the customer need to an entity.

  Figure 1.1 The ultimate system abstraction. (A) traditional, showing the system and its environment related by interfaces I1, I2, and I3; (B) modern structured analysis, showing the system with its terminators; and (C) unified modeling language, showing the system with actors and use cases.

  Figure 1.1A is where the decomposition process starts, with the ultimate requirement. The two fundamental blocks shown are interrelated by three different kinds of interfaces, identified as I1, I2, and I3, that will be explained in a later section.

  Using a functional modeling process we can progressively decompose, or partition, the functionality represented by the need into lower-tier functionality as a means of gaining insight into what the system and its parts must accomplish and how well it and its many elements must perform. The decomposition process stops when we have identified all of the system resources that will yield to detailed design by a single design agent or team in the producing enterprise, or that can be procured from a single supplier. In the late 1990s, the author, trying to impress a manager on the International Thermonuclear Experimental Reactor (ITER) program, then based in La Jolla, California, and thus encourage him to purchase a system engineering training program, showed the manager his schematic block diagram (SBD) of the universe. The author, reasoning that the universe included everything there was, illustrated only one block on the diagram containing everything rather than the two-block arrangement shown in Figure 1.1A. The diagram in question will appear in the section dealing with environmental requirements analysis. The manager looked at the diagram briefly and said, “You may have forgotten a few wormholes.” A contract was not forthcoming, so that may not have been a good marketing technique. Chapter 3 of this book covers the functional analysis (FA) approach depicted in Figure 1.1A in considerable detail. Performance requirements are derived from functions and allocated to product entities, thereby identifying lower-tier system entities.
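The stopping rule described above — decompose functionality until each function can be handled by a single design agent or procured from a single supplier — can be sketched as a short recursive routine. This is a hedged illustration only: the function names, the decomposition tree, and the `single_agent` predicate are hypothetical, not drawn from the book.

```python
# Hypothetical sketch of functional decomposition with a stopping rule:
# partition a function into lower-tier functions until each leaf can be
# assigned to a single design agent or supplier.

def decompose(function, subfunctions, single_agent):
    """Return the leaf functions at which decomposition stops."""
    if single_agent(function):
        return [function]
    leaves = []
    for sub in subfunctions.get(function, []):
        leaves += decompose(sub, subfunctions, single_agent)
    return leaves

# Illustrative decomposition tree (invented for this sketch).
tree = {
    "transport cargo": ["load cargo", "move cargo"],
    "move cargo": ["propel vehicle", "navigate route"],
}
# Functions a single design agent could handle (also invented).
atomic = {"load cargo", "propel vehicle", "navigate route"}

leaves = decompose("transport cargo", tree, lambda f: f in atomic)
print(leaves)  # ['load cargo', 'propel vehicle', 'navigate route']

# Performance requirements derived from the leaf functions would then be
# allocated to product entities, identifying lower-tier system entities.
allocation = {f: f"entity_{i}" for i, f in enumerate(leaves)}
```

The recursion mirrors the text: a function that a single agent can design becomes a leaf, and everything else is partitioned into lower-tier functions.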

  

  Figure 1.1B offers the ultimate system view from the perspective of an adherent of modern structured analysis (MSA), used for many years, and to this day, by some to develop computer software. The system, whatever it may become during the development process, is shown interacting with several (three in this case) external entities called terminators. This is a very useful diagram no matter what modeling approach one might employ. A context diagram can also be used to identify all of the parties interested in the development effort, often called stakeholders. In the case


where we extend MSA to the process for system architecture and requirements engineering (PSARE), initially called Hatley–Pirbhai modeling, in Chapter 4, we will find that the terminators can be information, energy, or material, because PSARE was developed so as not to be limited to software development as MSA was intended to be.

  Figure 1.1C illustrates a system from the perspective of an adherent of system modeling language (SysML) or unified modeling language (UML). Actors interact with the system through what are called use cases, achieving specific benefits in so doing. Modeling artifacts are then employed to describe these benefits, from which requirements are derived. Chapter 4 also covers this modeling approach.

  This book covers three comprehensive modeling approaches coordinated with the three top-end views shown in Figure 1.1. A fourth modeling approach, also addressed in the book, involves over 50 modeling artifacts, and the author cannot think of a simple view of the overall model. The U.S. Department of Defense (DoD) has shown great interest in a development model named DoDAF, for DoD Architecture Framework. This interest has been extended through cooperation with the UK Ministry of Defence (MOD) under the acronym MODAF. DoDAF was initially developed to model large-scale information systems and was not initially appropriate as a comprehensive general system modeling approach like the other three brought together in Chapter 6. Chapter 4 provides an overview of this process in the interest of completeness and recognizes the continuing work being done since 2004 by the Unified Profile for DoDAF/MODAF (UPDM) RFC Group, composed of several companies in cooperation with the U.S. DoD and UK MOD, to advance the use of the modeling artifacts of UML-SysML rather than whatever modeling artifacts the customer, team, or program prefers. In Section 6.5, we also extend the model so that it may be used as a single model to define the system architecture and support derivation of all requirements that will populate specifications no matter how the system is to be implemented in terms of hardware, software, and people doing things.

  The author maintains that all requirements appearing in specifications should be derived through modeling. The author’s set of comprehensive modeling approaches started with three useful models for doing this work, covered in Chapters 3 through 5, with the beginnings for them noted in Figure 1.1. With the emergence of UPDM the author has grudgingly added it to form a fourth comprehensive modeling approach, from which set an enterprise may select a single model and encourage its employees to become proficient in using that one model on all programs.

  The book focuses the modeling artifacts thus described on four specific universal architecture description frameworks (UADFs) in Chapter 6 and offers encouragement that a systems development enterprise select one, coordinate it with an effective tool set, qualify its employees to use the selected modeling methods and tools effectively, and manage the whole well. In a nutshell, this is the beginning of the prescription for achieving affordability and success in requirements and verification. Chapter 6 offers an extended version of UPDM as a fourth UADF such that an enterprise required by contract to employ DoDAF may use the UPDM version extended to include modeling elements that support environmental and specialty engineering models.

  The modeling capability of the problem space components of these four UADF