
  

DIGITAL LOGIC TESTING

AND SIMULATION

  

DIGITAL LOGIC TESTING AND SIMULATION, SECOND EDITION

Alexander Miczo

Copyright © 2003 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail: [email protected].

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print,

however, may not be available in electronic format.

Library of Congress Cataloging-in-Publication Data:

Miczo, Alexander.
  Digital logic testing and simulation / Alexander Miczo.—2nd ed.
    p. cm.
  Rev. ed. of: Digital logic testing and simulation. c1986.
  Includes bibliographical references and index.
  ISBN 0-471-43995-9 (cloth)
  1. Digital electronics—Testing. I. Miczo, Alexander. Digital logic testing and simulation. II. Title.

TK7868.D5M49 2003
621.3815′48—dc21
2003041100

Printed in the United States of America

CONTENTS

Preface xvii

1 Introduction 1
   1.1 Introduction 1
   1.2 Quality 2
   1.3 The Test 2
   1.4 The Design Process 6
   1.5 Design Automation 9
   1.6 Estimating Yield 11
   1.7 Measuring Test Effectiveness 14
   1.8 The Economics of Test 20
   1.9 Case Studies 23
      1.9.1 The Effectiveness of Fault Simulation 23
      1.9.2 Evaluating Test Decisions 24
   1.10 Summary 26
   Problems 29
   References 30

2 Simulation 33
   2.1 Introduction 33
   2.2 Background 33
   2.3 The Simulation Hierarchy 36
   2.4 The Logic Symbols 37
   2.5 Sequential Circuit Behavior 39
   2.6 The Compiled Simulator 44
      2.6.2 Sequential Circuit Simulation 48
      2.6.3 Timing Considerations 50
      2.6.4 Hazards 50
      2.6.5 Hazard Detection 52
   2.7 Event-Driven Simulation 54
      2.7.1 Zero-Delay Simulation 56
      2.7.2 Unit-Delay Simulation 58
      2.7.3 Nominal-Delay Simulation 59
   2.8 Multiple-Valued Simulation 61
   2.9 Implementing the Nominal-Delay Simulator 64
      2.9.1 The Scheduler 64
      2.9.2 The Descriptor Cell 67
      2.9.3 Evaluation Techniques 70
      2.9.4 Race Detection in Nominal-Delay Simulation 71
      2.9.5 Min–Max Timing 72
   2.10 Switch-Level Simulation 74
   2.11 Binary Decision Diagrams 86
      2.11.1 Introduction 86
      2.11.2 The Reduce Operation 91
      2.11.3 The Apply Operation 96
   2.12 Cycle Simulation 101
   2.13 Timing Verification 106
      2.13.1 Path Enumeration 107
      2.13.2 Block-Oriented Analysis 108
   2.14 Summary 110
   Problems 111
   References 116

3 Fault Simulation 119
   3.1 Introduction 119
   3.2 Approaches to Testing 120
   3.3 Analysis of a Faulted Circuit 122
      3.3.1 Analysis at the Component Level 122
      3.3.2 Gate-Level Symbols 124
   3.4 The Stuck-At Fault Model 125
      3.4.1 The AND Gate Fault Model 127
      3.4.2 The OR Gate Fault Model 128
      3.4.3 The Inverter Fault Model 128
      3.4.4 The Tri-State Fault Model 128
      3.4.5 Fault Equivalence and Dominance 129
   3.5 The Fault Simulator: An Overview 131
   3.6 Parallel Fault Processing 134
      3.6.1 Parallel Fault Simulation 134
      3.6.2 Performance Enhancements 136
      3.6.3 Parallel Pattern Single Fault Propagation 137
   3.7 Concurrent Fault Simulation 139
      3.7.1 An Example of Concurrent Simulation 139
      3.7.2 The Concurrent Fault Simulation Algorithm 141
      3.7.3 Concurrent Fault Simulation: Further Considerations 146
   3.8 Delay Fault Simulation 147
   3.9 Differential Fault Simulation 149
   3.10 Deductive Fault Simulation 151
   3.11 Statistical Fault Analysis 152
   3.12 Fault Simulation Performance 155
   3.13 Summary 157
   Problems 159
   References 162

4 Automatic Test Pattern Generation 165
   4.1 Introduction 165
   4.2 The Sensitized Path 165
      4.2.1 The Sensitized Path: An Example 166
      4.2.2 Analysis of the Sensitized Path Method 168
   4.3 The D-Algorithm 170
      4.3.1 The D-Algorithm: An Analysis 171
      4.3.2 The Primitive D-Cubes of Failure 174
      4.3.3 Propagation D-Cubes 177
      4.3.4 Justification and Implication 179
      4.3.5 The D-Intersection 180
   4.4 Testdetect 182
   4.5 The Subscripted D-Algorithm 184
   4.6 PODEM 188
   4.7 FAN 193
   4.8 Socrates 202
   4.9 The Critical Path 205
   4.10 Critical Path Tracing 208
   4.11 Boolean Differences 210
   4.12 Boolean Satisfiability 216
   4.13 Using BDDs for ATPG 219
      4.13.1 The BDD XOR Operation 219
      4.13.2 Faulting the BDD Graph 220
   4.14 Summary 224
   Problems 226
   References 230

5 Sequential Logic Test 233
   5.1 Introduction 233
   5.2 Test Problems Caused by Sequential Logic 233
      5.2.1 The Effects of Memory 234
      5.2.2 Timing Considerations 237
   5.3 Sequential Test Methods 239
      5.3.1 Seshu’s Heuristics 239
      5.3.2 The Iterative Test Generator 241
      5.3.3 The 9-Value ITG 246
      5.3.4 The Critical Path 249
      5.3.5 Extended Backtrace 250
      5.3.6 Sequential Path Sensitization 252
   5.4 Sequential Logic Test Complexity 259
      5.4.1 Acyclic Sequential Circuits 260
      5.4.2 The Balanced Acyclic Circuit 262
      5.4.3 The General Sequential Circuit 264
   5.5 Experiments with Sequential Machines 266
   5.7 Summary 277
   Problems 278
   References 280

6 Automatic Test Equipment 283
   6.1 Introduction 283
   6.2 Basic Tester Architectures 284
      6.2.1 The Static Tester 284
      6.2.2 The Dynamic Tester 286
   6.3 The Standard Test Interface Language 288
   6.4 Using the Tester 293
   6.5 The Electron Beam Probe 299
   6.6 Manufacturing Test 301
   6.7 Developing a Board Test Strategy 304
   6.8 The In-Circuit Tester 307
   6.9 The PCB Tester 310
      6.9.1 Emulating the Tester 311
      6.9.2 The Reference Tester 312
      6.9.3 Diagnostic Tools 313
   6.10 The Test Plan 315
   6.11 Visual Inspection 316
   6.12 Test Cost 319
   6.13 Summary 319
   Problems 320
   References 321

7 Developing a Test Strategy 323
   7.1 Introduction 323
   7.2 The Test Triad 323
   7.3 Overview of the Design and Test Process 325
   7.4 A Testbench 327
      7.4.1 The Circuit Description 327
   7.5 Fault Modeling 331
      7.5.1 Checkpoint Faults 331
      7.5.2 Delay Faults 333
      7.5.3 Redundant Faults 334
      7.5.4 Bridging Faults 335
      7.5.5 Manufacturing Faults 337
   7.6 Technology-Related Faults 337
      7.6.1 MOS 338
      7.6.2 CMOS 338
      7.6.3 Fault Coverage Results in Equivalent Circuits 340
   7.7 The Fault Simulator 341
      7.7.1 Random Patterns 342
      7.7.2 Seed Vectors 343
      7.7.3 Fault Sampling 346
      7.7.4 Fault-List Partitioning 347
      7.7.5 Distributed Fault Simulation 348
      7.7.6 Iterative Fault Simulation 348
      7.7.7 Incremental Fault Simulation 349
      7.7.8 Circuit Initialization 349
      7.7.9 Fault Coverage Profiles 350
      7.7.10 Fault Dictionaries 351
      7.7.11 Fault Dropping 352
   7.8 Behavioral Fault Modeling 353
      7.8.1 Behavioral MUX 354
      7.8.2 Algorithmic Test Development 356
      7.8.3 Behavioral Fault Simulation 361
      7.8.4 Toggle Coverage 364
      7.8.5 Code Coverage 365
   7.9 The Test Pattern Generator 368
      7.9.1 Trapped Faults 368
      7.9.2 SOFTG 369
      7.9.3 The Imply Operation 369
      7.9.4 Comprehension Versus Resolution 371
      7.9.5 Probable Detected Faults 372
      7.9.6 Test Pattern Compaction 372
      7.9.7 Test Counting 374
   7.10 Miscellaneous Considerations 378
      7.10.2 ATPG User Controls 380
      7.10.3 Fault-List Management 381
   7.11 Summary 382
   Problems 383
   References 385

8 Design-For-Testability 387
   8.1 Introduction 387
   8.2 Ad Hoc Design-for-Testability Rules 388
      8.2.1 Some Testability Problems 389
      8.2.2 Some Ad Hoc Solutions 393
   8.3 Controllability/Observability Analysis 396
      8.3.1 SCOAP 396
      8.3.2 Other Testability Measures 403
      8.3.3 Test Measure Effectiveness 405
      8.3.4 Using the Test Pattern Generator 406
   8.4 The Scan Path 407
      8.4.1 Overview 407
      8.4.2 Types of Scan-Flops 410
      8.4.3 Level-Sensitive Scan Design 412
      8.4.4 Scan Compliance 416
      8.4.5 Scan-Testing Circuits with Memory 418
      8.4.6 Implementing Scan Path 420
   8.5 The Partial Scan Path 426
   8.6 Scan Solutions for PCBs 432
      8.6.1 The NAND Tree 433
      8.6.2 The 1149.1 Boundary Scan 434
   8.7 Summary 443
   Problems 444
   References 449

9 Built-In Self-Test 451
   9.1 Introduction 451
   9.2 Benefits of BIST 452
   9.3 The Basic Self-Test Paradigm 454
      9.3.1 A Mathematical Basis for Self-Test 455
      9.3.2 Implementing the LFSR 459
      9.3.3 The Multiple Input Signature Register (MISR) 460
      9.3.4 The BILBO 463
   9.4 Random Pattern Effectiveness 464
      9.4.1 Determining Coverage 464
      9.4.2 Circuit Partitioning 465
      9.4.3 Weighted Random Patterns 467
      9.4.4 Aliasing 470
      9.4.5 Some BIST Results 471
   9.5 Self-Test Applications 471
      9.5.1 Microprocessor-Based Signature Analysis 471
      9.5.2 Self-Test Using MISR/Parallel SRSG (STUMPS) 474
      9.5.3 STUMPS in the ES/9000 System 477
      9.5.4 STUMPS in the S/390 Microprocessor 478
      9.5.5 The Macrolan Chip 480
      9.5.6 Partial BIST 482
   9.6 Remote Test 484
      9.6.1 The Test Controller 484
      9.6.2 The Desktop Management Interface 487
   9.7 Black-Box Testing 488
      9.7.1 The Ordering Relation 489
      9.7.2 The Microprocessor Matrix 493
      9.7.3 Graph Methods 494
   9.8 Fault Tolerance 495
      9.8.1 Performance Monitoring 496
      9.8.2 Self-Checking Circuits 498
      9.8.3 Burst Error Correction 499
      9.8.4 Triple Modular Redundancy 503
      9.8.5 Software Implemented Fault Tolerance 505
   9.9 Summary 505
   Problems 507
   References 510

10 Memory Test 513
   10.1 Introduction 513
   10.2 Semiconductor Memory Organization 514
   10.3 Memory Test Patterns 517
   10.4 Memory Faults 521
   10.5 Memory Self-Test 524
      10.5.1 A GALPAT Implementation 525
      10.5.2 The 9N and 13N Algorithms 529
      10.5.3 Self-Test for BIST 531
      10.5.4 Parallel Test for Memories 531
      10.5.5 Weak Read–Write 533
   10.6 Repairable Memories 535
   10.7 Error Correcting Codes 537
      10.7.1 Vector Spaces 538
      10.7.2 The Hamming Codes 540
      10.7.3 ECC Implementation 542
      10.7.4 Reliability Improvements 543
      10.7.5 Iterated Codes 545
   10.8 Summary 546
   Problems 547
   References 549

11 IDDQ 551
   11.1 Introduction 551
   11.2 Background 551
   11.3 Selecting Vectors 553
      11.3.1 Toggle Count 553
      11.3.2 The Quietest Method 554
   11.4 Choosing a Threshold 556
   11.5 Measuring Current 557
   11.6 IDDQ Versus Burn-In 559
   11.7 Problems with Large Circuits 562
   11.8 Summary 564
   Problems 565

12 Behavioral Test and Verification 567
   12.1 Introduction 567
   12.2 Design Verification: An Overview 568
   12.3 Simulation 570
      12.3.1 Performance Enhancements 570
      12.3.2 HDL Extensions and C++ 572
      12.3.3 Co-design and Co-verification 573
   12.4 Measuring Simulation Thoroughness 575
      12.4.1 Coverage Evaluation 575
      12.4.2 Design Error Modeling 578
   12.5 Random Stimulus Generation 581
   12.6 The Behavioral ATPG 587
      12.6.1 Overview 587
      12.6.2 The RTL Circuit Image 588
      12.6.3 The Library of Parameterized Modules 589
      12.6.4 Some Basic Behavioral Processing Algorithms 593
   12.7 The Sequential Circuit Test Search System (SCIRTSS) 597
      12.7.1 A State Traversal Problem 597
      12.7.2 The Petri Net 602
   12.8 The Test Design Expert 607
      12.8.1 An Overview of TDX 607
      12.8.2 DEPOT 614
      12.8.3 The Fault Simulator 616
      12.8.4 Building Goal Trees 617
      12.8.5 Sequential Conflicts in Goal Trees 618
      12.8.6 Goal Processing for a Microprocessor 620
      12.8.7 Bidirectional Goal Search 624
      12.8.8 Constraint Propagation 625
      12.8.9 Pitfalls When Building Goal Trees 626
      12.8.10 MaxGoal Versus MinGoal 627
      12.8.11 Functional Walk 629
      12.8.12 Learn Mode 630
      12.8.13 DFT in TDX 633
   12.9 Design Verification 635
      12.9.1 Formal Verification 636
      12.9.2 Theorem Proving 636
      12.9.3 Equivalence Checking 638
      12.9.4 Model Checking 640
      12.9.5 Symbolic Simulation 648
   12.10 Summary 650
   Problems 652
   References 653

Index 657

  PREFACE

  About one and a half decades ago the state of the art in DRAMs was 64K bytes, a typical personal computer (PC) was implemented with about 60 to 100 dual in-line packages (DIPs), and the VAX11/780 was a favorite platform for electronic design automation (EDA) developers. It delivered computational power rated at about one MIP (million instructions per second), and several users frequently shared this machine through VT100 terminals.

Now, CPU performance and DRAM capacity have increased by more than three orders of magnitude. The venerable VAX11/780, once a benchmark for performance comparison and host for virtually all EDA programs, has been relegated to museums, replaced by vastly more powerful PCs, implemented with fewer than a half dozen integrated circuits (ICs), at a fraction of the cost. Experts predict that shrinking geometries, and resultant increase in performance, will continue for at least another 10 to 15 years.

Already, it is becoming a challenge to use the available real estate on a die. Whereas in the original Pentium design various teams vied for a few hundred additional transistors on the die,[1] it is now becoming increasingly difficult for a design team to use all of the available transistors.[2]

The ubiquitous 8-bit microcontroller appears in entertainment products and in automobiles; billions are sold each year. Gordon Moore, Chairman Emeritus of Intel Corp., observed that these less glamorous workhorses account for more than 98% of Intel’s unit sales.[3] More complex ICs perform computation, control, and communications in myriad applications. With contemporary EDA tools, one logic designer can create complex digital designs that formerly required a team of a half dozen logic designers or more. These tools place logic design capability into the hands of an ever-growing number of users. Meanwhile, these development tools themselves continue to evolve, reducing turn-around time from design of logic circuit to receipt of fabricated parts.

This rapid advancement is not without problems. Digital test and verification present major hurdles to continued progress. Problems associated with digital logic testing have existed for as long as digital logic itself has existed. However, these problems have been exacerbated by the growing number of circuits on individual chips. One development group designing a RISC (reduced instruction set computer) stated, “the work required to ... test a chip of this size approached the amount of effort required to design it. If we had started over, we would have used more ...”[4]

The increase in size and complexity of circuits on a chip, often with little or no increase in the number of I/O pins, creates a testing bottleneck. Much more logic must be controlled and observed with the same number of I/O pins, making it more difficult to test the chip. Yet, the need for testing continues to grow in importance. The test must detect failures in individual units, as well as failures caused by defective manufacturing processes. Random defects in individual units may not significantly impact a company’s balance sheet, but a defective manufacturing process for a complex circuit, or a design error in some obscure function, could escape detection until well after first customer shipments, resulting in a very expensive product recall.

Public safety must also be taken into account. Digital logic devices have become pervasive in products that affect public safety, including applications such as transportation and human implants. These products must be thoroughly tested to ensure that they are designed and fabricated correctly. Where design and test shared tools in the past, there is a steadily growing divergence in their methodologies. Formal verification techniques are emerging, and they are of particular importance in applications involving public safety.

Each new generation of EDA tools makes it possible to design and fabricate chips of greater complexity at lower cost. As a result, testing consumes a greater percentage of total production cost. It requires more effort to create a test program and requires more stimuli to exercise the chip. The difficulty in creating test programs for new designs also contributes to delays in getting products to the marketplace. Product managers must balance the consequences of delaying shipment of a product for which adequate test programs have not yet been developed against the consequences of shipping product and facing the prospect of wholesale failure and return of large quantities of defective products.

  New test strategies are emerging in response to test problems arising from these increasingly complex devices, and greater emphasis is placed on finding defects as early as possible in the manufacturing cycle. New algorithms are being devised to create tests for logic circuits, and more attention is being given to design-for-test (DFT) techniques that require participation by logic designers, who are being asked to adhere to design rules that facilitate design of more testable circuits.

  Built-in self-test (BIST) is a logical extension of DFT. It embeds test mechanisms directly into the product being designed, often using DFT structures. The goal is to place stimulus generation and response evaluation circuits closer to the logic being tested.
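The mechanics behind this idea can be conveyed in a few lines. The following Python fragment is an illustration only, not code from the book: it uses a small LFSR as a pseudorandom stimulus generator and a serial signature register as the response compactor. The 4-bit width, the tap positions, and the toy circuit are all assumed for the example.

```python
# Illustrative sketch (assumed parameters): a 4-bit LFSR generates BIST
# stimuli; a serial signature register compacts the responses.

def lfsr_sequence(seed, taps, width, count):
    """Generate `count` states of a Fibonacci-style LFSR."""
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR the tapped bits
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def signature(responses, width=4, taps=(3, 0)):
    """Compact a stream of 1-bit responses into a signature."""
    sig = 0
    for bit in responses:
        fb = bit
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = ((sig << 1) | fb) & ((1 << width) - 1)
    return sig

# Toy circuit under test: one output bit computed from a 4-bit input.
cut = lambda v: (((v >> 3) & (v >> 1)) ^ v) & 1

stimuli = lfsr_sequence(seed=0b1000, taps=(3, 0), width=4, count=15)
good = signature(cut(v) for v in stimuli)
# A fault forcing the output to 0 produces a different signature here
# (in general a faulty response can alias onto the good signature).
bad = signature(0 for _ in stimuli)
print(good, bad)   # → 10 0
```

In real BIST these registers are hardware structures (Chapter 9 covers the LFSR, MISR, and BILBO); the sketch only shows why a compacted signature can separate good responses from faulty ones.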

Fault tolerance also modifies the design, but the goal is to contain the effects of faults. It is used when it is critical that a product operate correctly. The goal of passive fault tolerance is to permit continued correct circuit operation in the presence of defects. Performance monitoring is another form of fault tolerance, sometimes called active fault tolerance, in which performance is evaluated by means of special self-testing circuits or by injecting test data directly into a device during operation. Errors in operation can be recognized, but recovery requires intervention by the processor or by an operator. An instruction may be retried or a unit removed from service.
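As a concrete illustration of passive fault tolerance — a sketch with invented values, not an example from the book — triple modular redundancy (TMR, covered in Chapter 9) masks a single faulty module by taking a bitwise two-out-of-three majority of three identical copies:

```python
# Illustrative sketch (assumed word width and values): a TMR majority
# voter. Each result bit agrees with at least two of the three inputs,
# so one faulty copy cannot corrupt the output.

def majority(a, b, c):
    """Bitwise 2-out-of-3 vote."""
    return (a & b) | (a & c) | (b & c)

correct = 0b10110010
faulty = correct ^ 0b00001000       # one copy suffers a single-bit error

# The faulty copy is outvoted regardless of which position it occupies.
print(majority(correct, correct, faulty) == correct)   # → True
print(majority(faulty, correct, correct) == correct)   # → True
```

The vote is purely combinational, which is why TMR can mask faults without any error-recovery intervention.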

Remote diagnostics are yet another strategy employed in the quest for reliable computing. Some manufacturers of personal computers provide built-in diagnostics. If problems occur during operation and if the problem does not interfere with the ability to communicate via the modem, then the computer can dial a remote computer that is capable of analyzing and diagnosing the cause of the problem.

It should be obvious from the preceding paragraphs that there is no single solution to the test problem. There are many solutions, and a solution may be appropriate for one application but not for another. Furthermore, the best solution for a particular application may be a combination of available solutions. This requires that designers and test engineers understand the strengths and weaknesses of the various approaches.

THE ROADMAP

This textbook contains 12 chapters. The first six chapters can be viewed as building blocks. Topics covered include simulation, fault simulation, combinational and sequential test pattern generation, and a brief introduction to tester architectures. The last six chapters build on the first six. They cover design-for-test (DFT), built-in self-test (BIST), fault tolerance, memory test, IDDQ test, and, finally, behavioral test and verification. This dichotomy represents a natural partition for a two-semester course. Some examples make use of the Verilog hardware design language (HDL). For those readers who do not have access to a commercial Verilog product, a quite good (and free) Verilog compiler/simulator can be downloaded from http://www.icarus.com. Every effort was made to avoid relying on advanced HDL concepts, so that the student familiar only with programming languages, such as C, can follow the Verilog examples.

PART I

Chapter 1 begins with some general observations about design, test, and quality. Acceptable quality level (AQL) depends both on the yield of the manufacturing processes and on the thoroughness of the test programs that are used to identify defective product. Process yield and test thoroughness are focal points for companies trying to balance quality, product cost, and time to market in order to remain profitable in a highly competitive industry.

Simulation is examined from various perspectives in Chapter 2. Simulators used in digital circuit design, like compilers for high-level languages, can be compiled or interpreted, with each having its distinct advantages and disadvantages. We start by looking at contemporary hardware design languages (HDL). Ironically, while software for personal computers has migrated from text to graphical interfaces, the input medium for digital circuits has migrated from graphics (schematic editors) to text. Topics include event-driven simulation and selective trace. Gate-level simulation, with its various delay models, represents one end of the simulation spectrum. Behavioral simulation and cycle simulation represent the other end. Binary decision diagrams (BDDs), used in support of cycle simulation, are introduced in this chapter. Timing analysis in synchronous designs is also discussed.
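The flavor of event-driven simulation with selective trace can be sketched briefly. The following Python fragment is illustrative only — the two-gate netlist, the delays, and the data layout are invented for the example, not taken from the book:

```python
import heapq

# Illustrative sketch (assumed netlist and delays): event-driven
# simulation. Only gates in the fanout of a changed net are evaluated,
# and a gate schedules an output event only when its value changes.

NETLIST = {                       # gate: (function, inputs, delay)
    "g1": (lambda a, b: a & b, ("A", "B"), 2),
    "g2": (lambda a, b: a | b, ("g1", "C"), 3),
}
FANOUT = {"A": ["g1"], "B": ["g1"], "C": ["g2"], "g1": ["g2"]}
value = {"A": 0, "B": 1, "C": 0, "g1": 0, "g2": 0}

def simulate(stimuli):
    """stimuli: list of (time, net, value) events applied to inputs."""
    events = list(stimuli)
    heapq.heapify(events)
    while events:
        t, net, v = heapq.heappop(events)
        if value[net] == v:
            continue                        # no change: event is pruned
        value[net] = v
        for g in FANOUT.get(net, []):
            fn, ins, delay = NETLIST[g]
            new = fn(*(value[i] for i in ins))
            if new != value[g]:             # selective trace: only real changes
                heapq.heappush(events, (t + delay, g, new))
    return dict(value)

final = simulate([(0, "A", 1), (0, "C", 0)])
print(final)   # g1 rises at t = 2, g2 rises at t = 5
```

The pruning steps — skipping events that do not change a net and scheduling only gates whose outputs actually change — are the essence of selective trace.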

  Chapter 3 concentrates on fault simulation algorithms, including parallel, deductive, and concurrent fault simulation. The chapter begins with a discussion of fault modeling, including, of course, the stuck-at fault model. The basic algorithms are examined, with a look at ways in which excess computations can be squeezed out of the algorithms in order to improve performance. The relationship between algorithms and the design environment is also examined: For example, how are the different algorithms affected by the choice of synchronous or asynchronous design environment?
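The idea behind parallel fault simulation can also be sketched compactly. In the illustrative Python fragment below (the circuit, fault list, and signal names are invented, not from the book), bit 0 of every machine word carries the fault-free circuit and each remaining bit carries one faulty machine, so a single pass through the logic evaluates all machines at once:

```python
# Illustrative sketch (assumed circuit d = (a AND b) OR c and an assumed
# fault list): parallel fault simulation using the bits of one word.

FAULTS = ["a/0", "b/0", "net1/1", "c/1"]    # stuck-at faults, bits 1..4
WORD = len(FAULTS) + 1                       # machines incl. fault-free

def spread(bit):
    """Replicate an applied input value across all machines."""
    return (1 << WORD) - 1 if bit else 0

def inject(word, name):
    """Force each fault's stuck-at value into its machine's bit."""
    for i, f in enumerate(FAULTS, start=1):
        site, val = f.split("/")
        if site == name:
            word = word | (1 << i) if val == "1" else word & ~(1 << i)
    return word

def simulate(a, b, c):
    """One pass through the logic; return the faults the vector detects."""
    wa = inject(spread(a), "a")
    wb = inject(spread(b), "b")
    wc = inject(spread(c), "c")
    net1 = inject(wa & wb, "net1")
    d = net1 | wc
    good = d & 1                             # bit 0 is the good machine
    return [f for i, f in enumerate(FAULTS, start=1)
            if ((d >> i) & 1) != good]       # output differs from bit 0

print(simulate(1, 1, 0))   # → ['a/0', 'b/0']
```

A production simulator packs as many faulty machines as the host word allows (31 or 63 per pass, say) and iterates over the fault list; the sketch shows only the bit-packing idea.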

The topic for Chapter 4 is automatic test pattern generation (ATPG) for combinational circuits. Topological, or path tracing, methods, including the D-algorithm with its formal notation, along with PODEM, FAN, and the critical path, are examined. The subscripted D-algorithm, an example of symbolic propagation, is also covered. Algebraic methods are described next; these include Boolean difference and Boolean satisfiability. Finally, the use of BDDs for ATPG is discussed.

  Sequential ATPG merits a chapter of its own. The search for an effective sequential ATPG has continued unabated for over a quarter-century. The problem is complicated by the presence of memory, races, and hazards. Chapter 5 focuses on some of the methods that have evolved to deal with sequential circuits, including the iterative test generator (ITG), the 9-value ITG, and the extended backtrace (EBT). We also look at some experiments on state machines, including homing sequences, distinguishing sequences, and so on, and see how these lead to circuits which, although testable, require more information than is available from the netlist.

Chapter 6 focuses on automatic test equipment. Testers in use today are extraordinarily complex; they have to be in order to keep up with the ICs and PCBs in production; hence this chapter can be little more than a brief overview of the subject. Testers are used to test circuits in production environments, but they are also used to characterize ICs and PCBs. In order to perform characterization, the tester must be able to operate fast enough to clock the circuit at its intended speed, it must be able to accurately measure current and voltage, and it must be possible to switch input levels and strobe output pins in a matter of picoseconds. The Standard Test Interface Language (STIL) is also examined in this chapter. Its goal is to give a uniform appearance to the many different tester architectures on the marketplace.

PART II

Topics covered in the first six chapters, including logic and fault simulators, ATPG algorithms, and the various testers and test strategies, can be thought of as building blocks, or components, of a successful test strategy. In Chapter 7 we bring these components together in order to determine how to leverage the tools, individually and in conjunction with other tools, in order to create a successful test strategy. This often requires an understanding of the environment in which they function, including such things as design methodologies, HDLs, circuit models, data structures, and fault modeling strategies. Different technologies and methodologies require very different tools.

  The focus up to this point has been on the traditional approach to test—that is, apply stimuli and measure response at the output pins. Unfortunately, existing algorithms, despite decades of research, remain ineffective for general sequential logic. If the algorithms cannot be made powerful enough to test sequential logic, then circuit complexity must be reduced in order to make it testable. Chapters 8 and 9 look at ways to improve testability by altering the design in order to improve access to its inner workings. The objectives are to make it easier to apply a test (improve controllability) and make it easier to observe test results (improve observability). Design-for-test (DFT) makes it easier to develop and apply tests via conventional testers. Built-in self-test (BIST) attempts to replace the tester, or at least offload many of its tasks. Both methodologies make testing easier by reducing the amount and/or complexity of logic through which a test must travel either to stimulate the logic being tested or to reach an observable output whereby the test can be monitored.

Memory test is covered in Chapter 10. These structures have their own problems and solutions as a result of their regular, repetitive structure and we examine some algorithms designed to exploit this regularity. Because memories keep growing in size, the memory test problem continues to escalate. The problem is further exacerbated by the fact that increasingly larger memories are being embedded in microprocessors and other devices. In fact, it has been suggested that as microprocessors grow in transistor count, they are becoming de facto memories with a little logic wrapped around them. A growing trend in memories is the use of memory BIST (MBIST). This chapter contains two Verilog implementations of memory test algorithms.
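To give the flavor of such algorithms, here is an illustrative Python model of a MATS+-style march test — a sketch against a toy memory model with an assumed injected stuck-at-0 cell, not one of the book's Verilog implementations:

```python
# Illustrative sketch (assumed memory model and fault): a MATS+ style
# march test with elements  (w0);  up(r0, w1);  down(r1, w0).

class FaultyMemory:
    def __init__(self, size, stuck_at_zero=()):
        self.size = size
        self.cells = [0] * size
        self.stuck = set(stuck_at_zero)   # cells that cannot hold a 1
    def write(self, addr, bit):
        self.cells[addr] = 0 if addr in self.stuck else bit
    def read(self, addr):
        return self.cells[addr]

def mats_plus(mem):
    """Return the addresses at which a read mismatched its expected value."""
    errors = []
    for a in range(mem.size):                 # element 1: write 0 everywhere
        mem.write(a, 0)
    for a in range(mem.size):                 # element 2: ascending r0, w1
        if mem.read(a) != 0:
            errors.append(a)
        mem.write(a, 1)
    for a in reversed(range(mem.size)):       # element 3: descending r1, w0
        if mem.read(a) != 1:
            errors.append(a)
        mem.write(a, 0)
    return errors

mem = FaultyMemory(16, stuck_at_zero=(5,))
print(mats_plus(mem))   # → [5]  (the stuck cell fails the r1 element)
```

Each march element sweeps the address space in a fixed direction, reading the value left by the previous element and writing its complement; the linear-time algorithms of Chapter 10, such as 9N and 13N, elaborate this same pattern.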

  Complementary metal oxide semiconductor (CMOS) circuits draw little or no current except when clocked. Consequently, excessive current observed when an IC is in the quiescent state is indicative of either a hard failure or a potential reliability problem. A growing number of investigators have researched the implications of this observation, and determined how to leverage this potentially powerful test strategy.

IDDQ will be the focus of Chapter 11.

Design verification and test can be viewed as complementary aspects of one problem, namely, the delivery of reliable computation, control, and communications in a timely and cost-effective manner. However, it is not completely obvious how these two disciplines are related. In Chapter 12 we look closely at design verification. The opportunities to leverage test development methodologies and tools in design verification—and, conversely, the opportunities to leverage design verification efforts to obtain better test programs—make it essential to understand the relationships between these two efforts. We will look at some evolving methodologies and some that are maturing, and we will cover some approaches best described as ...

The goal of this textbook is to cover a representative sample of algorithms and practices used in the IC industry to identify faulty product and prevent, to the extent possible, tester escapes—that is, faulty devices that slip through the test process and make their way into the hands of customers. However, digital test is not a “one size fits all” industry.

Given two companies with similar digital products, test practices may be as different as day and night, and yet both companies may have rational test plans. Minor nuances in product manufacturing practices can dictate very different strategies. Choices must be made everywhere in the design and test cycle. Different individuals within the same project may be using simulators ranging from switch-level to cycle-based. Testability enhancements may range from ad hoc techniques, to partial-scan, to full-scan. Choices will be dictated by economics, the capabilities of the available tools, the skills of the design team, and other circumstances.

One of the frustrations faced over the years by those responsible for product quality has been the reluctance on the part of product planners to face up to and address test issues. Nearly 500 years ago Nicolo Machiavelli, in his book The Prince, observed that “fevers, as doctors say, at their beginning are easy to cure but difficult to recognise, but in course of time when they have not at first been recognised, and treated, become easy to recognise and difficult to cure.”[5] In a similar vein, in the early stages of a design, test problems are difficult to recognize but easy to solve; further into the process, test problems become easier to recognize but more difficult to cure.

  REFERENCES

  

1. Brandt, R., The Birth of Intel’s Pentium Chip—and the Labor Pains, Business Week, March

29, 1993, pp. 94–95.

  

2. Bass, Michael J., and Clayton M. Christensen, The Future of the Microprocessor Business,

IEEE Spectrum , Vol. 39, No. 4, April 2002, pp. 34–39.

  3. Port, O., Gordon Moore’s Crystal Ball, Business Week, June 23, 1997, p. 120.

  

4. Foderaro, J. K., K. S. Van Dyke, and D. A. Patterson, Running RISCs, VLSI Des.,

September–October 1982, pp. 27–32.

  

5. Machiavelli, Nicolo, The Prince and the Discourses, in The Prince, Chapter 3, Random

House, 1950.

CHAPTER 1 Introduction

1.1 INTRODUCTION

Things don’t always work as intended. Some devices are manufactured incorrectly, others break or wear out after extensive use. In order to determine if a device was manufactured correctly, or if it continues to function as intended, it must be tested. The test is an evaluation based on a set of requirements. Depending on the complexity of the product, the test may be a mere perusal of the product to determine whether it suits one’s personal whims, or it could be a long, exhaustive checkout of a complex system to ensure compliance with many performance and safety criteria. Emphasis may be on speed of performance, accuracy, or reliability.

  Consider the automobile. One purchaser may be concerned simply with color and styling, another may be concerned with how fast the automobile accelerates, yet another may be concerned solely with reliability records. The automobile manufacturer must be concerned with two kinds of test. First, the design itself must be tested for factors such as performance, reliability, and serviceability. Second, individual units must be tested to ensure that they comply with design specifications.

  Testing will be considered within the context of digital logic. The focus will be on technical issues, but it is important not to lose sight of the economic aspects of the problem. Both the cost of developing tests and the cost of applying tests to individual units will be considered. In some cases it becomes necessary to make trade-offs. For example, some algorithms for testing memories are easy to create; a computer program to generate test vectors can be written in less than 12 hours. However, the set of test vectors thus created may require several millennia to apply to an actual device. Such a test is of no practical value. It becomes necessary to invest more effort into initially creating a test in order to reduce the cost of applying it to individual units.

  This chapter begins with a discussion of quality. Once we reach an agreement on the meaning of quality, as it relates to digital products, we shift our attention to the subject of testing. The test will first be defined in a broad, generic sense. Then we put the subject of digital logic testing into perspective by briefly examining the overall design process. Problems related to the testing of digital components and assemblies can be better appreciated when viewed within the context of the overall design process. Within this process we note design stages where testing is required. We then look at design aids that have evolved over the years for designing and testing digital devices. Finally, we examine the economics of testing.

  1.2 QUALITY

  Quality frequently surfaces as a topic for discussion in trade journals and periodicals. However, it is seldom defined. Rather, it is assumed that the target audience understands the intended meaning in some intuitive way. Unfortunately, intuition can lead to ambiguity or confusion. Consider the previously mentioned automobile. For a prospective buyer it may be deemed to possess quality simply because it has a soft leather interior and an attractive appearance. This concept of quality is clearly subjective: It is based on individual expectations. But expectations are fickle: They may change over time, sometimes going up, sometimes going down. Furthermore, two customers may have entirely different expectations; hence this notion of quality does not form the basis for a rigorous definition.

  In order to measure quality quantitatively, a more objective definition is needed. We choose to define quality as the degree to which a product meets its requirements. More precisely, it is the degree to which a device conforms to applicable specifications and workmanship standards.[1] In an integrated circuit (IC) manufacturing environment, such as a wafer fab area, quality is the absence of "drift", that is, the absence of deviation from product specifications in the production process. For digital devices the following equation,[2] which will be examined in more detail in a later section, is frequently used to quantify quality level:

  AQL = Y^(1 − T)                                                      (1.1)

  In this equation, AQL denotes acceptable quality level; it is a function of Y (product yield) and T (test thoroughness). If no testing is done, AQL is simply the yield, that is, the number of good devices divided by the total number of devices made. Conversely, if a complete test were created, then T = 1, and all defects are detected, so no bad devices are shipped to the customer.

  Equation (1.1) tells us that high quality can be realized by improving product yield and/or the thoroughness of the test. In fact, if Y ≥ AQL, testing is not required. That is rarely the case, however. In the IC industry a high yield is often an indication that the process is not aggressive enough. It may be more economically rewarding to shrink the geometry, produce more devices, and screen out the defective devices through testing.
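  The trade-off expressed by Equation (1.1) can be explored numerically. The following Python sketch is illustrative only (the function names are not from the text); it evaluates AQL and the corresponding defect level, in defective parts per million (DPM) shipped, for a given yield and test thoroughness.

```python
def aql(y, t):
    """Acceptable quality level per Eq. (1.1): AQL = Y**(1 - T).

    y: product yield, the fraction of manufactured devices that are good.
    t: test thoroughness, the fraction of defects the test detects.
    """
    return y ** (1.0 - t)

def defect_level_ppm(y, t):
    """Fraction of shipped parts that are defective, in parts per million."""
    return (1.0 - aql(y, t)) * 1e6

# With no testing (T = 0), AQL is simply the yield.
assert aql(0.5, 0.0) == 0.5
# With a complete test (T = 1), no bad devices are shipped.
assert aql(0.5, 1.0) == 1.0

# Even at 99% test thoroughness, a 50% yield still ships defects.
for t in (0.90, 0.99, 0.999):
    print(f"Y = 0.50, T = {t}: {defect_level_ppm(0.5, t):.0f} DPM")
```

  The loop makes the chapter's point concrete: raising either yield or thoroughness drives the shipped defect level toward zero, and when Y is already at or above the target AQL the test adds nothing.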

  1.3 THE TEST

  In its most general sense, a test can be viewed as an experiment whose purpose is to determine whether a device is behaving as intended. Figure 1.1 depicts a test configuration in which stimuli are applied to a device-under-test (DUT), and the response is evaluated. If we know what the expected response is from the correctly operating device, we can compare it to the response of the DUT to determine if the DUT is responding correctly.

  When the DUT is a digital logic device, the stimuli are called test patterns or test vectors. In this context a vector is an ordered n-tuple; each bit of the vector is applied to a specific input pin of the DUT. The expected or predicted outcome is usually observed at output pins of the device, although some test configurations permit monitoring of test points within the circuit that are not normally accessible during operation. A tester captures the response at the output pins and compares that response to the expected response determined by applying the stimuli to a known good device and recording the response, or by creating a model of the circuit (i.e., a representation or abstraction of selected features of the system[3]) and simulating the input stimuli by means of that model. If the DUT response differs from the expected response, then an error is said to have occurred. The error results from a defect in the circuit.
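  The compare-against-expected flow just described can be sketched in a few lines of Python. This is an illustrative toy, not tester software: golden_model stands in for a known good device or simulation model, and dut for the device under test.

```python
def apply_test(dut, golden_model, test_vectors):
    """Apply each test vector to both the reference and the DUT.

    Returns (vector, expected, observed) triples for every vector on
    which the DUT response differs from the expected response.
    """
    errors = []
    for vector in test_vectors:
        expected = golden_model(vector)  # predicted outcome
        observed = dut(vector)           # captured at the output pins
        if observed != expected:
            errors.append((vector, expected, observed))
    return errors

# Toy circuit: a 2-input AND gate whose output is stuck-at-0.
golden = lambda v: v[0] & v[1]
faulty = lambda v: 0
vectors = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(apply_test(faulty, golden, vectors))
```

  Note that only the vector (1, 1) exposes this particular defect: the other three vectors produce identical responses from the good and faulty circuits, which is precisely why test thoroughness matters in Equation (1.1).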

  The next step in the process depends on the type of test that is to be applied. A taxonomy of test types[4] is shown in Table 1.1. The classifications range from testing die on a bare wafer to tests developed by the designer to verify that the design is correct. In a typical manufacturing environment, where tests are applied to die on a wafer, the most likely response to a failure indication is to halt the test immediately and discard the failing part. This is commonly referred to as a go–nogo test. The object is to identify failing parts as quickly as possible in order to reduce the amount of time spent on the tester.
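  The go–nogo discipline differs from the error-collecting flow above in one respect: it aborts at the first mismatch, since tester time is the cost being minimized. A minimal sketch (illustrative names, not from the text):

```python
def go_nogo(dut, golden_model, test_vectors):
    """Go-nogo test: halt at the first mismatch to minimize tester time.

    Returns True if the part passes every vector, False otherwise; a
    failing part is simply discarded, not diagnosed.
    """
    for vector in test_vectors:
        if dut(vector) != golden_model(vector):
            return False  # halt immediately and discard the part
    return True

# A fault-free AND gate passes; a stuck-at-0 part fails at (1, 1).
and_gate = lambda v: v[0] & v[1]
vectors = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(go_nogo(and_gate, and_gate, vectors))    # True
print(go_nogo(lambda v: 0, and_gate, vectors)) # False
```

  A diagnostic (repair) test, by contrast, would continue past the first failure in order to locate the failure site, as Table 1.1 indicates.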

  If several functional test programs were developed for the part, a common practice is to arrange them so that the most effective test program, that is, the one that uncovers the most defective parts, is run first. Ranking the effectiveness of the test programs can be done through the use of a fault simulator, as will be explained in a subsequent chapter. The die that pass the wafer test are packaged and then retested. Bonding a chip to a package has the potential to introduce additional defects into the process, and these must be identified.
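  Once a fault simulator has estimated each program's fault coverage, the ordering itself is a simple sort. In the sketch below the program names and coverage figures are invented for illustration; in practice the numbers come from fault simulation, covered in a later chapter.

```python
def order_test_programs(programs, fault_coverage):
    """Run the most effective test program first: sort programs in
    descending order of fault coverage estimated by a fault simulator."""
    return sorted(programs, key=lambda p: fault_coverage[p], reverse=True)

# Hypothetical coverage figures for three functional test programs.
coverage = {"func_a": 0.62, "func_b": 0.88, "func_c": 0.75}
print(order_test_programs(coverage, coverage))  # func_b runs first
```

  Running the highest-coverage program first means most defective parts are rejected early, so the average tester time per failing die is minimized.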

  Binning is the practice of classifying chips according to the fastest speed at which they can operate. Some chips, such as microprocessors, are priced according to their clock speed. A chip with a 10% performance advantage may bring a 20–50% premium in the marketplace. As a result, chips are likely to first be tested at their maximum rated speed. Those that fail are retested at lower clock speeds until either they pass the test or it is determined that they are truly defective. It is, of course, possible that a chip may run successfully at a clock speed lower than any for which it was tested. However, such chips can be presumed to have no market value.
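  The binning procedure described above amounts to a descending search over the rated speeds. The following sketch assumes a hypothetical tester callback passes_at(speed); the speed grades are invented for illustration.

```python
def bin_by_speed(passes_at, speeds_mhz):
    """Speed binning: test at the maximum rated clock first, then step
    down until the part passes.

    passes_at(speed) -> bool is a stand-in for running the full test
    program at the given clock speed. Returns the speed bin for the
    chip, or None if it fails even the slowest bin (truly defective).
    """
    for speed in sorted(speeds_mhz, reverse=True):
        if passes_at(speed):
            return speed  # price the chip according to this bin
    return None           # fails every tested speed: scrap the part

# Example: a chip whose true maximum operating speed is 2700 MHz.
speed_bins = [3000, 2800, 2600, 2400]
print(bin_by_speed(lambda s: s <= 2700, speed_bins))  # binned at 2600
```

  Note that the example chip could in fact run at 2700 MHz, but since no bin was tested between 2600 and 2800, it sells as a 2600 MHz part, exactly the untested-speed situation the paragraph describes.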

  Figure 1.1 Test configuration: Stimulus → DUT → Response.

TABLE 1.1 Types of Tests

  Type of Test                     Purpose of Test

  Production                       Test of manufactured parts to sort out those that
                                   are faulty.
  Wafer sort or probe              Test of each die on the wafer.
  Final or package                 Test of packaged chips and separation into bins
                                   (military, commercial, industrial).
  Acceptance                       Test to demonstrate the degree of compliance of a
                                   device with purchaser's requirements.
  Sample                           Test of some but not all parts.
  Go–nogo                          Test to determine whether device meets
                                   specifications.
  Characterization or engineering  Test to determine actual values of AC and DC
                                   parameters and the interaction of parameters.
                                   Used to set final specifications and to identify
                                   areas to improve process to increase yield.
  Stress screening (burn-in)       Test with stress (high temperature, temperature
                                   cycling, vibration, etc.) applied to eliminate
                                   short-life parts.
  Reliability (accelerated life)   Test after subjecting the part to extended high
                                   temperature to estimate time to failure in normal
                                   operation.
  Diagnostic (repair)              Test to locate failure site on failed part.
  Quality                          Test by quality assurance department of a sample
                                   of each lot of manufactured parts. More stringent
                                   than final test.
  On-line or checking              On-line testing to detect errors during system
                                   operation.
  Design verification              Verify the correctness of a design.