High-Frequency, At-Speed Scan Testing

Xijiang Lin, Ron Press, Janusz Rajski, Paul Reuter, Thomas Rinderknecht, Bruce Swanson, and Nagesh Tamarapalli
Mentor Graphics

Editor’s note: At-speed scan testing has demonstrated many successes in industry. One key feature is its ability to use on-chip clocks for accurate timing when applying test vectors on a tester. The authors describe new strategies in which at-speed scan tests can be applied with internal PLLs. They present techniques for optimizing ATPG across multiple clock domains and propose methodologies to combine both stuck-at-fault and delay-test vectors into an effective test suite. —Li-C. Wang, University of California, Santa Barbara

IC MANUFACTURING TEST is changing, with an increased emphasis on at-speed testing to maintain test quality for larger, more complex chips and new fabrication processes. Growing gate counts and the increase in timing-related defects at small fabrication geometries force improvements in test methods to maintain the quality of chips delivered to customers. Improving stuck-at test coverage alone can still leave too many timing-based defects undetected to reach quality goals, so at-speed testing is often necessary. Scan-based ATPG solutions for at-speed testing ensure high test coverage with reasonable development effort.

This article explores applying at-speed scan testing. We introduce new strategies to optimize ATPG so that it applies specific clock sequences that are consistent with the circuit’s operation. We also use these strategies to have ATPG generate tests that exercise internal clocking logic. Furthermore, we combine the same technique with programmable phase-locked loop (PLL) features to support applying high-frequency, at-speed tests from internal PLLs. As a result, the precise clocks for at-speed tests can come from on-chip clock-generator circuitry instead of from testers.

Motivation for at-speed scan testing

IC fabrication processes produce a given number of defective ICs. Many companies have gotten by with

static stuck-at scan testing and a limited amount of functional test patterns for at-speed testing to uncover these defective ICs. They often supplement these tests with IDDQ tests, which detect many types of defects, including some timing-related ones.1 In the past, these tests effectively screened out enough of the limited number of timing-related defects. However, at the smaller geometry sizes of today’s ICs, the number of timing-related defects is growing,2 a problem exacerbated by the reduced effectiveness of functional and IDDQ testing. Functional testing is less effective because the difficulty and time to generate these tests grow exponentially with increasing gate counts. The electrical properties of 0.13-micron and smaller technologies have caused many companies to rely less on IDDQ tests, because a defective device’s current is difficult to distinguish from the normal quiescent current. Some have abandoned IDDQ tests altogether for these small-geometry devices.

The main force behind the need for at-speed testing is the defect characteristics of 0.13-micron and smaller technologies, which are causing more timing-related defects.3 For example, one study on a microprocessor design showed that if scan-based at-speed tests were removed from the test program, the escape rate went up nearly 3%.4 This was on a chip with a 0.18-micron feature size. We can use functional tests to provide at-speed coverage, but the functional-test development problem is exploding exponentially with chip growth. For a different microprocessor design, the development effort to create the functional test set took three person-years to complete.5 Furthermore, these tests consumed a large percentage of the tester memory, and the test time to run them was significant. Because the effort to create effective at-speed


functional-test patterns is daunting, more companies are moving to at-speed scan-based testing.

Logic BIST can perform at-speed test but is usually combined with ATPG to get high enough coverage. The value of logic BIST is that it provides test capabilities, such as secure products or in-system testing, when tester access is impossible. Logic BIST uses an on-chip pseudorandom pattern generator (PRPG) to generate pseudorandom data that loads into the scan chains. A multiple-input signature register (MISR) computes a signature based on the data shifted out of the scan chains. There is reasonable speculation that supplementing a high-quality production test suite with logic BIST might detect some additional unmodeled defects. At-speed logic BIST is possible using internal PLLs to produce many pseudorandom at-speed tests. However, employing this test strategy requires additional deterministic at-speed tests to ensure higher test quality. This test strategy also requires adhering to strict design rule checking and using on-chip hardware to increase testability and avoid capturing unknown values. Some speculate that logic BIST will improve defect coverage because it will detect faults many times. However, although it does provide multiple detections, they occur only at the fault sites that are random-pattern testable. Also, although logic BIST may be useful for transition fault testing, the low probability of sensitizing critical paths with pseudorandom vectors makes logic BIST unsuitable for path delay testing.

Scan-based tests and ATPG provide a good general solution for at-speed testing. This approach is gaining industry acceptance and is a standard production test requirement at many companies. However, scan-based, at-speed ATPG grows the pattern set significantly, because it is more complicated to activate and propagate at-speed faults than stuck-at faults. Because of this complexity, compressing multiple at-speed faults per pattern is less efficient than for stuck-at faults. Fortunately, embedded compression techniques can support at-speed scan-based testing without sacrificing quality. When using any kind of embedded compression solution, however, engineers must take care not to interfere with the functional design, because core logic changes can significantly affect overall cost.
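As a rough illustration of the PRPG/MISR mechanism described above, the following Python sketch models a toy 8-bit pseudorandom pattern generator and signature register. The register width, tap positions, seed, and response data are arbitrary placeholders for illustration, not values from any particular logic BIST implementation.

```python
WIDTH = 8
TAPS = (0, 2, 3, 4)   # illustrative feedback taps for a toy 8-bit LFSR

def lfsr_feedback(state):
    """XOR of the tapped bits; shared by the PRPG and the MISR."""
    fb = 0
    for t in TAPS:
        fb ^= (state >> t) & 1
    return fb

def prpg_step(state):
    """One PRPG cycle: emit the low bit toward a scan chain, then shift."""
    out_bit = state & 1
    state = (state >> 1) | (lfsr_feedback(state) << (WIDTH - 1))
    return state, out_bit

def misr_step(state, scan_out_bits):
    """One MISR cycle: shift with feedback, then fold in the parallel scan-out bits."""
    state = (state >> 1) | (lfsr_feedback(state) << (WIDTH - 1))
    return state ^ scan_out_bits

# Load one toy pattern and compact a made-up response into a signature.
prpg, misr = 0b1010_1101, 0
load_bits = []
for _ in range(16):
    prpg, bit = prpg_step(prpg)
    load_bits.append(bit)
for response in (0x3C, 0xA5, 0x0F):        # placeholder scan-out slices
    misr = misr_step(misr, response)
print(load_bits, hex(misr))
```

The same shift-and-feedback structure serves both roles; the PRPG streams bits into the chains, and the MISR compacts whatever comes out into a single signature compared against a precomputed value.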

Moving high-frequency clocking from the tester to the chip

In the past, most devices were driven directly from an externally generated clock signal. However, the clock frequencies that high-performance ICs require cannot


be easily applied from an external interface. Many designs use an on-chip PLL to generate high-speed internal clocks from a far slower external reference signal. The problem of importing high-speed clock signals into the device is also an issue during at-speed device testing. It is difficult and costly to mimic high-frequency (PLL) clocks from a tester interface. Studies have shown that both high-speed functional and at-speed scan tests are necessary to achieve the highest test coverage possible.4 To control costs, more testing will move from costly functional test to at-speed scan test. Some companies have already led the way to new at-speed scan testing by using on-chip PLLs.6 Although this is a new idea, it is gaining acceptance and use in industry designs. These techniques are useful for any type of scan design, such as mux-DFF or level-sensitive scan design (LSSD).

Because a delay test’s purpose is to verify that the circuitry can operate at a specified clock speed, it makes sense to use the actual on-chip clocks, if possible. You not only get more accurate clocks (and tests), but you also do not need any high-speed clocks from the tester. This lets you use less-sophisticated, and hence cheaper, testers. In this scenario, the tester provides the slower test shift clocks and control signals, and the programmable on-chip clock circuitry provides the at-speed launch and capture clocks. To handle these fast on-chip clocks, we have enhanced ATPG tools to deal with any combination of clock sequences that on-chip logic might generate.6 The ATPG user must simply define the internal clocking events and sequences, as well as the corresponding external signals or clocks that initiate them. That way, the clock control logic and PLL, or other clock-generating circuitry, can be treated as a black box for ATPG purposes, and the pattern generation process is simpler.

At-speed test methodology

The two prominent fault models for at-speed scan testing are the path-delay and transition fault models. Path delay patterns check the combined delay through a predefined list of gates. It is unrealistic to expect to test every circuit path, because the number of paths increases exponentially with circuit size. Therefore, it is common practice to select a limited number of paths using a static timing-analysis tool that determines the most critical paths in the circuit. Most paths begin and terminate with sequential elements (scan cells), with a few paths having primary inputs (PIs) for start points or primary outputs (POs) for endpoints.
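Path selection of the kind described above is typically driven by slack numbers from static timing analysis; a minimal sketch, with made-up path names and slack values, might look like this.

```python
def select_critical_paths(path_slacks, count):
    """Pick the `count` paths with the least timing slack as path delay targets."""
    return sorted(path_slacks, key=path_slacks.get)[:count]

# Hypothetical slack report (ns) from a static timing tool.
slacks = {"core/alu_p3": 0.12, "core/mul_p7": 0.05, "io/sync_p1": 0.80, "core/alu_p9": 0.07}
print(select_critical_paths(slacks, 2))   # -> ['core/mul_p7', 'core/alu_p9']
```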


Figure 1. Launch-off-shift pattern timing. The clock and scan-enable (SE) waveforms show a series of shift cycles in which the last shift serves as the launch; the at-speed capture pulse follows, and shifting then resumes.

The transition fault model represents a gross delay at every gate terminal. We test transition faults in much the same way as path delay faults, but the pattern generation tools select the paths. Transition fault tests target each gate terminal for a slow-to-rise or slow-to-fall delay fault. Engineers use transition test patterns to find manufacturing defects because such patterns check for delays at every gate terminal. Engineers use path delay patterns more for speed binning.

At-speed scan testing for both path-delay and transition faults requires patterns that launch a transition from a scan cell or PI and then capture the transition at a scan cell or PO. The key to performing at-speed testing is to generate a pair of clock pulses for the launch and capture events. This can be complicated because modern designs can contain several clocks operating at different frequencies.

One method of applying the launch and capture events is to use the last shift before capture (functional mode) as the launch event; this is the launch-off-shift approach. Figure 1 shows an example waveform for a launch-off-shift pattern for a mux-DFF type design; you can apply a similar approach to an LSSD. The scan-enable (SE) signal is high during test mode (shift) and low when in functional mode. The figure also shows the launch clock skewed so that it’s late in its cycle, and the capture clock skewed so that it’s early in its cycle. This skewing creates a higher launch-to-capture clock frequency than the standard shift clock frequency. (Saxena et al.7 list more launch and capture waveforms used by launch-off-shift approaches.) The main advantage of this approach is simple test pattern generation. The main disadvantage (for mux-DFF designs) is that we must treat the SE signal as timing critical. When using a launch-off-shift approach, pipelining the SE within the circuit can simplify its timing and design. However, the nonfunctional logic related to operating SE at a high frequency can contribute to yield loss.

An alternate approach called broadside patterns uses a pair of at-speed clock pulses in functional mode. Figure 2 shows an example waveform for a broadside pattern. Each clock waveform is crafted to test only a subset of all possible edge relationships within and between clock domains. The first pulse initiates (launches) the transition at the targeted terminal, and the second pulse captures the response at a scan cell. This method also allows the late and early skewing of the launch and capture clocks within their cycles. The main advantage of this broadside approach is that the timing of the SE transition is no longer critical, because the launch and capture clock pulses occur in functional mode.
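The effect of the edge skewing just described can be seen with a small back-of-the-envelope calculation. The sketch below uses made-up numbers (a 100-ns shift period, launch placed 90 ns into its cycle, capture 10 ns into the next) purely for illustration.

```python
def launch_to_capture_mhz(shift_period_ns, launch_offset_ns, capture_offset_ns):
    """Effective at-speed frequency when the launch edge is placed late in its
    tester cycle and the capture edge early in the following cycle."""
    window_ns = (shift_period_ns - launch_offset_ns) + capture_offset_ns
    return 1000.0 / window_ns

# A 10-MHz shift clock still yields a 50-MHz launch-to-capture rate
# when the edges are skewed as above.
print(launch_to_capture_mhz(100.0, 90.0, 10.0))   # -> 50.0
```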

Figure 2. Broadside-pattern timing. The clock and SE waveforms show the shift cycles, a dead cycle that lets SE settle after the last shift, and then the at-speed launch and capture pulses applied in functional mode.

Adding extra dead cycles after the last shift can give the SE signal additional time to settle.

Both logic BIST and deterministic ATPG can generate launch-off-shift and broadside patterns. Logic BIST includes clock-control hardware to provide at-speed clocks from a PLL. The clock sequence in a BIST approach is usually constructed such that clocks that control a larger amount of logic are pulsed more often during the pseudorandom patterns. When using deterministic test pattern generation, an ATPG tool can perform the analysis to select the desired clock sequence on a per-pattern basis to detect the specific target faults. ATPG can use PLLs for at-speed clock generation if the PLL outputs are programmable. Both logic BIST and ATPG generally shift at lower frequencies than the fastest at-speed capture frequencies to avoid power problems during shift. In addition, a fast shift frequency would force high-speed design requirements onto the scan chain. It is the timing from launch to capture that matters for accurate at-speed testing.
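The BIST clock-sequencing heuristic mentioned above (pulsing clocks that control more logic more often) can be approximated as a simple weighted choice; the clock names and fan-out counts below are invented for illustration only, not taken from any particular design.

```python
import random

def pick_capture_clock(clock_fanout, rng):
    """Choose the capture clock for one pseudorandom pattern, biased toward
    clocks that drive more scan cells."""
    clocks = list(clock_fanout)
    weights = [clock_fanout[c] for c in clocks]
    return rng.choices(clocks, weights=weights, k=1)[0]

fanout = {"clk1": 50_000, "clk2": 15_000, "clk3": 5_000}   # hypothetical domain sizes
rng = random.Random(2003)
sequence = [pick_capture_clock(fanout, rng) for _ in range(12)]
print(sequence)   # clk1 appears far more often than clk3
```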

Controlling complex clock-generator circuits

To properly use high-frequency clocks that are generated on chip, engineers must address several issues.


Figure 3. Phase-locked loop (PLL) clock generation with internal and external clocks. The external signals System_clk, Begin_ac, Scan_en, Scan_clk1, and Scan_clk2 enter the IC; the PLL and its control logic generate the internal clocks Clk1 and Clk2 that drive the design core.

Figure 4. Waveform of clock-generation logic. Over a 240-ns window, the external System_clk, Scan_en, Begin_ac, Scan_clk1, and Scan_clk2 signals produce the internal Clk1 and Clk2 pulses, with slow shift cycles surrounding the fast launch-and-capture pulses.

Sequences of multiple on-chip (internal) clock pulses are necessary to create the launch and capture events needed for at-speed scan patterns. Engineers can create them using various combinations of off-chip (external) clocks and control signals. To generate an appropriate internal clock sequence, it is inefficient to have an ATPG engine work back through complex clock generators to determine the necessary external clock pulses and control signals for every pattern. Furthermore, you cannot let the ATPG engine choose the internal clock sequences without regard for the clock-generation logic, because the ATPG engine might use internal clock sequences that cannot be created on chip.

To solve these issues, we have implemented an innovative ATPG approach that lets you specify legal clock sequences to the tool using one or more named-capture procedures. These named-capture procedures describe a sequence of events grouped in test cycles.


Included in each procedure is the way the internal clock sequence can be issued, along with the corresponding sequence of external clocks or events (condition statements) required to generate it. Using these procedures, you can specify to the tool all the legal clock sequences needed to test the at-speed faults. The ATPG engine can perform pattern generation while considering only the internal clocks, their legal sequences, and the internal conditions that must be set up to produce each clock sequence. The final scan patterns are saved using the external clock sequences by automatically mapping each internal clock sequence to its corresponding external clock/control sequence. Using internal and external clock sequences (plus control signals) is an efficient way to behaviorally model the clock-generation logic so that ATPG can create at-speed scan patterns that use the on-chip clocks. This method supports all types of clock-generation logic, even logic treated as a black box in the ATPG tool.

Figure 3 shows a simple example of a programmable PLL used to generate multiple clock pulses from a single off-chip clock. The programmability is usually a register-controlled clock-gating circuit that gates the PLL outputs. Figure 4 presents the waveform showing the relationship between the external and internal clocks. The example shows a mux-DFF design with two internal clocks and two scan clocks. A similar waveform would exist for an LSSD. We can express this waveform to the ATPG tool as a single named-capture procedure and its associated timing definitions for each cycle (timeplates). We use it to describe a single clock sequence, as well as the timing it uses, to the ATPG tool. If the PLL supports other clock sequences that are necessary to detect at-speed faults, we can write other named-capture procedures for them.

In the named-capture procedure, we can mark some test cycles as slow. These cycles are not available as at-speed launch or capture cycles. In other words, we can mark the cycles that are not valid for at-speed fault simulation detection. In some designs, the PLL control signals are not supplied externally. Instead, engineers design them using internal scan cells. To avoid incorrect logic values, we can also use condition statements to force ATPG to load the desired values into those scan cells.

Often, clock-generation circuits require many external cycles to produce several internal pulses. Some circuits require more than 20 external cycles to produce two or three internal clock pulses. The internal and external modes let the ATPG engine efficiently perform pattern generation without having to simulate the large number of external cycles. The number of internal and external cycles within a named-capture procedure can vary as long as the total times for the internal and external sequences are equal. This can dramatically improve pattern generation for these types of circuits.

Nonintrusive macrotesting techniques provide a method of applying at-speed test sequences to embedded memories.8 These techniques use the circuit's scan logic to provide the desired pattern sequences, such as march sequences, at the embedded memory. We can also use named-capture procedures to control at-speed clock events during macrotesting.
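To make the internal/external split concrete, here is a small Python model of a named-capture procedure, loosely patterned on the Clk1 example of Figures 3 and 4. The class and field names, signal names, and the six-cycle external sequence are assumptions chosen for illustration; they are not the syntax of any particular ATPG tool.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InternalCycle:
    pulses: List[str]          # internal clocks pulsed in this cycle
    slow: bool = False         # marked "slow": not usable as an at-speed launch or capture

@dataclass
class NamedCaptureProcedure:
    name: str
    internal: List[InternalCycle]                  # what the ATPG engine reasons about
    external: List[Dict[str, int]]                 # per-cycle external pin values (tester view)
    conditions: Dict[str, int] = field(default_factory=dict)  # scan cells to preload (e.g., PLL control bits)

    def to_tester_cycles(self) -> List[Dict[str, int]]:
        """When a pattern is saved, the internal clock view is replaced by the
        external cycles the tester actually applies."""
        return self.external

# Hypothetical two-pulse burst on Clk1, requested by asserting Begin_ac while
# System_clk runs; the PLL control register is preloaded through scan.
clk1_launch_capture = NamedCaptureProcedure(
    name="clk1_launch_capture",
    internal=[InternalCycle(pulses=["Clk1"]), InternalCycle(pulses=["Clk1"])],
    external=[{"System_clk": 1, "Begin_ac": 1, "Scan_en": 0} for _ in range(6)],
    conditions={"pll_ctrl_reg[0]": 1},
)
print(len(clk1_launch_capture.internal), len(clk1_launch_capture.to_tester_cycles()))
```

The point of the split is that ATPG only ever sees the two internal cycles, while pattern saving expands them into however many external tester cycles the clock generator actually needs.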


Merging at-speed patterns with stuck-at patterns

With increasing speed-related defects, it is necessary to have at-speed test patterns, such as transition patterns, in addition to the usual stuck-at patterns. The number of transition patterns typically ranges from about three to five times the number of stuck-at test patterns. Transition patterns, however, also detect a significant percentage of stuck-at faults. Thus, to minimize the overall test pattern count, we can merge the pattern sets for multiple fault models.

Figure 5 illustrates the stuck-at test coverage profile for a half-million-gate design with 45,000 scan cells. The figure shows that 2,000 patterns are required to achieve 98.84% stuck-at coverage. For this design, approximately 10,800 patterns, or about five times the number of stuck-at test patterns, are required to achieve broadside transition fault coverage of 87.86%. Assume that the tester memory capacity can store only 6,000 patterns. Then, as Figure 5 shows, one solution is to apply the original 2,000 stuck-at test patterns followed by a truncated transition pattern set of 4,000 patterns, yielding transition test coverage of 83.44%. The end test quality should be better with this combined stuck-at and transition pattern set than with stuck-at patterns alone.

However, we can obtain a more efficient, compact pattern set because the transition patterns also detect a significant percentage of stuck-at faults. In fact, for this example, stuck-at fault simulation of the 4,000 transition patterns shows that they detect 93.07% of the stuck-at faults. As Figure 6 illustrates, only 1,180 extra stuck-at patterns are then required to obtain the final stuck-at coverage of 98.84%.


Figure 5. Stuck-at patterns followed by transition patterns. Coverage (%) versus number of patterns: the stuck-at curve reaches 98.84% after 2,000 patterns, and the truncated transition set reaches 83.44% by 6,000 total patterns.

Figure 6. Transition patterns with supplemental stuck-at patterns. Coverage (%) versus number of patterns: the 4,000 transition patterns reach 83.44% transition coverage and 93.07% stuck-at coverage; the supplemental stuck-at patterns raise stuck-at coverage to 98.84% at 5,180 total patterns.

Thus, by first generating transition patterns and fault-simulating them for stuck-at faults, we can obtain a transition test coverage of 83.44% and a stuck-at test coverage of 98.84% with 5,180 total patterns instead of 6,000 patterns.

Figure 7 illustrates a general pattern generation flow for multiple fault models that achieves a compact pattern set. As Figure 7 shows, if path delay testing is desired, the pattern generation effort can commence with path delay ATPG. We can simulate the resulting path delay pattern set against transition faults to eliminate, from the target fault list, the transition faults that the path delay patterns detect. If the resulting transition test coverage has not reached the target coverage, we can perform ATPG for the remaining undetected transition faults. Simulating the path delay patterns and the transition patterns detects many of the stuck-at faults. Furthermore, we perform ATPG for the undetected stuck-at faults if the target stuck-at coverage has not been reached. This methodology generates a compact pattern set across multiple fault models.
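The flow of Figure 7 can be summarized as the sketch below. The `run_atpg` and `fault_simulate` callables are stand-ins for tool runs rather than real commands, and the coverage targets are illustrative; only the ordering of the steps reflects the flow described above.

```python
def multi_model_flow(netlist, path_list, run_atpg, fault_simulate,
                     transition_target=0.85, stuck_at_target=0.985):
    """Generate a compact pattern set across path delay, transition, and
    stuck-at fault models (targets are illustrative)."""
    patterns = run_atpg(netlist, model="path_delay", targets=path_list)

    # Grade the path delay set for transition coverage; only top up if needed.
    trans_cov, trans_left = fault_simulate(patterns, netlist, model="transition")
    if trans_cov < transition_target:
        patterns += run_atpg(netlist, model="transition", targets=trans_left)

    # Grade everything for stuck-at coverage; top up with stuck-at ATPG last.
    sa_cov, sa_left = fault_simulate(patterns, netlist, model="stuck_at")
    if sa_cov < stuck_at_target:
        patterns += run_atpg(netlist, model="stuck_at", targets=sa_left)

    return patterns
```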


Figure 7. Efficient pattern generation for multiple-fault models. Starting from the netlist and path list, the flow generates path delay patterns, grades them for transition coverage, generates additional transition patterns, grades the set for stuck-at coverage, generates additional stuck-at patterns, and finishes with pattern optimization, yielding critical-path, transition, and stuck-at pattern sets.

Even with all these software compression techniques, we might not be able to compress the pattern set enough to fit it into the ATE memory. In those situations, a novel hardware compression technique called embedded deterministic test (EDT) provides a dramatic reduction in test data volume. With this technique, we can comfortably store the pattern sets for several fault models. Thus, we can achieve high test quality while simultaneously containing test costs.9

Case study

We use an industrial design to demonstrate how to apply the named-capture procedures. Specifically, we generate at-speed test patterns and describe a methodology to fit stuck-at and at-speed patterns into the tester memory without requiring multiple loads of the test data. The design has

■ 16 scan chains;
■ 70,178 scan cells;
■ 358 nonscan cells;
■ five internal clocks;
■ 1,836,403 targeted stuck-at faults; and
■ 2,196,668 targeted transition faults.

This chip was designed with an embedded programmable PLL that generates clocks for at-speed testing. The tester can hold no more than 15,000 test patterns. The test strategy requirements are as follows:


■ The final test set will include two subtest sets, one for testing the stuck-at faults and the other for testing the at-speed faults.
■ The highest priority is to get the best possible test coverage for the stuck-at faults. This means the test set for stuck-at faults cannot be truncated if the test data volume in the final test set is larger than the tester memory.
■ The transition fault model detects timing-related defects. The test coverage for the transition faults must be as high as possible, provided the final test set fits into the tester memory.
■ The broadside launch-and-capture method must be used to generate at-speed patterns.
■ All values at PIs must remain unchanged, and all POs are unobservable while applying the test patterns for the transition faults. This is necessary for this example because the tester is not fast enough to provide PI values and strobe POs at speed.

During test generation for the stuck-at faults, we use both clock domain analysis and multiple clock-compression techniques to generate the most compact test set. The ATPG tool generates 7,557 test patterns that achieve 96.56% stuck-at test coverage.

When generating test patterns for the transition faults, we target only the faults within a single clock domain. The ATPG tool detects these faults using the same clock for launch and capture. However, the transition fault test coverage can improve further if the tool considers faults that cross clock domains. Testing these faults requires sequences that use different launch and capture clocks. Because the ATPG tool cannot simultaneously detect faults in different clock domains using a clock sequence with the same launch and capture clock, the tool analyzes the fault list and classifies it according to clock domain, thus splitting up the fault list before test generation. This lets several test generation processes run in parallel, one for the faults in each clock domain, without increasing the test pattern count. For this experiment's design, we classified the transition faults into six groups: the first five groups contain the faults for each clock domain; the last group (unclassified) contains all faults that do not fall into a single clock domain.
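A sketch of the clock-domain split described above: each transition fault whose launch and capture cells sit in a single clock domain goes to that domain's group, and everything else is left unclassified and targeted last. The helper names and the way a fault reports its domains are assumptions for illustration only.

```python
from collections import defaultdict

def classify_faults_by_domain(faults, domains_of):
    """Group transition faults by clock domain; faults touching more than one
    domain fall into the 'unclassified' group."""
    groups = defaultdict(list)
    for fault in faults:
        domains = domains_of(fault)              # e.g., {"Clk1"} or {"Clk1", "Clk2"}
        key = next(iter(domains)) if len(domains) == 1 else "unclassified"
        groups[key].append(fault)
    return groups

# Each single-domain group can then drive its own ATPG run, constrained to the
# named-capture procedure for that domain; the runs are independent, so they
# can execute in parallel without increasing the overall pattern count.
```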


Table 1 gives the fault classification results.

Table 1. Transition fault distribution by clock domain.

Clock domain    No. of faults in domain    Test coverage (%)    No. of test patterns
Clk1            1,255,898                  85.21                2,165
Clk2            381,764                    72.45                24,799
Clk3            82,610                     81.21                1,024
Clk4            50,628                     77.73                638
Clk5            48,810                     83.03                297
Unclassified    376,958                    66.50                NA
                                           66.72                106
Total           2,196,668                  79.67                29,029

To test the faults in each clock domain, we defined five named-capture procedures. They constrain the clock sequences during test generation for each clock domain. As an example, the named-capture procedure used to test the faults in clock domain Clk1 consists of two cycles. All clocks except Clk1 are set to their off state in those two cycles, and Clk1 is pulsed in the launch and capture cycles. Moreover, driving PIs and measuring POs are disabled from ATPG in the second cycle so that at-speed events do not depend on high-speed tester interfaces.

The last two columns in Table 1 show the test generation results of applying the five named-capture procedures for each clock domain. Before generating the test patterns for the faults in the unclassified group, we first fault-simulated those faults against the test patterns generated for each clock domain. The resulting test coverage for the unclassified fault group was 66.5%. Next, we generated 106 additional test patterns targeting the remaining faults in the unclassified group, improving its test coverage by 0.22%. In summary, ATPG generated 29,029 test patterns, and the transition test coverage achieved was 79.67%.

Table 2 summarizes the test generation results for stuck-at and transition faults. The number of transition test patterns in Table 2 is the number after static compaction of all generated transition test patterns (static compaction removed 4,865 redundant transition test patterns).

Table 2. Test generation results before test pattern truncation.

Fault type    Test coverage (%)    No. of test patterns    Stuck-at fault simulation coverage (%)
Stuck-at      96.56                7,557                   NA
Transition    79.67                24,164                  89.4

Because the test patterns generated for the transition faults also detect many stuck-at faults, we must apply an independently created set of stuck-at test patterns to target the faults that the transition test patterns did not detect. Thus, the transition fault test patterns are fault-simulated for stuck-at test coverage, and 89.4% of the stuck-at faults are detected with the transition patterns. Next, we simulated faults in the original stuck-at test patterns and found that 3,881 test patterns were required to detect the remaining 7.16% of stuck-at faults covered by the full set of stuck-at patterns. Thus, 28,045 test patterns (24,164 + 3,881) were necessary to achieve the maximum possible stuck-at test coverage and the best possible transition test coverage.

For this example, the tester can hold only 15,000 test patterns, and stuck-at test coverage cannot be sacrificed, so the transition test set must be truncated to fit in the tester memory. To minimize the loss of transition test coverage due to test pattern truncation, we apply the following steps.

First, we apply a test pattern ordering technique10 to order the stuck-at test set based on the stuck-at fault model. We record the test coverage curve after applying the ordered test set (Cstuck). Second, we apply the same test pattern ordering technique to order the transition test patterns based on the transition fault model. Third, we fault-grade the ordered transition test set using the stuck-at fault model and record the resulting test coverage curve (Ctran).

Fourth, we must determine the number of stuck-at patterns (Nstuck) and transition patterns (Ntran) that reach the best possible transition test coverage while still reaching the maximum stuck-at coverage (96.56%). The combined number of stuck-at and transition patterns must be less than 15,000. We determine the pattern count mix by using the curves obtained from the first and third steps, as Figure 8 shows.


Figure 8. Stuck-at coverage for transition and stuck-at patterns. The plot overlays the stuck-at coverage curve of the ordered stuck-at patterns (Cstuck) with the stuck-at coverage grade of the ordered transition patterns (Ctran); the coverage TC reached by Ntran transition patterns locates the point on Cstuck from which the Nstuck supplemental stuck-at patterns are counted.

We begin by looking at a point on the Ctran curve that corresponds to a specific number of transition patterns (Ntran). This point also defines the stuck-at test coverage, TC, obtained if those Ntran patterns are run. Next, we go to the point on the Cstuck curve with the same test coverage. This represents the stuck-at coverage starting point once the Ntran transition patterns are applied. The number of stuck-at patterns from this point to the last stuck-at pattern is Nstuck. It approximates the number of stuck-at patterns needed to supplement the transition patterns and achieve the maximum stuck-at test coverage. If (Ntran + Nstuck) is greater than 15,000, we select a smaller Ntran. If the sum is considerably smaller than 15,000, we select a higher Ntran. We chose an Ntran of 8,307 for this experiment. The transition test coverage and stuck-at test coverage achieved by the 8,307 transition test patterns were 77.15% and 88.95%, respectively.

Fifth, once Ntran is determined, we target the stuck-at faults that the Ntran transition test patterns do not detect but that the original stuck-at test patterns do. We perform a new ATPG run to regenerate stuck-at test patterns that detect them. The number of newly generated test patterns is 3,372. (If performing an additional ATPG run is not desirable, we could instead simulate faults and select, from the original stuck-at test set, the test patterns that detect the remaining faults. For the design in this experiment, this would require approximately 4,234 test patterns from the original stuck-at test set.)

Finally, because the total number of test patterns (8,307 plus 3,372) is less than 15,000, we add 3,321 extra test patterns from the ordered transition fault test set. This improves the transition test coverage from 77.15% to 78.28%.
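The curve-based selection of Ntran and Nstuck can be expressed as a small search over the two recorded coverage curves. The sketch below assumes both curves are stored as monotone lists of cumulative stuck-at coverage, one entry per ordered pattern; it is an approximation of the procedure described above, not the exact tool flow, and the toy curves are invented.

```python
import bisect

def choose_pattern_mix(c_stuck, c_tran, budget):
    """Return (n_tran, n_stuck): the largest transition-pattern count whose
    stuck-at top-off still fits within the tester's pattern budget.
    c_stuck[i], c_tran[i]: cumulative stuck-at coverage after the first i+1
    ordered stuck-at / transition patterns."""
    for n_tran in range(len(c_tran), 0, -1):
        tc = c_tran[n_tran - 1]                    # stuck-at coverage the Ntran transition patterns reach
        start = bisect.bisect_left(c_stuck, tc)    # same coverage point on the stuck-at curve
        n_stuck = len(c_stuck) - start             # stuck-at patterns still needed to reach the maximum
        if n_tran + n_stuck <= budget:
            return n_tran, n_stuck
    return 0, min(budget, len(c_stuck))

# Toy curves (percent coverage per pattern), nothing like the real 15,000-pattern case:
c_stuck = [40, 60, 75, 85, 92, 95, 96, 96.4, 96.5, 96.56]
c_tran  = [30, 50, 62, 70, 76, 80, 84, 87, 88.5, 89.4]
print(choose_pattern_mix(c_stuck, c_tran, budget=12))   # -> (5, 7)
```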


The final test set for the design includes 15,000 test patterns. The stuck-at test coverage achieved was 96.56%, and the transition test coverage was 78.28%. Due to the test pattern truncation required to fit on the tester, 1.39% of the possible transition test coverage was lost. Because the at-speed test strategy in this case holds PI values constant, treats all POs as nonobservable, and ignores faults that cross clock domains during test generation for transition faults, the highest transition test coverage achieved was only 79.67% before test pattern truncation. However, the ATPG tool determined that 99.91% of all transition faults were classified, which means that most of the undetected faults were ATPG untestable. If we could remove these constraints, we could substantially increase the transition test coverage. However, it is impractical to change PI values and measure POs when using a low-cost tester to test high-frequency chips at speed.

SCAN-BASED AT-SPEED TESTING is becoming an efficient, effective technique for lowering the high cost of functional test in detecting timing-related defects. It's important to reiterate the cost benefit of using the on-chip programmable PLL circuitry for test purposes: these high-frequency clocks are available on chip instead of having to come from a sophisticated piece of test equipment.

Future work related to at-speed ATPG includes more-precise ATPG diagnostics of timing-related defects to facilitate timing-defect failure analysis, and strategies to improve the quality and effectiveness of at-speed ATPG. Researchers must determine how much yield loss occurs from at-speed tests of nonfunctional paths during launch-off-shift patterns, and what the value is of detecting timing defects in nonfunctional paths. Our results demonstrate that deterministic ATPG targeting each stuck-at fault site multiple times will reduce defects per million (DPM) more than single-fault detection. A follow-on to this work is to apply the multiple-detection ATPG technique to transition fault testing. There are also investigations into merging physical silicon information to identify potential defect locations for ATPG and to aid in diagnosing the physical properties that cause defects. ■

Acknowledgments

We are grateful for discussions and contributions from Cam L. Lu and Robert B. Benware of LSI Logic regarding efficient merging of transition and stuck-at pattern sets.

References
1. P. Nigh et al., "Failure Analysis of Timing and IDDQ-Only Failures from the SEMATECH Test Methods Experiments," Proc. Int'l Test Conf. (ITC 98), IEEE Press, 1998, pp. 43-52.
2. G. Aldrich and B. Cory, "Improving Test Quality and Reducing Escapes," Proc. Fabless Forum, Fabless Semiconductor Assoc., 2003, pp. 34-35.
3. R. Wilson, "Delay-Fault Testing Mandatory, Author Claims," EE Design, 4 Dec. 2002.
4. J. Gatej et al., "Evaluating ATE Features in Terms of Test Escape Rates and Other Cost of Test Culprits," Proc. Int'l Test Conf. (ITC 02), IEEE Press, 2002, pp. 1040-1048.
5. D. Belete et al., "Use of DFT Techniques in Speed Grading a 1GHz+ Microprocessor," Proc. Int'l Test Conf. (ITC 02), IEEE Press, 2002, pp. 1111-1119.
6. N. Tendolkar et al., "Novel Techniques for Achieving High At-Speed Transition Fault Test Coverage for Motorola's Microprocessors Based on PowerPC Instruction Set Architecture," Proc. 20th IEEE VLSI Test Symp. (VTS 02), IEEE CS Press, 2002, pp. 3-8.
7. J. Saxena et al., "Scan-Based Transition Fault Testing: Implementation and Low Cost Test Challenges," Proc. Int'l Test Conf. (ITC 02), IEEE Press, 2002, pp. 1120-1129.
8. J. Boyer and R. Press, "New Methods Test Small Memory Arrays," Proc. Test & Measurement World, Reed Business Information, 2003, pp. 21-26.
9. J. Rajski et al., "Embedded Deterministic Test for Low Cost Manufacturing Test," Proc. Int'l Test Conf. (ITC 02), IEEE Press, 2002, pp. 301-310.
10. X. Lin et al., "On Static Test Compaction and Test Pattern Ordering for Scan Design," Proc. Int'l Test Conf. (ITC 01), IEEE Press, 2001, pp. 1088-1097.

Xijiang Lin is a staff engineer for the Design-for-Test products group at Mentor Graphics. His research interests include test generation, fault simulation, test compression, fault diagnosis, and DFT. He has a PhD in electrical and computer engineering from the University of Iowa.

Ron Press is the technical marketing manager for the Design-for-Test products group at Mentor Graphics. His research interests include at-speed test, intelligent ATPG, and macrotesting. He has a BS in electrical engineering from the University of Massachusetts, Amherst.

Janusz Rajski is a chief scientist and the director of engineering for the Design-for-Test products group at Mentor Graphics. His research interests include DFT and logic synthesis. He has a PhD in electrical engineering from Poznan University of Technology, Poznan, Poland.

Paul Reuter is a staff engineer for the Design-for-Test products group at Mentor Graphics. His research interests include ATPG, BIST, SoC test, low-cost test solutions, and test data standards. He has a BS in electrical engineering from the University of Cincinnati.

Thomas Rinderknecht is a software development engineer for the Design-for-Test products group at Mentor Graphics. His research interests focus on efficient implementations of logic BIST. He has a BS in electrical engineering from Oregon State University.

Bruce Swanson is a technical marketing engineer for the Design-for-Test products group at Mentor Graphics. His research interests include at-speed test and compression techniques. He has an MS in applied information management from the University of Oregon.

Nagesh Tamarapalli is a technical marketing engineer for the Design-for-Test products group at Mentor Graphics. His research interests include all aspects of DFT, BIST, and ATPG, including defect-based testing and diagnosis. He has a PhD in electrical engineering from McGill University, Montreal.

Direct questions and comments about this article to Ron Press, Mentor Graphics, 8005 SW Boeckman Rd., Wilsonville, OR 97070; [email protected].

