Guest Editorial First Special Section of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT in the Area of VLSI Testing—Future of Semiconductor Test
Digital Object Identifier 10.1109/TIM.2005.857481

AS THE ELECTRONICS industry continues to grow, technology feature sizes continue to shrink while system complexity and levels of integration continue to increase, and the need for better and more efficient methods of testing to ensure the reliable operation of chips, the mainstay of today's digital systems, is felt ever more keenly. This complexity stems primarily from a shrinking ratio of externally accessible nodes (primary inputs and primary outputs) to inaccessible internal nodes in the resulting circuit design. With changes in technology, the number of input–output (I/O) pins has increased, but not at the same rate as circuit size, so the pin-to-gate ratio is continually decreasing. The cost of testing integrated circuits (ICs) is prohibitive, ranging from 35% to 55% of their total manufacturing cost, and testing a chip is also time-consuming, taking up to about one-half of the total design cycle time. The time available for manufacturing, testing, and marketing a product, on the other hand, continues to shrink, and global competition leads customers to demand lower cost and better quality. Achieving this higher quality at lower cost therefore requires improved testing techniques.

In recent times, a number of major innovations have occurred in the architecture and design of automatic test equipment (ATE), or IC testers. Examples include fly-by architecture (to overcome round-trip delay in gigahertz testing), built-in self-test (BIST) and design-for-testability (DFT) testers (for ICs with BIST and DFT), virtual testers (to validate a test program before tapeout), event testers (to link design and simulation technologies), remote operation of testers, and, most recently, open tester architectures. The test and measurement community is increasingly adopting design solutions for test resource partitioning and scheduling, with methods to take simulation test benches directly to ATE.

At the same time, the semiconductor industry is going through major changes of its own. These changes emanate from the base material, where advances in nanotechnology promise to usher in a new era of system integration: nanotechnology is transforming fabrication from the deposition of metals and insulators on a semiconductor crystal to the direct manipulation of molecules and atomic structures.
From the device architecture point of view, the transformation is from central processing units (CPUs), application-specific integrated circuits (ASICs), and memories to smart systems-on-chip (SOCs) containing radio-frequency (RF) and mixed-signal circuits, microelectromechanical system (MEMS) devices, microcontrollers, embedded memories, and embedded field-programmable gate arrays (FPGAs). While the design of these devices is being worked out, the major hurdle to mass production is that manufacturing test methods for them are unknown at this time. Such devices require the combined capabilities of high-end logic testers, mixed-signal and RF testers, and memory testers, plus additional capabilities that are traditionally not available on any of them. Because of the unprecedented volume of devices such as RF tags, they also require massive parallelism in test, which present test instruments are not designed to provide. To compensate for these deficiencies, the test and measurement community is increasingly adopting design solutions that facilitate test. Such developments have been demonstrated in various research laboratories and industries; however, practically no worthwhile contribution could be found in the current literature in these evolving areas, namely, the testing of nanodevices, their measurement methods, and test instrumentation for nanodevices. One of the major objectives of this Special Section was to fill some of these gaps in the increasingly complex arena of very large scale integration (VLSI) test methodologies. The unprecedented response to our Call for Papers from all segments of the test community worldwide is a brilliant testimony to the profound interest and increased activity of researchers, test engineers, and scientists in this very important field. All of the papers selected for this first part of our Special Section of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT are from noted authorities in fault tolerance, reliability, and test generation for digital circuits, including VLSI circuits, SOC integrated circuits comprising embedded cores, and test equipment.

The first paper in our Special Section, by Das et al., revisits response compaction in space and reports on simulation experiments on the ISCAS 89 full-scan sequential benchmark circuits using nonexhaustive (deterministic compact and pseudorandom) test sets in the design of space-efficient support hardware for BIST of VLSI circuits. The techniques take advantage of sequence characterization, which the authors earlier applied to response data compaction for the ISCAS 85 combinational benchmark circuits using ATALANTA, FSIM, and COMPACTEST, to realize space compression of the ISCAS 89 full-scan sequential benchmark circuits using the simulation programs ATALANTA, FSIM, and MinTest, under conditions of both stochastic independence and dependence of single and double line errors.
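As a rough illustration of the space-compaction idea behind this paper, the following Python sketch compacts each multibit output slice of a circuit's response into a single parity bit. The circuit width, responses, and error patterns are hypothetical, and this is a generic parity-tree compactor rather than the authors' design; it simply shows why a single line error is always detected while a double line error can be masked, the two cases studied in the paper.

```python
# Minimal sketch of XOR (parity-tree) space compaction of test responses.
# Illustrative model only, not the specific compactor designed by
# Das et al.; circuit width, responses, and errors are hypothetical.

from functools import reduce

def parity_compact(response_slice):
    """Compress one k-bit output slice to a single parity bit."""
    return reduce(lambda a, b: a ^ b, response_slice)

# Fault-free response of a hypothetical 4-output circuit over 3 test vectors.
golden = [[1, 0, 1, 1],
          [0, 0, 1, 0],
          [1, 1, 1, 0]]

# A single line error (one flipped output bit) always flips the parity bit,
# so it is detected; a double line error in the same slice cancels out and
# is masked -- the reason the paper analyzes single- and double-error
# behavior under stochastic independence and dependence.
single = [row[:] for row in golden]; single[0][2] ^= 1
double = [row[:] for row in golden]; double[0][1] ^= 1; double[0][3] ^= 1

for name, resp in [("single-error", single), ("double-error", double)]:
    detected = any(parity_compact(g) != parity_compact(r)
                   for g, r in zip(golden, resp))
    print(name, "detected" if detected else "masked")
# -> single-error detected
# -> double-error masked
```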
Our next paper, by Rajsuman, proposes an open-architecture ATE to test the next generation of SOC ICs. The architecture provides a framework for integrating software and instruments from different vendors into the ATE. The specifications of this framework, known as the OPENSTAR specifications, have been developed by the Semiconductor Test Consortium (STC). In the test system, each modular unit can be replaced with another from a different vendor, and the tester can be reconfigured to map the test resources to the requirements of the device under test (DUT). The only restriction on third-party modules is that each must adhere to the standard interfaces of the integrating framework, besides conforming to the OPENSTAR specifications. The paper describes in detail the basic structure of the test environment, module structure, calibration, diagnostics, synchronization, and system reconfigurability.

The third paper of the Special Section, by Nagano, describes a methodology and software for linking ATE and electronic design automation (EDA) tools to identify and diagnose failures at the system level. The software, Wafer Fail Layout Map, provides computer-aided design (CAD) navigation and correlation between tester failure data and design data. The proposed approach facilitates layout-level defect diagnosis at the individual chip level as well as at the wafer level.

The fourth paper in the series, by Yin et al., discusses a very wide bandwidth RF root-mean-square (rms) detector with a high dynamic range, based on the dynamic translinear principle, for integration in a BiCMOS process. Monte Carlo analysis is used to check the process design and mismatch variations. The authors describe the test setup and provide measurements on a prototype chip to demonstrate the feasibility of the circuit for on-chip embedded test.

The fifth paper in the Special Section, by Yoon and Eisenstadt, explores the use of on-chip or on-wafer loopback for verifying the performance of 5-GHz wireless local area network (WLAN) ICs. The paper provides the test block diagram, test circuit design, and characterization data for the subcircuits necessary to implement the transceiver loopback, though the research remains exploratory in nature.

The sixth paper in the Special Section, by Narayanan et al., considers built-in self-test methods for embedded multiport memory arrays. The authors first discuss the architecture of two-port memories and their fault models. They then extend the serial test mechanism to propose new algorithms that effectively reduce the hardware cost in a chip with many multiport memories. The understanding of serial interfacing gained in testing two-port memories supports the extension to n-port memories, n > 2. The proposed scheme has the advantages of high fault coverage, low area overhead, and acceptable test application time.
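To make the flavor of such memory-test algorithms concrete, here is a minimal Python sketch of the classical MATS+ march test run against a behavioral memory model. It is a generic illustration under an assumed stuck-at fault, not the serial multiport scheme of Narayanan et al.; the memory size and injected fault are hypothetical.

```python
# Minimal sketch of a march test (MATS+) against a behavioral memory model,
# illustrating the style of memory BIST algorithm the paper builds on; the
# serial multiport extensions of Narayanan et al. are not reproduced here.

class Memory:
    def __init__(self, size, stuck_at=None):
        self.cells = [0] * size
        self.stuck_at = stuck_at or {}        # {address: forced value}
    def write(self, addr, val):
        self.cells[addr] = self.stuck_at.get(addr, val)
    def read(self, addr):
        return self.stuck_at.get(addr, self.cells[addr])

def mats_plus(mem, size):
    """MATS+: {up(w0); up(r0,w1); down(r1,w0)} -- detects stuck-at faults."""
    for a in range(size):                     # up(w0)
        mem.write(a, 0)
    for a in range(size):                     # up(r0, w1)
        if mem.read(a) != 0:
            return a
        mem.write(a, 1)
    for a in reversed(range(size)):           # down(r1, w0)
        if mem.read(a) != 1:
            return a
        mem.write(a, 0)
    return None                               # no fault found

faulty = Memory(16, stuck_at={5: 0})          # hypothetical cell 5 stuck-at-0
print("fault at address:", mats_plus(faulty, 16))   # -> 5
```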
Our next paper of the Special Section, by Xiong et al., discusses a dual-mode BIST scheme that partitions the fixed (nonmovable) capacitance plates of a capacitive MEMS device. The technique divides the fixed capacitance plates on each side of the movable microstructure into three portions: one for electrostatic activation, and two equal portions for capacitance sensing. Because of this partitioning, the BIST technique can be applied to surface-micromachined and bulk-micromachined MEMS devices, as well as other technologies. The technique is verified on three typical capacitive MEMS devices, and simulation experiments show it to be an effective BIST solution for a range of capacitive MEMS devices.
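The essence of such a scheme is a symmetry comparison between the two sensing portions, which should present equal capacitance when the movable structure is healthy and centered. The sketch below models that check with a simple parallel-plate approximation; the geometry, gaps, and defect threshold are assumed values for illustration, not parameters from the paper.

```python
# Illustrative sketch of the symmetry check behind a dual-mode capacitive
# MEMS BIST: the two equal sensing portions should see identical
# capacitance when the movable structure is centered. Plate geometry and
# the defect threshold are assumed values, not taken from the paper.

EPS0 = 8.854e-12            # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    """Parallel-plate approximation C = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

def bist_symmetry_check(area, gap_left, gap_right, tol=0.05):
    """Flag a defect if the two sensing capacitances differ by > tol."""
    c1 = plate_capacitance(area, gap_left)
    c2 = plate_capacitance(area, gap_right)
    mismatch = abs(c1 - c2) / max(c1, c2)
    return mismatch > tol

area = 100e-6 * 100e-6      # hypothetical 100 um x 100 um sensing portion
print(bist_symmetry_check(area, 2.0e-6, 2.0e-6))   # healthy, centered -> False
print(bist_symmetry_check(area, 2.0e-6, 2.4e-6))   # tilt/stiction defect -> True
```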
The paper by Laknaur and Wang then presents a methodology for online self-testing of analog circuits implemented on field-programmable analog arrays (FPAAs). The FPAA circuit under test is first partitioned into subcircuits; each subcircuit is then tested by replicating it with programmable resources on the chip and comparing its outputs with those of the replica. The paper also studies error sources in the test circuitry and presents methods to improve its accuracy, along with experimental results.

The ninth paper in the Special Section, by Hashempour et al., considers a technique for making compressed test data sequences resilient to bit-flips. This resilience concerns the capability of a test data stream to tolerate bit-flips that may occur in the ATE, either in the electronic components of the load board or in the high-speed serial communication links between workstation and test head. Errors due to bit-flips can seriously degrade test quality, and the degradation is significant in variable-codeword techniques such as Huffman coding. To address the issue, the authors propose a variable-to-constant compression scheme, Tunstall coding. Simulation results on benchmark circuits substantiate the validity of the approach.

The tenth paper, also by Hashempour et al., discusses multisite testing of VLSI chips in a manufacturing environment. Multisite testing is analyzed and evaluated using DUT test parameters such as yield and average number of faults, as well as process features such as number of channels, fault coverage, and touchdown time for the test head. The presence of idle time periods and their impact on multisite test time are also analyzed in depth. The authors suggest that a hybrid scheme based on screening chips by BIST improves the performance of multisite test and allows better utilization of the channels in the head of an ATE.

The paper by Cazeaux et al. next discusses an on-chip circuit for measuring the jitter present at the output of phase-locked loops (PLLs) used to generate phase-synchronous frequency-multiplied clocks. The proposed circuit can test PLLs with output frequencies in the gigahertz range, while reducing area overhead and circuit complexity compared to existing techniques.

The twelfth paper, by Sahinoglu and Ramamoorthy, considers reliability block diagramming (RBD) tools to code, decode, and compute reliability in simple and complex networks. The authors propose three tools in their study. The first tool, using a novel compression algorithm, is capable of reducing any complicated series–parallel system to a visibly easy sequence of series and parallel blocks in a reliability block diagram. The second tool decodes and retrieves an already coded dependency relationship, using post-fix notation, for series–parallel or complex networks. The third tool is an approximate fast upper-bound (FUB) reliability computing algorithm designed for series–parallel networks and improved upon to perform state enumeration in a hybrid form.
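The core arithmetic that such RBD tools mechanize is the familiar series–parallel reduction, sketched below in Python; the nested-tuple encoding and the example reliabilities are ours for illustration and are unrelated to the authors' compression scheme or post-fix notation.

```python
# Minimal sketch of the series-parallel reduction that RBD tools automate:
# series blocks multiply reliabilities, parallel blocks multiply
# unreliabilities. Encoding and numbers are illustrative only.

from math import prod

def reliability(block):
    """block = ('s', [...]) series, ('p', [...]) parallel, or a float."""
    if isinstance(block, float):
        return block
    kind, parts = block
    if kind == 's':                                   # series: all must work
        return prod(reliability(b) for b in parts)
    if kind == 'p':                                   # parallel: one suffices
        return 1.0 - prod(1.0 - reliability(b) for b in parts)
    raise ValueError(kind)

# Two redundant 0.9 units in parallel, in series with a 0.95 unit.
system = ('s', [('p', [0.9, 0.9]), 0.95])
print(round(reliability(system), 4))                  # -> 0.9405
```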
Our final paper of the Special Section, by Kuo et al., first reviews the characteristics of the crosstalk pulse induced by an aggressor signal on a quiet trace. Then, based on the superposition principle, a jitter model is developed to calculate the time difference between the distortion-free and distorted edge crossings of the victim signal; the model is also extended to calculate the worst-case timing difference. The authors further develop an algorithm, with fast execution times, to generate the histogram distribution of bounded uncorrelated jitter (BUJ).
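As a first-order illustration of the superposition idea, the sketch below treats a crosstalk pulse of amplitude v_xt added to a victim edge of slew rate slew, which shifts the threshold crossing by roughly -v_xt/slew; the coupling model, amplitudes, and uniform aggressor alignment are assumptions made here for illustration, not the authors' model.

```python
# First-order sketch of superposition-based crosstalk jitter: a coupled
# pulse of amplitude v_xt added to a victim edge of slew rate SLEW shifts
# the threshold crossing by roughly -v_xt / SLEW. All numbers and the
# uniform aggressor-alignment model are assumptions for illustration.

import random

SLEW = 2.0e9        # victim edge slew rate, V/s (assumed)
V_XT = 0.05         # peak coupled pulse amplitude, V (assumed)

def edge_jitter(aggressor_alignment):
    """Timing shift of the victim crossing for one aggressor alignment."""
    v_noise = V_XT * aggressor_alignment   # crude coupling vs. alignment
    return -v_noise / SLEW                 # seconds

# Monte Carlo over random aggressor alignments yields a *bounded*
# uncorrelated jitter (BUJ) distribution: it never exceeds V_XT / SLEW.
samples = [edge_jitter(random.uniform(-1.0, 1.0)) for _ in range(10000)]
print("worst-case |jitter|: %.1f ps" % (max(abs(s) for s in samples) * 1e12))
print("bound V_XT/SLEW:     %.1f ps" % (V_XT / SLEW * 1e12))
```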
In concluding this Guest Editorial, we would like to express our sincerest gratitude to all those who helped make this Special Section a reality. First and foremost is the Editor-in-Chief of these TRANSACTIONS, Milton Slade, who has been a continuing source of encouragement, advice, and guidance; his immense help and valued suggestions came whenever, and in whatever form, they were solicited. Without him, our dream of this Special Section would never have materialized. We also specially acknowledge the contributions of Ms. Cam Ingelin, our Transactions Administrator, and Mr. Andrew Swartz, our IEEE Staff Editor, who helped in innumerable ways toward the ultimate success of this Special Section. Our particular thanks are also due to Dr. Emil M. Petriu, Dean, Faculty of Engineering, University of Ottawa, Ottawa, ON, Canada; Dr. Vincenzo Piuri of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT; Dr. Wen-Ben Jone of the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH; and Dr. Mehmet Sahinoglu, Chair, Department of Computer and Information Science, Troy State University, Montgomery, AL. Our heartfelt thanks go, finally, to the people behind the scenes, the anonymous reviewers, whose selfless, dedicated, and timely professional help contributed to the superb quality of this Special Section of our esteemed IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT.
SUNIL R. DAS, Guest Editor
School of Information Technology and Engineering, Faculty of Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada
Department of Computer and Information Science, Troy State University, Montgomery, AL USA

ROCHIT RAJSUMAN, Guest Editor
Advantest America R&D Center, Santa Clara, CA USA