
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 54, NO. 5, OCTOBER 2005

Revisiting Response Compaction in Space for Full-Scan Circuits With Nonexhaustive Test Sets Using Concept of Sequence Characterization

Sunil R. Das, Life Fellow, IEEE, Chittoor V. Ramamoorthy, Life Fellow, IEEE, Mansour H. Assaf, Member, IEEE, Emil M. Petriu, Fellow, IEEE, Wen-Ben Jone, Senior Member, IEEE, and Mehmet Sahinoglu, Senior Member, IEEE

Abstract—This paper revisits response compaction in space and reports results of simulation experiments on the ISCAS 89 full-scan sequential benchmark circuits using nonexhaustive (deterministic compact and pseudorandom) test sets in the design of space-efficient support hardware in the context of built-in self-testing (BIST) of VLSI circuits. The techniques used herein take advantage of sequence characterization, as utilized by the authors earlier in response data compaction for the ISCAS 85 combinational benchmark circuits using ATALANTA, FSIM, and COMPACTEST, to realize space compression of the ISCAS 89 full-scan sequential benchmark circuits using the simulation programs ATALANTA, FSIM, and MinTest, under conditions of both stochastic independence and dependence of single and double line errors.

Index Terms—ATALANTA, built-in self-test (BIST), COMPACTEST, FSIM, full-scan circuits, MinTest, nonexhaustive (deterministic compact and pseudorandom) test sets, sequence characterization, space compaction, VLSI circuits.

I. INTRODUCTION

AS THE electronics industry continues to grow, integration densities and system complexities continue to increase, and the necessity for better and more efficient methods of testing to ensure reliable operation of chips, the mainstay of today's many sophisticated devices and products, is being increasingly realized [1]–[57]. The very concept of testing has a relatively broad applicability, and finding the most effective testing techniques that can guarantee correct system performance is of immense practical significance.

Generally, the price of testing integrated circuits (ICs) is rather prohibitive, accounting for 35% to 55% of their total manufacturing cost. Besides, testing a chip is also time-consuming, taking up to about one-half of the total design cycle time. The amount of time available for manufacturing, testing, and marketing a product, on the other hand, is on the decline. Moreover, as a result of diminishing trade barriers and global competition, customers now demand products of better quality at lower cost. In order to achieve this higher quality at lower cost, testing methods evidently need to be improved.

The conventional testing techniques of digital circuits require application of test patterns generated by a test pattern generator (TPG) to the circuit under test (CUT) and comparison of the responses with known correct responses. For large circuits, because of the higher storage requirements for the fault-free responses, the customary test procedures thus become very expensive, and, hence, alternative approaches are required to minimize the amount of needed storage [45].

Built-in self-testing (BIST) is a design process that provides the capability of solving many of the problems otherwise encountered in testing digital systems. It combines the concepts of built-in test (BIT) and self-test (ST) in one. In BIST, test generation, test application, and response verification are all accomplished through built-in hardware, which allows different parts of a chip to be tested in parallel, thereby reducing the required testing time, besides eliminating the necessity for external test equipment. As the cost of testing is becoming the single major component of the manufacturing expense of a new product, BIST thus tends to reduce manufacturing and maintenance costs through improved diagnosis [1]–[53].

Several companies, such as Motorola, AT&T, IBM, and Intel, have incorporated BIST in many of their products [6], [8], [14]–[16]. AT&T, for example, has incorporated BIST into more than 200 of their IC chips. The three large programmable logic arrays (PLAs) and the microcode ROM in the Intel 80386 microprocessor were built-in self-tested [52]. The general-purpose microprocessor chip Alpha AXP 21164 and the Motorola 68020 microprocessor were also tested using BIST techniques [8], [52]. More recently, Intel, for its Pentium Pro architecture microprocessor, with its unique requirements of meeting very high production goals, superior performance standards, and impeccable test quality, put strong emphasis on its design-for-test (DFT) direction [16]. A set of constraints, however, limits Intel's ability to tenaciously explore DFT and test generation techniques, viz. full or partial scan or scan-based BIST [2]. AMD's K6 processor is built around a reduced instruction set computer (RISC) core, the enhanced RISC86 microarchitecture [15], and incorporates BIST into its DFT process.

Each RAM array of the K6 processor has its own BIST controller. BIST executes simultaneously on all of the arrays for a predefined number of clock cycles that ensures completion for the largest array. Hence, BIST execution time depends on the size of the largest array [2]. AMD uses a commercial automatic test pattern generation (ATPG) tool to create scan test patterns for stuck-at faults in the processor. The DFT framework for the 500-MHz IBM S/390 microprocessor utilizes a wide range of tests and techniques to guarantee superb reliability of components within a system [2]. Register arrays are tested through the scan chain, while nonregister memories are tested with programmable RAM BIST. Hewlett-Packard's PA8500 is a 0.25-μm superscalar processor that achieves fast but thorough test with its cache test hardware's ability to perform March tests, an effective way to detect several kinds of functional faults [14]. Digital's Alpha 21164 processor combines both structured and ad hoc DFT solutions, for which a combination of hardware and software BIST was adopted [2]. Sun Microsystems' UltraSparc processor incorporates several DFT constructs as well; the achievement of its quality performance coupled with reduced chip area conflicts with the design requirement of being easy to debug, test, and manufacture [2].

BIST is widely used to test embedded regular structures that exhibit a high degree of periodicity, such as memory arrays (SRAMs, ROMs, FIFOs, and registers). Such circuits do not require complex extra hardware for test generation and response compaction, and including BIST in them can guarantee high fault coverage with zero aliasing. Unlike regular circuits, random-logic circuits cannot be adequately tested with BIST techniques alone, since generating adequate on-chip test sets using simple hardware is a difficult task. Moreover, since test responses generated by random-logic circuits seldom exhibit regularity, it is extremely difficult to ensure zero-aliasing compaction. Therefore, random-logic circuits are usually tested using a combination of BIST, scan design techniques, and external test equipment.

Fig. 1. Block diagram of the BIST environment.

A typical BIST environment, as shown in Fig. 1, uses a TPG that sends its outputs to a CUT, and output streams from the CUT are fed into a test data analyzer. A fault is detected if the observed response differs from the response of the fault-free circuit. The test data analyzer is comprised of a response compaction unit (RCU), storage for the fault-free responses of the CUT, and a comparator.
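The flow just described can be made concrete with a short sketch. The following Python fragment is illustrative only: the names (tpg, rcu, run_bist) and the toy CUT are hypothetical, and the bitwise-XOR signature merely stands in for any response compaction unit.

```python
# Minimal model of the Fig. 1 BIST loop: TPG -> CUT -> RCU -> comparator.
# All names are hypothetical; the XOR-fold RCU stands in for any compactor.
import random
from typing import Callable, List

def tpg(num_patterns: int, width: int, seed: int = 42) -> List[int]:
    """Pseudorandom test pattern generator (an LFSR in hardware)."""
    rng = random.Random(seed)
    return [rng.getrandbits(width) for _ in range(num_patterns)]

def rcu(responses: List[int]) -> int:
    """Response compaction unit: bitwise-XOR (parity) signature here."""
    sig = 0
    for r in responses:
        sig ^= r
    return sig

def run_bist(cut: Callable[[int], int], golden_signature: int,
             num_patterns: int, width: int) -> bool:
    """Apply patterns, compact the responses, compare signatures."""
    responses = [cut(p) for p in tpg(num_patterns, width)]
    return rcu(responses) == golden_signature

# Example: a toy 4-bit CUT and a stuck-at-faulty version of it.
good_cut = lambda p: (p ^ (p >> 1)) & 0xF
bad_cut = lambda p: good_cut(p) | 0x1           # output line 0 stuck-at-1
golden = rcu([good_cut(p) for p in tpg(64, 4)])  # stored fault-free signature
print(run_bist(good_cut, golden, 64, 4))         # True (signatures match)
print(run_bist(bad_cut, golden, 64, 4))          # False, unless aliasing occurs
```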

In order to reduce the amount of data represented by the fault-free and faulty CUT responses, data compression is used to create signatures (short binary sequences) from the CUT and its corresponding fault-free circuit. The signatures are compared, and faults are detected if a match does not occur.

BIST techniques may be used during normal functional operating conditions of the unit under test (online testing), as well as when a system is not carrying out its normal functions (offline testing). In cases where detecting real-time errors is not that important, systems, boards, and chips can be tested in offline BIST mode. BIST techniques use pseudoexhaustive or pseudorandom test patterns, or sometimes on-chip storing of reduced or compact test sets. Today, testing logic circuits exhaustively is seldom done, since only a few test vectors are needed to ensure full fault coverage for single stuck-line faults [45], [52]. Reduced-pattern test sets can be generated using existing algorithms such as FAN, among others. Built-in test generators can often generate such reduced test sets at low cost, making BIST techniques suitable for on-chip self-testing.

This paper focuses primarily on the response compaction process of BIST, which basically amounts to realizing appropriate means of reducing the test data volume coming from the CUT to a signature. Instead of comparing bit-by-bit the fault-free responses to the observed outputs of the CUT, as in conventional testing methods, the observed signature is only compared to the correct one, thus reducing the storage needed for the correct circuit responses [45], [52], [53]. Response compaction in BIST is carried out through a space compaction unit followed by time compaction. In general, m input sequences coming from a CUT are fed into a space compactor, providing n output streams of bits such that n < m; most often, the m test responses are compressed into only one sequence (n = 1). Space compaction brings a solution to the problem of achieving high-quality BIST of complex chips without the necessity of monitoring a large number of internal test points, thereby reducing testing time and area overhead by merging the test sequences coming from these internal test points into a single stream of bits. This single bit stream of length l is ultimately fed into a time compactor, and a shorter sequence of length s (s < l) is obtained at the output. The extra logic representing the compaction network, however, must be as simple as possible, to be easily embedded within the CUT, and should not introduce signal delays that affect either the test execution time or the normal functionality of the circuit being tested. Moreover, the length of the signature must be as short as possible in order to minimize the amount of memory needed to store the fault-free response signatures. Also, signatures derived from faulty output responses and their corresponding fault-free signatures should not be the same, which unfortunately is not always the case.

A fundamental problem with compaction techniques is error masking or aliasing [45], [52], [53], which occurs when the signatures from faulty output responses map into the fault-free signatures, usually calculated by identifying a good circuit, applying test patterns to it, and then having the compaction unit generate the fault-free references. Aliasing causes loss of information, which affects the test quality of BIST and reduces the fault coverage (the number of faults detected, after compaction, over the total number of faults injected). Several methods have been suggested in the literature for computing the aliasing probability; the exact computation of this aliasing probability is known to be an NP-hard problem [32], [52]–[54]. In practice, high fault coverage, over 99%, is required, and, hence, any compression technique that maintains a higher percentage of error coverage information is considered worthy of investigation.

This paper revisits the general problem of designing space-efficient support hardware for built-in self-testing of full-scan VLSI circuits using nonexhaustive (deterministic compact and pseudorandom) test sets. The techniques utilized in achieving the objective are primarily based on identifying certain inherent properties of the test data responses of the CUT, in conjunction with the knowledge of their failure probabilities. A major issue in realizing efficient space compaction is the development of techniques that are simple, suitable for on-chip self-testing, require low area overhead, and have little adverse impact on the CUT performance. With this perspective in focus, some well-known concepts, viz. the notions of Hamming distance, sequence weights, and derived sequences, as developed earlier by the authors in sequence characterization [42], [45]–[47], [53], are utilized in this paper for deriving the optimal mergeability criteria of a pair of output sequences under conditions of both stochastic independence and stochastic dependence of line errors. The techniques used herein achieve a high measure of fault coverage for single stuck-line faults, with low CPU simulation time and acceptable area overhead, as evident from extensive simulation runs on the ISCAS 89 full-scan sequential benchmark circuits, under conditions of stochastic independence as well as stochastic dependence of single and double line errors, using the fault simulation programs ATALANTA, FSIM, and MinTest.

II. SEQUENCE CHARACTERIZATION AND DESIGN OF COMPACTION TREES BASED ON STOCHASTIC INDEPENDENCE AND STOCHASTIC DEPENDENCE OF LINE ERRORS

The main idea in space compaction is to compress the m functional test outputs of the CUT, possibly into a single test output line, to derive the CUT signature without incurring substantial loss of information in the process [33]–[48], [50]–[53]. Space compression is generally accomplished using XOR gates connected in cascade or in a tree structure. In this paper, however, we use as our framework AND(NAND), OR(NOR), and XOR(XNOR) operators connected in a combination of both cascade and tree structures (cascade-tree) for realizing space compression networks. The logic functions selected to construct the compaction trees are solely determined by the characteristics of the sequences that are inputs to the gates, based on some optimal mergeability criteria proposed earlier by the authors [42], [45]–[47], [53]. The basic theme of the suggested approaches is to select appropriate logic gates to merge two candidate output lines of the CUT under conditions of both stochastic independence and stochastic dependence of single and double line errors, using sequence characterization and related concepts developed by the authors. In the following, we briefly introduce the mathematical basis of these approaches with appropriate notations and terminologies.
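As a purely structural illustration of such a cascade-tree compactor, the sketch below (Python; all names hypothetical) folds m output streams pairwise into a single stream. The merging gate is fixed to XOR here only for illustration; in the proposed method, the gate at each node is chosen per pair by the mergeability criteria developed in the following subsections.

```python
# Structural sketch of a cascade-tree space compactor: m CUT output
# streams are merged pairwise until a single stream remains. The gate
# is fixed to XOR here only for illustration; the paper's method picks
# AND(NAND), OR(NOR), or XOR(XNOR) per pair.
from typing import Callable, List

def merge(xi: str, xj: str, gate: Callable[[int, int], int]) -> str:
    """Merge two equal-length bit streams through a two-input gate."""
    return "".join(str(gate(int(a), int(b))) for a, b in zip(xi, xj))

def compact(streams: List[str],
            gate: Callable[[int, int], int] = lambda a, b: a ^ b) -> str:
    """Fold m streams into one by repeated pairwise merging (tree order)."""
    while len(streams) > 1:
        nxt = [merge(streams[i], streams[i + 1], gate)
               for i in range(0, len(streams) - 1, 2)]
        if len(streams) % 2:          # odd stream is carried to the next level
            nxt.append(streams[-1])
        streams = nxt
    return streams[0]

# Four 8-bit output streams compressed into one stream for time compaction.
outs = ["11010010", "10110100", "01011010", "01101001"]
print(compact(outs))
```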

A. Hamming Distance, Sequence Weights, and Derived Sequences

Let X_i and X_j represent a pair of output sequences of a CUT of length l each, where the length l is the number of bit positions in X_i and X_j. Let d(X_i, X_j) represent the Hamming distance between X_i and X_j (the number of bit positions in which X_i and X_j differ).

Definition 1: The 1-weight, denoted by w1(X_i), of a sequence X_i is the number of 1's in the sequence. Similarly, the 0-weight, denoted by w0(X_i), of a sequence X_i is the number of 0's in the sequence.

Example 1: Consider an output sequence pair (X_i, X_j) with X_i = 11010010 and X_j = 10110100. The length of both output streams is l = 8, and the Hamming distance between X_i and X_j is d(X_i, X_j) = 4. The 1-weights and 0-weights of X_i and X_j are w1(X_i) = w1(X_j) = 4 and w0(X_i) = w0(X_j) = 4, respectively.

Property 1: For any sequence X_i, it can be shown that w1(X_i) + w0(X_i) = l. For the output sequence pair given in Example 1 above, we have w1(X_i) = 4 and w0(X_i) = 4; therefore, it is obvious that w1(X_i) + w0(X_i) = 8 = l.

Definition 2: Consider an output sequence pair (X_i, X_j) of equal length l. Then the sequence pair derived by discarding the bit positions in which the two sequences differ (each discarded position indicated by a dash, -) is called its derived sequence pair (D(X_i), D(X_j)). In this paper, we will denote the derived sequence of a sequence X_i by D(X_i), its 1-weight by w1(D(X_i)), and its 0-weight by w0(D(X_i)).

Example 2: For (X_i, X_j) given in Example 1, D(X_i) = D(X_j) = 1--10--0. The 1-weights and 0-weights of the derived pair are w1(D(X_i)) = w1(D(X_j)) = 2 and w0(D(X_i)) = w0(D(X_j)) = 2.

Property 2: For any derived sequence pair (D(X_i), D(X_j)), we have w1(D(X_i)) = w1(D(X_j)) and w0(D(X_i)) = w0(D(X_j)). As shown in Example 2, for the same output sequence pair given in Example 1, we have w1(D(X_i)) = w1(D(X_j)) = 2 and w0(D(X_i)) = w0(D(X_j)) = 2.

By the aforementioned property, when no ambiguity arises, we will denote the 1-weight and 0-weight for the derived sequence pair simply by w1 and w0, respectively. The length of the derived sequence pair will be denoted by l', where l' = l − d(X_i, X_j) = w1 + w0. Also, since d(D(X_i), D(X_j)) is always zero, we will simply use d to denote d(X_i, X_j). That is, for (X_i, X_j) in Example 1 above, l = 8 and d = 4; therefore, l' = 4 and w1 + w0 = 4.

Property 3: For every distinct pair of output sequences at the output of a CUT, the corresponding derived pair of equal length is distinct. Two derived sequence pairs may have the same length l', but they are still distinct and not identical.

Property 4: Two derived sequence pairs (D(X_i), D(X_j)) and (D(X_k), D(X_m)) of the original output sequence pairs (X_i, X_j) and (X_k, X_m), respectively, having the same length l' are identical if and only if they agree in both their retained bit values and their dash positions. Consider X_1, X_2, X_3, and X_4 as four output response streams of a CUT, with X_1 = 11010010, X_2 = 10110100, X_3 = 01011010, and X_4 = 01101001. Let (X_1, X_2) and (X_3, X_4) be grouped to form two distinct output pairs, and let (D(X_1), D(X_2)) and (D(X_3), D(X_4)) be their corresponding derived sequence pairs, respectively, such that D(X_1) = D(X_2) = 1--10--0 and D(X_3) = D(X_4) = 01--10--. Here, both the derived sequence pairs have the same length l' = 4, but they are not identical.
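These definitions translate directly into a few lines of code. The following sketch (illustrative only; helper names hypothetical) computes the Hamming distance, the sequence weights, and the derived sequence for bit streams represented as strings, and checks the relation l' = l − d = w1 + w0 on the pair of Example 1.

```python
# Illustrative sketch of the sequence-characterization measures of Sec. II-A.
# Sequences are modeled as equal-length '0'/'1' strings; names hypothetical.

def hamming_distance(xi: str, xj: str) -> int:
    """d(Xi, Xj): number of bit positions in which Xi and Xj differ."""
    return sum(a != b for a, b in zip(xi, xj))

def one_weight(x: str) -> int:
    """w1(X): number of 1's in the sequence (Definition 1)."""
    return x.count("1")

def zero_weight(x: str) -> int:
    """w0(X): number of 0's; w1(X) + w0(X) = l (Property 1)."""
    return x.count("0")

def derived(xi: str, xj: str) -> str:
    """Derived sequence (Definition 2): differing positions become dashes.
    Both members of the derived pair are identical, so one string suffices."""
    return "".join(a if a == b else "-" for a, b in zip(xi, xj))

if __name__ == "__main__":
    xi, xj = "11010010", "10110100"     # Example 1
    d = hamming_distance(xi, xj)        # 4
    dx = derived(xi, xj)                # '1--10--0' (Example 2)
    w1, w0 = one_weight(dx), zero_weight(dx)   # 2, 2
    assert w1 + w0 == len(xi) - d       # l' = l - d = w1 + w0
    print(d, dx, w1, w0)
```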


However, in general, it is not expected that any two distinct pairs of sequences at the output of a CUT will be identical, and, hence, the possibility of the corresponding derived pairs being identical is also remote. We can readily extend the concept of 1-weight (usually denoted as first-order 1-weight) to Nth-order 1-weight, that of 0-weight (usually denoted as first-order 0-weight) to Nth-order 0-weight, and that of derived sequences (usually denoted as second-order derived sequences) to Nth-order derived sequences to deal with sequences X_1, X_2, ..., X_N [42], [53]; but since these concepts are not used in the present paper, their discussion is omitted.

B. Optimal Mergeability Criteria and Gate Selection Under Stochastic Independence of Line Errors

Here, we briefly summarize the key results concerning optimal pairwise mergeability of response data at the CUT output in the design of space compactors under the condition of stochastic independence of single and double line errors. These are provided mainly in the form of certain definitions and key theorems without proofs, the details of which can be found in [45], [53].

Definition 3: The maximum expectation of error, E_max, for two-input logic functions, given two input sequences of length l, is defined as the sum of all single line errors (2l in number) and all double line errors (l in number), that is, E_max = 3l. The definition assumes stochastic independence of single and double line errors and thus includes all of the possibilities of error occurrence.

The concept of stochastic independence [45], [58] is that, given a set of events, these events are individually determined by properties that are in no way interrelated. Mathematically, we define two events A and B to be stochastically independent if P(AB) = P(A)P(B), P being the probability of an event. We agree to accept this definition even if P(A) = 0 or P(B) = 0. In a practical situation, two events may be taken to be stochastically independent if there is no causal relation, either temporal or spatial, between them, that is, if they are causally independent [58]. We later enlarge the scope of our discussion, deviating from the assumption of independence of single and double line errors at the CUT output, to include the situation where their appearance is dependent on some causal relation, so that we can assign distinct probabilities to their occurrence in different lines.

Now, let S(g) be the number of single line errors and D(g) be the number of double line errors detected at the output of a gate g.

Definition 4: The maximum detectable error, E_d(g), for two-input logic functions, given an input sequence pair of length l, when the gate type is g, is defined as E_d(g) = S(g) + D(g).

For a sequence pair of length l with Hamming distance d and derived weights w1 and w0, counting the detected single and double line errors for each gate family gives E_d(AND/NAND) = (2w1 + d) + (w1 + w0) = 3w1 + w0 + d, E_d(OR/NOR) = (2w0 + d) + (w1 + w0) = 3w0 + w1 + d, and E_d(XOR/XNOR) = 2l.

Theorem 1: For an output sequence pair (X_i, X_j) of length l and Hamming distance d, an OR(NOR) gate or an AND(NAND) gate may be used for optimal merger if the maximum number of errors detected is E_d = 3w + w' + d, where w = max(w1, w0) and w' = min(w1, w0).

Theorem 2: An output sequence pair (X_i, X_j) of length l and Hamming distance d is optimally mergeable with an XOR(XNOR) gate if the maximum number of errors detected by the XOR(XNOR) gate, which is 2l, is greater than or equal to the maximum number of errors detected by an AND(NAND) gate, E_d(AND/NAND), or by an OR(NOR) gate, E_d(OR/NOR), when used for merger.

Theorem 3: An AND(NAND) gate may be selected for optimally merging an output sequence pair (X_i, X_j) of length l with Hamming distance d, if w1 ≥ w0, where E_d(AND/NAND) = (2w1 + d) single line errors + (w1 + w0) double line errors = 3w1 + w0 + d single and double line errors.

Theorem 4: An OR(NOR) gate may be selected for optimally merging an output sequence pair (X_i, X_j) of length l with Hamming distance d, if w0 ≥ w1, where E_d(OR/NOR) = (2w0 + d) single line errors + (w1 + w0) double line errors = 3w0 + w1 + d single and double line errors. However, if w1 = w0, either an OR(NOR) gate or an AND(NAND) gate may be used for optimal merger.

Theorem 5: An output sequence pair (X_i, X_j) of length l and Hamming distance d is optimally mergeable with an AND(NAND) gate if E_d(AND/NAND) is maximized over all sequence pairs for an m-output CUT.

Theorem 6: An output sequence pair (X_i, X_j) of length l and Hamming distance d is optimally mergeable with an OR(NOR) gate if E_d(OR/NOR) is maximized over all sequence pairs for an m-output CUT.

Theorem 7: An output sequence pair (X_i, X_j) of length l is optimally mergeable with an AND(NAND) gate if w1 ≥ w0 + d.

Theorem 8: An output sequence pair (X_i, X_j) of length l is optimally mergeable with an OR(NOR) gate if w0 ≥ w1 + d.
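Operationally, the theorems above amount to comparing three detectable-error counts. A minimal sketch follows (Python; names hypothetical; the E_d expressions are the ones derived above), in which select_gate returns the gate family with the largest maximum detectable error for a candidate pair.

```python
# Gate selection under stochastic independence (Theorems 1-8, Sec. II-B).
# E_d(g) = S(g) + D(g); expressions follow the counts derived above.

def characterize(xi: str, xj: str):
    """Return (l, d, w1, w0): length, Hamming distance, derived weights."""
    l = len(xi)
    d = sum(a != b for a, b in zip(xi, xj))
    agree = [a for a, b in zip(xi, xj) if a == b]
    return l, d, agree.count("1"), agree.count("0")

def detectable_errors(xi: str, xj: str) -> dict:
    """Maximum detectable error E_d for each two-input gate family."""
    l, d, w1, w0 = characterize(xi, xj)
    return {
        "AND/NAND": 3 * w1 + w0 + d,   # (2w1 + d) singles + (w1 + w0) doubles
        "OR/NOR":   3 * w0 + w1 + d,   # (2w0 + d) singles + (w1 + w0) doubles
        "XOR/XNOR": 2 * l,             # all 2l single errors, no doubles
    }

def select_gate(xi: str, xj: str) -> str:
    """Pick the gate family maximizing E_d (Theorems 7-8 in effect)."""
    ed = detectable_errors(xi, xj)
    return max(ed, key=ed.get)

# For the pair of Example 1 (w1 = w0 = 2, d = 4, l = 8):
# E_d = {AND/NAND: 12, OR/NOR: 12, XOR/XNOR: 16} -> XOR(XNOR) is optimal.
print(select_gate("11010010", "10110100"))
```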

C. Effects of Error Probabilities in Selection of Gates for Optimal Merger

In the case of stochastic dependence of line errors at the CUT output, we can assign distinct probabilities of error occurrence to different lines. The gate selection here is primarily based on optimal mergeability criteria established utilizing the properties of Hamming distance, sequence weights, and derived sequences, together with the concept of the detectable error probability estimate for a two-input logic function, under the condition of stochastic dependence of single and double line errors at the output of a CUT. Li and Robinson [35] defined the detectable error probability estimate for a two-input logic function g, given two input sequences of length l, as E_p(g) = p1 S(g) + p2 D(g), where p1 is the probability of a single error effect felt at the output of the CUT, p2 is the probability of a double error effect felt at the output of the CUT, S(g) is the number of single line errors at the output of gate g if gate g is used for merger, and D(g) is the number of double line errors at the output of gate g if gate g is used for merger. Based on the computation of the detectable error probability estimate of Li and Robinson as given above, we deduce the following results that profoundly influence the selection of gates for optimal merger.

Theorem 9: For an output sequence pair (X_i, X_j) of length l and Hamming distance d, an AND(NAND) gate is preferable to an XOR(XNOR) gate for optimal merger if p2(w1 + w0) ≥ p1(2w0 + d).

Theorem 10: For an output sequence pair (X_i, X_j) of length l and Hamming distance d, an OR(NOR) gate is preferable to an XOR(XNOR) gate for optimal merger if p2(w1 + w0) ≥ p1(2w1 + d).

Theorem 11: For an output sequence pair (X_i, X_j) of length l and Hamming distance d, an AND(NAND) gate is preferable to an OR(NOR) gate for optimal merger if w1 ≥ w0.

Theorem 12: For two gates g1 and g2, g1 is preferable to g2 for optimal merger, if and only if, E_p(g1) ≥ E_p(g2), that is, p1 S(g1) + p2 D(g1) ≥ p1 S(g2) + p2 D(g2).
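The same counts, weighted by the line-error probabilities, give the stochastic-dependence criterion. The sketch below (Python; names hypothetical; p1 and p2 as defined above) evaluates the Li and Robinson estimate E_p(g) = p1 S(g) + p2 D(g) for each gate family and applies Theorem 12 to order them.

```python
# Gate preference under stochastic dependence (Theorems 9-12, Sec. II-C):
# Li-Robinson detectable error probability estimate E_p(g) = p1*S(g) + p2*D(g).

def single_double_counts(xi: str, xj: str) -> dict:
    """(S(g), D(g)) for each gate family, from the counts of Sec. II-B."""
    l = len(xi)
    d = sum(a != b for a, b in zip(xi, xj))
    agree = [a for a, b in zip(xi, xj) if a == b]
    w1, w0 = agree.count("1"), agree.count("0")
    return {
        "AND/NAND": (2 * w1 + d, w1 + w0),
        "OR/NOR":   (2 * w0 + d, w1 + w0),
        "XOR/XNOR": (2 * l, 0),          # detects no double line errors
    }

def preferred_gate(xi: str, xj: str, p1: float, p2: float) -> str:
    """Theorem 12: g1 is preferable to g2 iff E_p(g1) >= E_p(g2)."""
    counts = single_double_counts(xi, xj)
    ep = {g: p1 * s + p2 * dd for g, (s, dd) in counts.items()}
    return max(ep, key=ep.get)

# Raising the double-error probability p2 shifts the preference away from
# XOR(XNOR), which detects no double line errors (cf. Theorems 9 and 10).
print(preferred_gate("11010010", "10110100", p1=0.66, p2=0.33))  # XOR/XNOR
print(preferred_gate("11010010", "10110100", p1=0.10, p2=0.90))  # AND/NAND
```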

III. OUTPUT RESPONSE COMPACTION—AN OVERVIEW

Whenever the test vectors are generated on-chip and applied to the CUT, the corresponding output responses of the CUT are generated as well. If the circuit is fault-free, the resulting output responses match the corresponding fault-free output responses of the circuit. However, when the CUT has faults and their effects are propagated to the CUT outputs, an important objective is to detect those faults, if possible, through testing. In theory, to detect faults in a given CUT, we need to examine each of the produced responses and compare it with the expected CUT response. However, in BIST, it may not be possible to store all expected output responses and make an exhaustive comparison with every faulty CUT response, due to the constraints imposed by excessive hardware. This brings into the picture a very important concept, the concept of response compaction.

Response compaction in BIST is usually carried out through a space compaction unit followed by time compaction. In general, m input sequences coming from a CUT are fed into a space compactor, providing n output streams such that n < m; most often, the m test responses are compressed into only one sequence (n = 1). Space compaction provides a solution to the problem of realizing high-quality BIST of complex chips without the necessity of monitoring a large number of internal test points, reducing testing time and area overhead by merging the test sequences from these internal test points into a single stream of bits. This single bit stream of length l is then fed into a time compactor, and finally a shorter sequence of length s, called the signature, is obtained at the output. However, sometimes a signature generated from a response sequence carrying fault information may map into the same signature as the good sequence, resulting in fault masking or aliasing errors [27], [45], [52], [53]. Aliasing affects the test quality of BIST, reducing the overall fault coverage. The choice of a compression technique is mainly influenced by hardware considerations and the loss of effective fault coverage due to fault masking or aliasing.

In this section, we first briefly review some of the important test compaction techniques in space for BIST that have been proposed in the literature, focusing only on some of their relevant properties, such as fault coverage, area overhead, and error masking probability. Several efficient time compaction schemes exist as well, and these are also considered herein.

Some of the well-known space compression techniques include parity tree space compaction, hybrid space compression, dynamic space compression, quadratic functions compaction, programmable space compaction, and cumulative balance testing. The parity tree compaction circuits [33], [36], [42], [45], [46] are comprised of only XOR gates. An XOR gate has very good signal-to-error propagation properties that are quite suitable for space compression. The functions realized by parity tree compactors are linear and of the form f = x1 ⊕ x2 ⊕ ⋯ ⊕ xm. The parity tree space compactor propagates all errors that appear on an odd number of its inputs; thereby, errors that appear on an even number of parity tree circuit inputs are masked. As was experimentally demonstrated, most single stuck-line faults are detected in parity tree space compaction using pseudorandom input test patterns and deterministic reduced test sets [45], [53].

The hybrid space compression (HSC) technique, originally proposed by Li and Robinson [35], uses AND, OR, and XOR logic gates as output compaction tools to compress the multiple outputs of a CUT into a single line. The compaction network is constructed based on the detectable error probability estimates E_p(g). A modified version of the HSC method, called dynamic space compression (DSC), was subsequently proposed by Jone and Das [38]. Instead of assigning static values for the probabilities of single errors (p1) and double errors (p2), the DSC method dynamically estimates those values from the CUT structure during the computation process. The values of p1 and p2 are determined based on the number of single lines and shared lines connected to an output. A general theory to predict the performance of the space compression techniques was also developed by the authors. The experimental evidence shows that the information loss, combined with syndrome counting as time compactor, is between 0% and 12.7%. The DSC was later improved, in which some circuit-specific information was used to calculate the probabilities [39]. However, neither HSC nor DSC provides an adequate measure of fault coverage, because they both rely on estimates of error detection probabilities.

The quadratic functions compaction (QFC) uses quadratic functions to construct the space compaction circuits and was shown to reduce aliasing errors [37]. In QFC, the observed output responses of the CUT are processed and compressed in a serial fashion based on a functionality of the quadratic form Z = X_1 Y_1 + X_2 Y_2 + ⋯ + X_s Y_s computed over a Galois field, where X_i and Y_i are blocks of length k, for i = 1, 2, ..., s. An approach termed programmable space compaction (PSC) was recently proposed for designing low-cost space compactors that provide high fault coverage [34]. In PSC, circuit-specific space compactors are designed to increase the likelihood of error propagation. However, PSC does not guarantee zero aliasing. A compaction circuit that minimizes aliasing and has the lowest cost can only be found by exhaustively enumerating all m-input Boolean functions, where m represents the number of primary outputs of the CUT. Of late, a new class of space compactors based on parity tree circuits was proposed by Chakrabarty and Hayes [52]. The method is based on multiplexed parity trees (MPTs) and introduces zero aliasing. The multiplexed parity trees perform space compaction of test responses by combining the error propagation properties of multiplexers and parity trees through multiple time-steps. The authors claim that the associated hardware overhead is moderate and that very high fault coverage is obtained for faults in the CUT, including even those in the compactor.

Now, we briefly examine some time compression methods, like ones counting, syndrome testing, transition counting, signature analysis, and others. In ones counting [21], the number of 1's in the binary circuit response stream is used as the signature. The hardware that represents the compaction unit consists of a simple counter and is independent of the CUT; it only depends on the nature of the test response. The signature values do not depend on the order in which the input test patterns are applied to the CUT. In syndrome counting [24], all 2^n input patterns are exhaustively applied to an n-input combinational circuit. The syndrome S, which is given by the normalized number of 1's in the response stream, is defined as S = K/2^n, K being the number of fundamental products of the function being implemented by the single-output CUT. It was demonstrated that any switching function can be so implemented that all of its single stuck-line faults are syndrome-testable. Transition counting [22] counts the number of times the output bit stream changes from 1 to 0 and vice versa. In transition counting, the signature length is less than or equal to log2(m) bits, m being the length of a response stream. The error masking probability takes high values when the signature value is close to m/2, and low values when it is close to 0 or m. In Walsh spectral analysis [25], [26], switching functions are represented by their spectral coefficients, which are compared to known correct coefficient values. In a sense, in this method, the truth table of the given switching function is basically verified. The process of collecting and comparing a subset of the complete set of Walsh functions is described as a mechanism for data compaction. The use of spectral coefficients promises a higher percentage of error coverage, whereas the resulting higher area overhead for generating them is deemed a disadvantage.

In parity checking [28], the response bit stream coming from a circuit under test is reduced from a multitude of output data to a signature of length only 1 bit. The single-bit signature has a value that equals the parity of the test response sequence. Parity checking detects all errors involving an odd number of bits, while faults that give rise to an even number of error bits are not detected. This method is relatively ineffective, since a large number of possible response bit streams from a faulty circuit will result in the same parity as that of the correct bit stream. All single stuck-line faults in fanout-free circuits are detected by the parity check technique. These obvious shortcomings of parity checking are eliminated in methods proposed based on single-output parity bit signature [28] and multiple-output parity bit signature [29], [31].

However, signature analysis is probably the most popular time compaction technique currently available [23]. It uses linear feedback shift registers (LFSRs) consisting of flip-flops and XOR gates; LFSRs are not only used for generating pseudorandom input test patterns but are thus used for response compaction as well. The signature analysis technique is based on the concept of cyclic redundancy checking (CRC). The nature of the generated sequence patterns is determined by the LFSR's characteristic polynomial G(x) as defined by its interconnection structure. A test-input sequence R(x) is fed into the signature analyzer, where it is divided by the characteristic polynomial G(x) of the signature analyzer's LFSR. The remainder S(x), obtained by dividing R(x) by G(x) over a Galois field such that R(x) = Q(x)G(x) + S(x), Q(x) being the corresponding quotient, represents the state of the LFSR. In other words, S(x) represents the observed signature. Signature analysis involves comparing the observed signature S(x) to a known fault-free signature S0(x). An error is detected if these two signatures differ. Suppose that R0(x) is the correct response and R(x) = R0(x) + E(x) is the faulty one, where E(x) is an error polynomial; it can be shown that aliasing occurs whenever E(x) is a multiple of G(x). The error masking probability is estimated as 2^(−k), where k is the number of flip-flop stages in the LFSR. When k is larger than 16, the aliasing probability is negligible. Many commercial applications have reported good success with LFSR-implemented signature analysis. Different methods for computing and reducing the aliasing probability in signature analysis have been proposed, viz. the signature analysis model proposed by Williams et al. [27], which uses Markov chains and derives an upper bound on the aliasing probability in terms of the test length and the probability of an error occurring at the output of the CUT. Another approach to the computation of the aliasing probability was presented in [30]. An error pattern in signature analysis causes aliasing, if and only if, it is a codeword in the cyclic code generated by the LFSR's characteristic polynomial. Unlike other methods, the fault coverage in signature analysis may be improved without changing the test set. This can be done by varying the length of the LFSR, or by using a different characteristic polynomial G(x). As demonstrated in [32], for short test lengths, signature analysis detects all single-bit errors. However, there is no known theory that characterizes fault detection in signature analysis.
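For completeness, compact sketches of three of the time compactors surveyed above follow (Python; names hypothetical). The LFSR-based signature analyzer performs the polynomial division over GF(2) described in the text; the 16-bit characteristic polynomial chosen is only an example.

```python
# Sketches of three time compactors from this section (names hypothetical).

def ones_count(stream: str) -> int:
    """Ones counting [21]: the number of 1's is the signature."""
    return stream.count("1")

def transition_count(stream: str) -> int:
    """Transition counting [22]: number of 0->1 and 1->0 changes."""
    return sum(a != b for a, b in zip(stream, stream[1:]))

def lfsr_signature(stream: str, poly: int = 0x1021, k: int = 16) -> int:
    """Signature analysis [23]: divide the response R(x) by the LFSR's
    characteristic polynomial G(x) over GF(2); the remainder S(x),
    i.e., the final LFSR state, is the signature (a CRC computation)."""
    state = 0
    for bit in stream:
        msb = (state >> (k - 1)) & 1            # bit shifted out of the LFSR
        state = ((state << 1) | int(bit)) & ((1 << k) - 1)
        if msb:                                  # feedback taps of G(x)
            state ^= poly
    return state

r = "1101001010110010111010011"
print(ones_count(r), transition_count(r), hex(lfsr_signature(r)))
```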

Testing using two different compaction schemes in parallel has also been extensively investigated. The combination of signature analysis and transition counting was analyzed as well [45], which shows that using both techniques simultaneously leads to a very small overlap in their error masking. As a result of using two different compaction schemes in parallel, the fault coverage is definitely improved, while the fault signature size and hardware overhead are greatly increased as well.

IV. EXPERIMENTAL RESULTS

To demonstrate the usability of the proposed space compression schemes, independent simulations were performed in this paper on various ISCAS 89 full-scan sequential benchmark circuits: first with ATALANTA (a fault simulation program developed at the Virginia Polytechnic Institute and State University), to generate the fault-free output sequences needed to construct our space compactor circuits and to test the benchmark circuits using deterministic compact test sets; next with a random test session using the FSIM fault simulation program, which generates pseudorandom test sets [55], [56]; and finally with the MinTest program [17], [19], [20], which generates minimal complete test sets detecting all detectable single stuck-line faults in the combinational parts of the benchmark circuits. For each circuit, we determined the number of test vectors used to construct the compaction trees, the CPU time taken to construct the compactors, the number of applied test vectors, the simulation CPU time, and the percentage fault coverage by running the ATALANTA, FSIM, and MinTest programs on a Sun SPARC 5 workstation. The extensive simulation results are presented in the form of the graphs and tables that follow. The hardware overhead for the designed space compactors is not provided in the paper, though for all the circuits the overhead was well within acceptable limits.

Figs. 2 and 3 show, respectively, fault coverage and CPU time for the benchmark circuits in graphical form using ATALANTA, MinTest, and FSIM when stochastic independence of single and double line errors was assumed, whereas Figs. 4–7 show fault coverage and CPU time for these circuits using the same simulation programs under conditions of stochastic dependence of single and double line errors for different probability values.


Fig. 2. Simulation results of the ISCAS 89 full-scan sequential benchmark circuits using ATALANTA, MinTest, and FSIM under stochastic independence of single and double line errors.

Fig. 3. Simulation results of the ISCAS 89 full-scan sequential benchmark circuits using ATALANTA, MinTest, and FSIM under stochastic independence of single and double line errors.

Fig. 4. Simulation results of the ISCAS 89 full-scan sequential benchmark circuits using ATALANTA, MinTest, and FSIM under stochastic dependence of single and double line errors.

Fig. 5. Simulation results of the ISCAS 89 full-scan sequential benchmark circuits using ATALANTA, MinTest, and FSIM under stochastic dependence of single and double line errors.

Fig. 6. Simulation results of the ISCAS 89 full-scan sequential benchmark circuits using ATALANTA, MinTest, and FSIM under stochastic dependence of single and double line errors.

Fig. 7. Simulation results of the ISCAS 89 full-scan sequential benchmark circuits using ATALANTA, MinTest, and FSIM under stochastic dependence of single and double line errors.


TABLE I FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING ATALANTA (COMPACTED INPUT TEST SETS)

TABLE II FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING MINTEST (PREFIXED INPUT TEST SETS)

TABLE III FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING FSIM (PSEUDORANDOM TESTING)

On the other hand, Tables I–III show fault coverage for the benchmark circuits, with and without space compactors, using the deterministic compacted and prefixed input test sets generated, respectively, by ATALANTA and MinTest, and the pseudorandom test sets generated by FSIM, under the condition of stochastic independence of single and double line errors.


TABLE IV FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING ATALANTA (COMPACTED INPUT TEST SETS)

TABLE V FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING MINTEST (PREFIXED INPUT TEST SETS)

TABLE VI FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING FSIM (PSEUDORANDOM TESTING)

Tables IV–XV show the same for the benchmark circuits, also using ATALANTA, MinTest, and FSIM, but now under conditions of stochastic dependence of single and double line errors with different probability values. Tables IV–VI show results for the probability values of single and double line errors, (p1, p2), being equal to (0.66, 0.33), while Tables VII–IX show results for (p1, p2) equal to (0.33, 0.66).


TABLE VII FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING ATALANTA (COMPACTED INPUT TEST SETS)

TABLE VIII FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING MINTEST (PREFIXED INPUT TEST SETS)

TABLE IX FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING FSIM (PSEUDORANDOM TESTING)

Similarly, Tables X–XII show results for (p1, p2) equal to (0.90, 0.10), and, finally, Tables XIII–XV show the same for (p1, p2) equal to (0.10, 0.90), on application of the compacted and prefixed input test sets and under pseudorandom testing for the combinational parts of the circuits. The asterisks (*) associated with the results in the last columns of all the tables indicate that faults were injected both in the CUT and its compressors.


TABLE X FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING ATALANTA (COMPACTED INPUT TEST SETS)

TABLE XI FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING MINTEST (PREFIXED INPUT TEST SETS)

TABLE XII FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING FSIM (PSEUDORANDOM TESTING)

V. CONCLUDING REMARKS

The design of efficient BIST support hardware has been of immense significance in the domain of digital integrated circuits. This paper revisits compression techniques for the test data outputs of full-scan digital sequential circuits that facilitate the design of such space-efficient support hardware. The proposed techniques use AND(NAND), OR(NOR), and XOR(XNOR) gates as appropriate to construct output compaction trees that compress the functional outputs of the CUT to a single line.


TABLE XIII FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING ATALANTA (COMPACTED INPUT TEST SETS)

TABLE XIV FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING MINTEST (PREFIXED INPUT TEST SETS)

TABLE XV FAULT COVERAGE FOR ISCAS 89 FULL-SCAN SEQUENTIAL BENCHMARK CIRCUITS USING FSIM (PSEUDORANDOM TESTING)

The compaction trees are generated based on sequence characterization, using the concepts of Hamming distance, sequence weights, and derived sequences. The logic functions selected to build the compression trees are determined solely by the characteristics of the sequences that are inputs to the gates. The optimal mergeability criteria were obtained under the assumption of both stochastic independence and stochastic dependence of single and double line errors. In the case of stochastic dependence, output bit stream selection was based on calculating the detectable error probability estimates using an empirical formula developed by Li and Robinson.


The effectiveness of the proposed approaches is, thus, critically dependent on the probabilities of error occurrence in different lines of the CUT, and this dependence may be affected by the circuit structure, partitioning, etc. Any given circuit structure has well-defined features, which might change based on evolving design practices with technology trends. Besides, in actual situations, the probability values for error occurrence in particular circuits have to be experimentally determined; that is, these are a posteriori probabilities rather than a priori probabilities. If the circuit structure changes, these probability values change, and obviously the corresponding compression networks that have to be designed based on the optimal mergeability criteria change as well. Since the empirical formula used for computing the detectable error probability estimates in our gate selection process uses exact values of these a posteriori probabilities of error occurrence, whenever only intervals on the probability values are given rather than exact values, they cannot be used as such in the gate selection process unless the formula is modified, except to provide the two extremes of selection consistent with the probability intervals. Since the major thrust of this paper is in devising efficient methods of synthesizing compaction networks that provide improved fault coverage for fixed complexity, thereby realizing a better tradeoff in terms of coverage and complexity (storage) than conventional techniques, the complexity issues were not addressed in depth in the current study. Also, in this paper, as is evident, we did not emphasize designing zero-aliasing compression, but rather emphasized reinforcing the connection between the input test sets, their length, and their reduction through the recommended algorithms in the construction of the compaction trees. Loss of information may not be completely avoided when the size of all output responses is reduced; therefore, depending on the amount of information loss, the corresponding space compactor designs will be affected. In our design experiments, we used the test sets provided by ATALANTA, FSIM, and MinTest to simulate the various ISCAS 89 full-scan sequential benchmark circuits (results on COMPACTEST [57] are not provided here). Some of the advantages of the proposed methods of space compression are their simplicity and the resulting low area overhead with improved fault coverage, which might make them suitable in a VLSI design environment as BIST support hardware, even though they do not guarantee 100% fault coverage.

REFERENCES

[1] R. Rajsuman, System-on-a-Chip: Design and Test. Boston, MA: Artech House, 2000.
[2] S. Mourad and Y. Zorian, Principles of Testing Electronic Systems. New York: Wiley, 2000.
[3] K. Chakrabarty, Ed., SOC (System-on-a-Chip) Testing for Plug and Play Test Automation. Boston, MA: Kluwer, 2002.
[4] T. W. Williams and K. P. Parker, “Testing logic networks and design for testability,” Comput., vol. 21, no. 10, pp. 9–21, Oct. 1979.
[5] S. M. Thatte and J. A. Abraham, “Test generation for microprocessors,” IEEE Trans. Comput., vol. C-29, no. 6, pp. 429–441, Jun. 1980.
[6] J. R. Kuban and W. C. Bruce, “Self-testing the Motorola MC6804P2,” IEEE Des. Test Comput., vol. 1, no. 2, pp. 33–41, May 1984.
[7] E. J. McCluskey, “Built-in self-test techniques,” IEEE Des. Test Comput., vol. 2, no. 2, pp. 21–28, Apr. 1985.
[8] R. G. Daniels and W. B. Bruce, “Built-in self-test trends in Motorola microprocessors,” IEEE Des. Test Comput., vol. 2, no. 2, pp. 64–71, Apr. 1985.
[9] S. R. Das, “Built-in self-testing of VLSI circuits,” IEEE Potentials, vol. 10, no. 3, pp. 23–26, Oct. 1991.
[10] V. D. Agrawal, C. R. Kime, and K. K. Saluja, “A tutorial on built-in self-test—Part II: Applications,” IEEE Des. Test Comput., vol. 10, no. 2, pp. 69–77, Jun. 1993.
[11] Y. Zorian, “A distributed BIST control scheme for complex VLSI devices,” in Proc. VLSI Test Symp., 1993, pp. 6–11.
[12] H. J. Wunderlich and G. Kiefer, “Scan-based BIST with complete fault coverage and low hardware overhead,” in Proc. Euro. Test Workshop, 1996, pp. 60–64.
[13] Y. Zorian, “Test requirements for embedded core-based systems and IEEE P-1500,” in Proc. Int. Test Conf., 1997, pp. 191–199.
[14] J. Brauch and J. Fleischman, “Design of cache test hardware on the HP PA8500,” IEEE Des. Test Comput., vol. 15, no. 3, pp. 58–63, Jun. 1998.
[15] R. Fetherston, “Testability features of the AMD-K6 microprocessor,” IEEE Des. Test Comput., vol. 15, no. 3, pp. 64–69, Jun. 1998.
[16] A. Carbine, “Pentium Pro processor design for test and debug,” IEEE Des. Test Comput., vol. 15, no. 3, pp. 77–82, Jun. 1998.
[17] I. Hamzaoglu and J. H. Patel, “Test set compaction algorithms for combinational circuits,” IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 19, no. 8, pp. 957–963, Aug. 2000.
[18] K. Chakrabarty and S. R. Das, “Test-set embedding based on width compression for mixed-mode BIST,” IEEE Trans. Instrum. Meas., vol. 49, no. 3, pp. 671–678, Jun. 2000.
[19] I. Hamzaoglu and J. H. Patel, “New techniques for deterministic test pattern generation,” J. Electronic Testing: Theory Applications, vol. 15, no. 4, pp. 63–73, Aug. 1999.
[20] ——, “Compact two-pattern test set generation for combinational and full scan circuits,” in Proc. Int. Test Conf., 1998, pp. 944–953.
[21] J. P. Hayes, “Check sum methods for test data compression,” J. Des. Automat. Fault-Tolerant Comput., vol. 1, no. 1, pp. 3–7, Jan. 1976.
[22] ——, “Transition count testing of combinational logic circuits,” IEEE Trans. Comput., vol. C-25, no. 6, pp. 613–620, Jun. 1976.
[23] R. A. Frohwerk, “Signature analysis—A new digital field service method,” Hewlett-Packard J., vol. 28, pp. 2–8, May 1977.
[24] J. Savir, “Syndrome-testable design of combinational circuits,” IEEE Trans. Comput., vol. C-29, no. 6, pp. 442–451, Jun. 1980.
[25] A. K. Susskind, “Testing by verifying Walsh coefficients,” IEEE Trans. Comput., vol. C-32, no. 2, pp. 198–201, Feb. 1983.
[26] T.-C. Hsiao and S. C. Seth, “An analysis of the use of Rademacher-Walsh spectrum in compact testing,” IEEE Trans. Comput., vol. C-33, no. 10, pp. 934–937, Oct. 1984.
[27] T. W. Williams, W. Daehn, M. Gruetzner, and C. W. Starke, “Aliasing errors in signature analysis registers,” IEEE Des. Test Comput., vol. 4, no. 2, pp. 39–45, Apr. 1987.
[28] S. B. Akers, “A parity bit signature for exhaustive testing,” IEEE Trans. Comput.-Aided Des., vol. 7, no. 3, pp. 333–338, Mar. 1988.
[29] W.-B. Jone and S. R. Das, “Multiple-output parity bit signature for exhaustive testing,” J. Electronic Testing: Theory Applications, vol. 1, no. 2, pp. 175–178, Jun. 1990.
[30] D. K. Pradhan and S. K. Gupta, “A new framework for designing and analyzing BIST techniques and zero aliasing compression,” IEEE Trans. Comput., vol. C-40, no. 6, pp. 743–763, Jun. 1991.
[31] S. R. Das, M. Sudarma, M. H. Assaf, E. M. Petriu, W.-B. Jone, K. Chakrabarty, and M. Sahinoglu, “Parity bit signature in response data compaction and built-in self-testing of VLSI circuits with nonexhaustive test sets,” IEEE Trans. Instrum. Meas., vol. 52, no. 5, pp. 1363–1380, Oct. 2003.
[32] N. R. Saxena and J. P. Robinson, “A unified view of test response compression methods,” IEEE Trans. Comput., vol. C-36, no. 1, pp. 94–99, Jan. 1987.
[33] K. K. Saluja and M. Karpovsky, “Testing computer hardware through compression in space and time,” in Proc. Int. Test Conf., 1983, pp. 83–88.
[34] Y. Zorian and V. K. Agarwal, “A general scheme to optimize error masking in built-in self testing,” in Proc. Int. Symp. Fault-Tolerant Comput., 1986, pp. 410–415.
[35] Y. K. Li and J. P. Robinson, “Space compression method with output data modification,” IEEE Trans. Comput.-Aided Des., vol. CAD-6, no. 3, pp. 290–294, Mar. 1987.
[36] S. M. Reddy, K. K. Saluja, and M. G. Karpovsky, “Data compression technique for test responses,” IEEE Trans. Comput., vol. C-37, no. 9, pp. 1151–1157, Sep. 1988.
[37] M. Karpovsky and P. Nagvajara, “Optimal robust compression of test responses,” IEEE Trans. Comput., vol. C-39, no. 1, pp. 138–141, Jan. 1990.
[38] W.-B. Jone and S. R. Das, “Space compression method for built-in self-testing of VLSI circuits,” Int. J. Comput. Aided VLSI Des., vol. 3, no. 3, pp. 309–322, Sep. 1991.
[39] S. R. Das, H. T. Ho, W.-B. Jone, and A. R. Nayak, “An improved output compaction technique for built-in self-test in VLSI circuits,” in Proc. Int. Conf. VLSI Des., 1994, pp. 403–407.


[40] K. Chakrabarty and J. P. Hayes, “Efficient test response compression for multiple-output circuits,” in Proc. Int. Test Conf., 1994, pp. 501–510.
[41] B. Pouya and N. A. Touba, “Synthesis of zero-aliasing elementary-tree space compactors,” in Proc. VLSI Test Symp., 1998, pp. 70–77.
[42] S. R. Das, T. F. Barakat, E. M. Petriu, M. H. Assaf, and K. Chakrabarty, “Space compression revisited,” IEEE Trans. Instrum. Meas., vol. 49, no. 3, pp. 690–705, Jun. 2000.
[43] M. Seuring and K. Chakrabarty, “Space compaction of test responses for IP cores using orthogonal transmission functions,” in Proc. VLSI Test Symp., 2000, pp. 1–7.
[44] S. R. Das, M. H. Assaf, E. M. Petriu, W.-B. Jone, and K. Chakrabarty, “A novel approach to designing aliasing-free space compactors based on switching theory formulation,” in Proc. IEEE Instrum. Meas. Tech. Conf., vol. 1, 2001, pp. 198–201.
[45] S. R. Das, C. V. Ramamoorthy, M. H. Assaf, E. M. Petriu, and W.-B. Jone, “Fault tolerance in systems design in VLSI using data compression under constraints of failure probabilities,” IEEE Trans. Instrum. Meas., vol. 50, no. 6, pp. 1725–1747, Dec. 2001.
[46] S. R. Das, J. Y. Liang, E. M. Petriu, W.-B. Jone, and K. Chakrabarty, “Data compression in space under generalized mergeability based on concepts of cover table and frequency ordering,” IEEE Trans. Instrum. Meas., vol. 51, no. 1, pp. 150–172, Feb. 2002.
[47] S. R. Das, M. H. Assaf, E. M. Petriu, and W.-B. Jone, “Fault simulation and response compaction in full-scan circuits using HOPE,” in Proc. IEEE Instrum. Meas. Tech. Conf., vol. 1, 2002, pp. 607–612.
[48] S. Mitra and K. S. Kim, “X-compact: An efficient response compaction technique for test cost reduction,” in Proc. Int. Test Conf., 2002, pp. 311–320.
[49] M. B. Tahoori, S. Mitra, S. Toutounchi, and E. J. McCluskey, “Fault grading FPGA test configuration,” in Proc. Int. Test Conf., 2002, pp. 608–617.
[50] J. Rajski, J. Tyszer, C. Wang, and S. M. Reddy, “Convolutional compaction of test responses,” in Proc. Int. Test Conf., 2003, pp. 745–754.
[51] S. R. Das, M. H. Assaf, E. M. Petriu, and M. Sahinoglu, “Aliasing-free compaction in testing cores-based system-on-chip (SOC) using compatibility of response data outputs,” Trans. SDPS, vol. 8, no. 1, pp. 1–17, Mar. 2004.
[52] K. Chakrabarty, “Test response compaction for built-in self testing,” Ph.D. dissertation, Dept. Comput. Sci. Eng., Univ. Michigan, Ann Arbor, MI, 1995.
[53] M. H. Assaf, “Digital core output test data compression architecture based on switching theory concepts,” Ph.D. dissertation, School of Inform. Technol. Eng., Univ. Ottawa, Ottawa, ON, Canada, 2003.
[54] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, CA: Freeman, 1979.
[55] H. K. Lee and D. S. Ha, “On the Generation of Test Patterns for Combinational Circuits,” Dept. Elect. Eng., Virginia Polytech. Inst. State Univ., Blacksburg, VA, Tech. Rep. 12-93, 1993.
[56] ——, “An efficient forward fault simulation algorithm based on the parallel pattern single fault propagation,” in Proc. Int. Test Conf., 1991, pp. 946–955.
[57] I. Pomeranz, L. N. Reddy, and S. M. Reddy, “COMPACTEST: A method to generate compact test sets for combinational circuits,” in Proc. Int. Test Conf., 1991, pp. 194–203.
[58] A. Gupta, Groundwork of Mathematical Probability and Statistics, 3rd ed. Calcutta, West Bengal, India: Academic, 1988.

Sunil R. Das (M’70–SM’90–F’94–LF’04) received the B.Sc. degree (Honors) in physics, the M.Sc. (Tech) degree, and the Ph.D. degree in radiophysics and electronics from the University of Calcutta, Calcutta, West Bengal, India. He is an Emeritus Professor of Electrical and Computer Engineering at the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON, Canada and a Professor of Computer and Information Science, Troy State University, Montgomery, AL. He previously held academic and research positions with the Department of Electrical Engineering and Computer Sciences, Computer Science Division, University of California, Berkeley, the Center for Reliable Computing (CRC), Computer Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA (on sabbatical leave), the Institute of Computer Engineering, National Chiao Tung University, Hsinchu, Taiwan, R.O.C., and the Center of Advanced Study (CAS), Institute of Radiophysics and Electronics, University of Calcutta. He has published around 300 papers in the areas of switching and automata theory, digital logic design, threshold logic, fault-tolerant computing, built-in self-test with emphasis on embedded cores-based system-on-chip (SOC), microprogramming and microarchitecture, microcode optimization, applied theory of graphs, and combinatorics.


Dr. Das served on the Technical Program Committees and Organizing Committees of many IEEE and non-IEEE International Conferences, Symposia, and Workshops, and also acted as Session Organizer, Session Chair, and Panelist. He was elected one of the delegates of the prestigious Good People, Good Deeds of the Republic of China in 1981 in recognition of his outstanding contributions to research and education. He is listed in the Marquis Who’s Who Biographical Directory of the Computer Graphics Industry, Chicago, IL (First Edition, 1984). He served as the Managing Editor of the IEEE Very Large Scale Integration (VLSI) Technical Bulletin, a publication of the IEEE Computer Society Technical Committee (TC) on VLSI, from its very inception, and also was an Executive Committee Member of the IEEE Computer Society Technical Committee (TC) on VLSI. He has also served as an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS (subsequently Part A, Part B, and Part C) from 1991 until very recently. He is currently an Associate Editor of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, an Associate Editor of the International Journal of Computers and Applications (Calgary, AB, Canada: Acta Press), a Regional Editor for the Information Technology Journal, an official publication of the Asian Network for Scientific Information, and a former Member of the Editorial Board and a Regional Editor for Canada of VLSI Design: An International Journal of Custom-Chip Design, Simulation and Testing (NY: Gordon and Breach). He is a former Administrative Committee (ADCOM) Member of the IEEE Systems, Man, and Cybernetics Society, a former Associate Editor of the IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION SYSTEMS (for two consecutive terms), a former Associate Editor of the SIGDA Newsletter, the publication of the ACM Special Interest Group on Design Automation, a former Associate Editor of the International Journal of Computer Aided VLSI Design (Norwood, NJ: Ablex), and a former Associate Editor of the International Journal of Parallel and Distributed Systems and Networks (Acta Press). He also served as the Co-Chair of the IEEE Computer Society Students Activities Committee from Region 7 (Canada). He was the Associate Guest Editor of the IEEE JOURNAL OF SOLID-STATE CIRCUITS Special Issues on Microelectronic Systems (Third and Fourth Special Issues), and Guest Editor of the International Journal of Computer Aided VLSI Design (September 1991) as well as VLSI Design: An International Journal of Custom-Chip Design, Simulation and Testing (March 1993, September 1996, and December 2001), Special Issues on VLSI Testing. He also Guest Edited jointly with Rochit Rajsuman a Special Section of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT in the area of VLSI Testing (Innovations in VLSI Test Equipments) published in October 2003. He is currently Guest Editing jointly with Rochit Rajsuman another Special Section of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT in the area of VLSI Testing (Future of Semiconductor Test) for October 2005. He edited jointly with P. K. Srimani a book entitled Distributed Mutual Exclusion Algorithms (Los Alamitos, CA: IEEE Computer Society Press, 1992) in its Technology Series. He is also the author, jointly with C. L. Sheng, of a text on digital logic design to be published by Ablex Publishing Corporation.
He is a Member of the IEEE Computer Society, IEEE Systems, Man, and Cybernetics Society, IEEE Circuits and Systems Society, and IEEE Instrumentation and Measurement Society, and a Member of the Association for Computing Machinery (ACM). He was elected a Fellow of the IEEE in 1994 for contributions to switching theory and computer design. He is the 1996 recipient of the IEEE Computer Society’s highly esteemed Technical Achievement Award for his pioneering contributions in the fields of switching theory and modern digital design, digital circuits testing, microarchitecture and microprogram optimization, and combinatorics and graph theory. He is also the 1997 recipient of the IEEE Computer Society’s Meritorious Service Award for excellent service contributions to the IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION SYSTEMS and the Society, and was elected a Fellow of the Society for Design and Process Science in 1998 for his accomplishments in integration of disciplines, theories and methodologies, development of scientific principles and methods for design and process science as applied to traditional disciplines of engineering, industrial leadership and innovation, and educational leadership and creativity. In recognition of being one of the distinguished core of dedicated volunteers and staff whose leadership and services made the IEEE Computer Society the world’s preeminent association of computing professionals, he was made a Golden Core Member of the Computer Society in 1998. In addition, he is the recipient of the IEEE Circuits and Systems Society’s Certificates of Appreciation for services rendered as Associate Editor for the IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION SYSTEMS during 1995–1996 and during 1997–1998, and of the IEEE Computer Society’s Certificates of Appreciation for services rendered to the Society as a Member of the Society’s Fellow Evaluation Committee, once in 1998 and then in 1999. Dr. Das served as a Member of the IEEE Computer Society’s Fellow Evaluation Committee for 2001 as well. He was elected a Fellow of the Canadian Academy of Engineering in 2002 for pioneering contributions to computer engineering research, specifically in the fields of switching theory and computer design, fault-tolerant computing, microarchitecture and microprogram optimization, and to some problem areas in applied theory of graphs and combinatorics.


He is the recipient of the prestigious Rudolph Christian Karl Diesel Best Paper Award of the Society for Design and Process Science in recognition of the excellence of a paper presented at the Fifth Biennial World Conference on Integrated Design and Process Technology, held in Dallas, TX, June 4–8, 2000. He is also a corecipient of the IEEE’s esteemed Donald G. Fink Prize Paper Award for 2003 for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT.

Chittoor V. Ramamoorthy (M’57–SM’76–F’78–LF’93) received degrees in physics and technology from the University of Madras, Madras, India, two graduate degrees in mechanical engineering from the University of California, Berkeley, and the A.M. and Ph.D. degrees in electrical engineering and computer science (applied mathematics) from Harvard University, Cambridge, MA, in 1964. His education at Harvard was supported by Honeywell Inc., with whom he was last associated as a Senior Staff Scientist. He later joined the University of Texas, Austin, as a Professor in the Department of Electrical Engineering and Computer Science. After serving as Chairman of the Department, he joined the University of California, Berkeley, in 1972 as Professor of Electrical Engineering and Computer Sciences, Computer Science Division, a position that he still holds as Professor Emeritus. He has supervised more than 70 doctoral students in his career. He has held the Control Data Distinguished Professorship at the University of Minnesota, Minneapolis, and the Grace Hopper Chair at the U.S. Naval Postgraduate School, Monterey, CA. He was also a Visiting Professor at Northwestern University, Evanston, IL, and a Visiting Research Professor at the University of Illinois, Urbana-Champaign. He is a Senior Research Fellow at the ICC Institute of the University of Texas. He has published more than 200 papers and has also coedited three books. He has worked on, and holds patents in, computer architecture, software engineering, computer testing and diagnosis, and databases. He is currently involved in research on models and methods to assess the evolutionary trends in information technology. He is also the founding Co-Editor-in-Chief of the International Journal of Systems Integration and the Journal for Design and Process Science. Dr. Ramamoorthy received the Group Award and the Taylor Booth Award for education from the IEEE Computer Society, the Richard Merwin Award for outstanding professional contributions, and the Golden Core Recognition, and is the recipient of the IEEE Centennial Medal and the IEEE Millennium Medal. He also received the Computer Society’s Kanai–Hitachi Award for the year 2000 for pioneering and fundamental contributions in parallel and distributed computing. He was the cowinner (with Prof. S. Das of Troy State University) of the IEEE’s Best Paper Award, the Donald G. Fink Prize Paper Award, for 2003 for their paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT. He is a Fellow of the Society of Design and Process Science, from which he received the R. T. Yeh Distinguished Achievement Award in 1997. He also received the Best Paper Award from the IEEE Computer Society in 1987. Three international conferences were organized in his honor, and one UC Berkeley Graduate Student Research Award and two international conference/society awards have been established in his name. He served as the Editor-in-Chief of the IEEE TRANSACTIONS ON SOFTWARE ENGINEERING. He is the founding Editor-in-Chief of the IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, which recently published a Special Issue in his honor. He served in various capacities in the IEEE Computer Society, including as its First Vice President and as a Governing Board Member. He served on several advisory boards of the Federal Government and of academia, including those of the United States Army, Navy, Air Force, DOE, Los Alamos Lab, the University of Texas, and the State University System of Florida.
He is one of the founding Directors of the International Institute of Systems Integration in Campinas, Brazil, supported by the Federal Government of Brazil, and for several years, was a Member of the International Board of Advisors of the Institute of Systems Science of the National University of Singapore, Singapore.

Mansour H. Assaf (M’02) received the Honors degree in applied physics from the Lebanese University, Beirut, Lebanon, in 1989, and the B.A.Sc., M.A.Sc., and Ph.D. degrees in electrical engineering from the University of Ottawa, Ottawa, ON, Canada, in 1994, 1996, and 2003, respectively. From 1994 to 1996, he was associated with the Fault-Tolerant Computing Group of the University of Ottawa, where he studied and worked as a Researcher. After working at Applications Technology, a subsidiary of Lernout and Hauspie Speech, McLean, VA, in the area of software localization and natural language processing, he joined the Sensing and Modeling Research Laboratory, University of Ottawa, where he worked on projects in the fields of human–computer interaction, 3-D modeling, and virtual environments. His research interests are in the areas of human–computer interaction, perceptual user interfaces, and fault diagnosis in digital systems. Dr. Assaf is a corecipient of the IEEE’s esteemed Donald G. Fink Prize Paper Award for 2003 for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT.

Emil M. Petriu (M’86–SM’88–F’01) is a Professor and University Research Chair in the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON, Canada. His research interests include robot sensing and perception, intelligent sensors, interactive virtual environments, soft computing, and digital integrated circuit testing. During his career, he has published more than 200 technical papers, authored two books, edited two other books, and received two patents. Dr. Petriu is a Fellow of the Canadian Academy of Engineering and a Fellow of the Engineering Institute of Canada. He is a corecipient of the IEEE’s Donald G. Fink Prize Paper Award for 2003 and the recipient of the 2003 IEEE Instrumentation and Measurement Society Award. He is currently serving as Chair of TC-15 Virtual Systems and as Co-Chair of TC-28 Instrumentation and Measurement for Robotics and Automation and of TC-30 Security and Contraband Detection of the IEEE Instrumentation and Measurement Society. He is an Associate Editor of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT and a Member of the Editorial Board of the IEEE Instrumentation and Measurement Magazine.


Wen-Ben Jone (S’85–M’88–SM’01) was born in Taipei, Taiwan, R.O.C. He received the B.S. degree in computer science in 1979 and the M.S. degree in computer engineering in 1981, both from National Chiao-Tung University, Hsinchu, Taiwan, and the Ph.D. degree in computer engineering and science from Case Western Reserve University, Cleveland, OH, in 1987. In 1987, he joined the Department of Computer Science, New Mexico Institute of Mining and Technology, Socorro, where he was promoted to Associate Professor in 1992. From 1993 to 2000, he was with the Department of Computer Engineering and Information Science, National Chung-Cheng University, Chiayi, Taiwan. He was a Visiting Research Fellow with the Department of Computer Science and Engineering, The Chinese University of Hong Kong, in Summer 1997. Since 2001, he has been with the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH. He was a Visiting Scholar with the Institute of Information Science, Academia Sinica, Taipei, in Summer 2002. His research interests include VLSI design for testability, built-in self-testing, memory testing, high-performance circuit testing, MEMS testing and repairing, and low-power circuit design. He has published more than 100 papers and holds one U.S. patent. Dr. Jone has served as a Reviewer in these research areas for various technical journals and conferences. He is also listed in the Marquis Who’s Who in the World (New Providence, NJ: Marquis Who’s Who, 15th Ed., 1998, 2001). He also served on the program committee of the VLSI Design/CAD Symposium (1993–1997, in Taiwan), was the General Chair of the 1998 VLSI Design/CAD Symposium, and served on the program committees of the 1995, 1996, and 2000 Asian Test Conference, the 1995–1998 Asia and South Pacific Design Automation Conference, the 1998 International Conference on Chip Technology, the 2000 International Symposium on Defect and Fault Tolerance in VLSI Systems, and the 2002 and 2003 Great Lakes Symposium on VLSI. He received the Best Thesis Award from The Chinese Institute of Electrical Engineering (Republic of China) in 1981. He is a corecipient of the 2003 IEEE Donald G. Fink Prize Paper Award. He is a Member of the IEEE Computer Society Test Technology Technical Committee.


Mehmet Sahinoglu (S’78–M’81–SM’93) received the B.S. degree from the Middle East Technical University (METU), Ankara, Turkey, and the M.S. degree from the University of Manchester Institute of Science and Technology (UMIST), Manchester, U.K., both in electrical and computer engineering, and the Ph.D. degree in electrical engineering and statistics from Texas A&M University, College Station. He has been the Eminent Scholar for the Endowed Chair of the Alabama Commission on Higher Education (ACHE) and Chairman of the Department of Computer and Information Science (CIS) at Troy State University, Montgomery, AL, since 1999. Following his 20-year tenure at METU as an Assistant/Associate/Full Professor, he served as the First Dean and Founding Department Chair of the College of Arts and Sciences at DEU, Izmir, Turkey, from 1992 to 1997. He was a Chief Reliability Consultant to the Turkish Electricity Authority (TEK) from 1982 to 1997. He became an Emeritus Professor at METU and DEU in 2000. He lectured at Purdue University, West Lafayette, IN (1989–1990, 1997–1998), and Case Western Reserve University, Cleveland, OH (1998–1999), in the capacity of a Fulbright Scholar and a NATO Scholar, respectively. Dr. Sahinoglu is a Fellow of the Society of Design and Process Science (SDPS), a Member of the ACM, AFCEA, and ASA, and an Elected Member of the ISI. He is credited with the “Compound Poisson Software Reliability Model,” which accounts for multiple (clumped) failures in predicting the total number of failures at the end of a mission time, and with the “MESAT: Compound Poisson Stopping Rule Algorithm” for cost-effective digital software testing. He is also jointly responsible, with Dr. D. Libby, for the original derivation of the Generalized Three-Parameter Beta (G3B) pdf in 1981, known since 1999 as the Sahinoglu and Libby (SL) pdf.
