Aliasing Probability Calculations for Arbitrary Compaction under Independently Selected Random Test Vectors∗ Christoforos N. Hadjicostis†

Abstract

This paper discusses a systematic methodology for calculating the exact aliasing probability associated with schemes that use an arbitrary finite-state machine to compact the response of a combinational circuit to a sequence of independently selected, random test input vectors. The proposed approach identifies the strong influence of fault activation probabilities on the probability of aliasing and uses an asymmetric error model to simultaneously track the states of two (fictitious) compactors, one driven by the response of the fault-free combinational circuit and one driven by the response of the faulty combinational circuit. By deriving the overall Markov chain that describes the combined behavior of these two compactors, we are able to calculate the exact aliasing probability for any test sequence length. In particular, for long enough sequences the probability of aliasing is shown to depend only on the stationary distribution of the Markov chain. The insights provided by our analysis are used to evaluate the testing performance of simple examples of nonlinear compactors and to demonstrate regimes where they exhibit lower aliasing probability than linear compactors with the same number of states. Finally, by establishing connections with previous work that evaluated aliasing probability in linear compactors, our analysis clarifies the role played by the entropy of the stationary distribution of the compactor states.

∗ This material is based upon work supported by the National Science Foundation under NSF Career Award 0092696 and NSF ITR Award 0218939, and by the Air Force Office of Scientific Research under AFOSR DoD URI Award F4962001-1-0365URI. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of NSF or AFOSR.
† Address for correspondence: Coordinated Science Lab. and Dept. of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 357 CSL, 1308 West Main Street, Urbana, IL 61801-2307, USA. E-mail: [email protected].


1 Introduction

Compaction techniques and, in particular, signature analysis have attracted the attention of numerous researchers within the testing community [1, 2]. The basic setup that we consider in this paper is shown in Figure 1. The combinational circuit under test (CUT) is driven by a known sequence of test input vectors i[0], i[1], i[2], ..., i[n − 1] that have been chosen at random, independently at each time step (more generally, the sequence of test input vectors could be generated in a deterministic or pseudorandom fashion). Instead of comparing the possibly corrupted output sequence of [0], of [1], of [2], ..., of [n − 1] produced by the possibly faulty CUT against the error-free sequence o[0], o[1], o[2], ..., o[n − 1] that is expected out of a fault-free CUT, compaction techniques use the output sequence of [0], of [1], of [2], ..., of [n − 1] to drive a compactor, i.e., a finite-state machine (FSM) that is initialized in some known state q[0]. Since we know the error-free sequence o[0], o[1], o[2], ..., o[n − 1] expected out of a fault-free CUT, we can easily pre-calculate the expected error-free final state q[n] of the compactor and determine whether the circuit is faulty or not based on the actual final state qf [n] of the compactor. More specifically, the presence of one or more faults in the CUT will result in a corrupted output sequence of [0], of [1], of [2], ..., of [n − 1] which will drive the compactor to a state qf [n] that will (hopefully) be different from q[n] and will lead to the detection of the fault(s) in the CUT. Aliasing occurs when q[n] = qf [n], i.e., when both the error-free and the corrupted sequences drive the compactor to the same final state; in such a case, the compaction scheme fails to detect the fault(s) in the CUT. Clearly, the big advantage of compaction methodologies is that they do not have to perform comparisons after each application of a test input vector.
More importantly, the error-free sequence does not need to be stored, which saves hardware (memory) and time, reduces the complexity of the testing equipment, and makes compaction methodologies suitable for built-in self-test [3, 4]. These advantages, however, come at the cost of failing to detect faults that result in corrupted sequences of [0], of [1], ..., of [n−1] that drive the compactor to the error-free final state q[n]. If one assumes that the sequence of test input vectors is randomly or pseudorandomly generated, the natural question that arises is that of evaluating the probability of aliasing. The design of a compaction scheme becomes challenging when, in addition to minimizing the probability of aliasing, one also tries to minimize the length of the testing sequence, simplify the structure of the compactor and the overall testing hardware required, retain compatibility with the test generation procedure and maximize the fault coverage achieved [1, 2].

[Figure 1: Compaction of the response of a combinational circuit via an arbitrary finite-state machine. The CUT receives the test input vectors i[n−1], ..., i[1], i[0]; its output sequence of [n−1], ..., of [1], of [0] drives the compactor (an FSM with known q[0]), and the check qf [n] = q[n]? is performed on the final state.]

The probabilistic aspects of compaction methodologies have been thoroughly studied in settings where the compactor is a linear sequential circuit, such as a linear feedback shift register or, more generally, a linear finite-state machine (LFSM) [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. In this case, which is referred to as signature analysis, the aliasing probability has been characterized in terms of various quantities of interest, such as the number of states of the compactor and the required testing sequence length. Naturally, in order to achieve this characterization, an error model has to be used, i.e., a model that describes the (probabilistic) relationship between the error-free and the corrupted outputs. For a CUT with a single output (later on, we describe how to generalize these ideas to circuits with multiple outputs), common error models include the uniform error model, the independent (or Bernoulli) error model and the asymmetric error model (see, for example, the discussions in [16, 17]). Good overviews of these models and related references can be found in [1, 3, 2]. Under the independent error model, if we assume that the sequence of test input vectors is long enough and that the linear compactor is well-designed (e.g., a linear feedback shift register with a primitive polynomial), then the aliasing probability is essentially given by 1/N, where N is the number of states of the compactor. Similar analysis has also been applied to special cases of nonlinear compactors such as syndrome-based compactors, counter-based compactors, accumulator-based compactors or rotate carry adders [18, 19, 20, 21, 22, 23, 24, 25]. Testing efficiency and performance analysis for this type of compactor have also appeared in the literature (see, for example, [26, 27]). Note that the above analyses, as well as the analysis in this paper, are not applicable when the CUT is a sequential circuit because they assume that the corrupted CUT outputs (that are provided as inputs to the compactor) are statistically independent. An extension to the case of delay faults (which can be thought of as converting a combinational circuit into a sequential circuit) has appeared

in [28, 29]. An effort to capture state-transition faults in sequential circuits for a certain class of correlated faults has appeared in [30]. In this paper we develop a systematic approach for calculating the aliasing probability in compaction schemes that use an arbitrary FSM to compact the response of a combinational circuit to a sequence of test input vectors that are randomly selected independently at each time step. The major insight is provided by the use of an asymmetric error model, i.e., an error model in which the probability of error depends on the fault-free value of the output. Although the advantages of the asymmetric error model in modeling, analyzing and evaluating the performance of compaction schemes were realized early (see, for example, [31, 22, 16, 25]), the realization that its use not only amounts to a more explicit error model but also allows for the exact calculation of the aliasing probability in an arbitrary compactor has been absent. Our analysis in this paper clearly indicates that in the case of a single output CUT the aliasing probability depends on the syndrome of the Boolean function implemented by a fault-free CUT (i.e., how many outputs are “1” and how many are “0”) as well as the fault activation properties (i.e., how many “1’s” get corrupted to “0’s” and vice-versa). The insights obtained by our analysis can be used to evaluate the effect of different choices of compactors on the aliasing probability and to demonstrate examples and regimes where compaction using a nonlinear FSM can result in a smaller aliasing probability than the aliasing probability resulting from an LFSM with the same number of states. More specifically, aliasing depends on the behavior of a given fault and, in particular, its activation probabilities. 
We also use our analysis to clarify the role of the entropy of the stationary distribution of compactor states in linear/nonlinear compaction techniques, an idea that first appeared in [32] (see also the work in [33]). The paper is organized as follows. In Section 2 we introduce the necessary notation and mathematical background, and in Section 3 we motivate and discuss the asymmetric error model for combinational circuits. In Section 4 we calculate the exact aliasing probability when an arbitrary finite-state machine is used as a compacting device; we also describe extensions of these ideas to CUTs with multiple outputs and the associated computational complexity. In Section 5 we make connections between our analysis and existing approaches for calculating the aliasing probability in linear compactors; in the process, we demonstrate the limitations of linear compactors and clarify the role of the entropy of the stationary distribution as a test quality measure. In Section 6 we experimentally validate our model and confirm our insights about the potential advantages of nonlinear


compaction by demonstrating regimes in which nonlinear compaction can reduce aliasing probability beyond what has traditionally been achieved using linear compactors with the same number of states. We conclude in Section 7 with a summary of our results and future research directions.

2 Mathematical Preliminaries and Notation

The next state q[t + 1] of a finite-state machine (FSM) S with state set Q = {q1, q2, ..., qN} and input set X = {x1, x2, ..., xK} is specified by its state q[t] and input x[t] at time step t via the next-state function

q[t + 1] = δ(q[t], x[t])   (1)

(without loss of generality we assume that δ is defined for all pairs in Q × X). In order to make the connection with Markov chains more transparent, we will denote the state q[t] of FSM S by an N-dimensional binary indicator vector q[t] which has exactly one nonzero entry with value “1” to denote the state of the system (i.e., if the jth entry of vector q[t] equals “1,” then the system is in state qj at time step t). If input xk is applied at time step t, the state evolution of the system can be captured by an equation of the form q[t + 1] = Ak q[t], where Ak is the N × N state-transition matrix associated with input x[t] = xk. Specifically, each column of the matrix Ak has exactly one nonzero entry with value “1” (a nonzero entry at the ℓth-jth position of Ak denotes a transition from state qj to state qℓ under input xk). If the inputs applied to a given FSM are white, i.e., if the inputs are statistically independent from one time step to another and their probability distribution at any given time step t is fixed so that input x1 takes place with probability p1, input x2 takes place with probability p2, ..., and input xK takes place with probability pK, then the FSM behaves like a homogeneous Markov chain with N states and state-transition matrix

A = Σ_{k=1}^{K} pk Ak .   (2)

(To see this, notice that the probability with which the FSM transitions from state qj at time step t to state qℓ at time step t + 1 is given by A(ℓ, j), the ℓth-jth entry of A [34].) If v[0] denotes the initial probability distribution of states in the Markov chain [i.e., the initial state is qj with probability v[0](j)], then the probability distribution of states at time step n is easily calculated to be v[n] = A^n v[0], where A^n is the n-step transition matrix of the Markov chain [35]. A probability distribution v that satisfies

v = Av   (3)

is called a stationary distribution [35]. A Markov chain has a unique stationary distribution if it is irreducible, i.e., for any pair of states ℓ and j, there exists a finite M such that the ℓth-jth entry of A^M is nonzero. In such a case, it can be shown that matrix A has a single eigenvalue at λ = 1 and its corresponding eigenvector (scaled so that it is a probability vector) is the stationary distribution. In our context, the requirement that a Markov chain with the transition matrix A in Eq. (2) is irreducible is equivalent to the underlying FSM S being connected. Note that when a Markov chain is reducible, there exist multiple solutions to Eq. (3); the stationary distribution, however, is still well-defined if we know the initial state of FSM S [35]. Also, when an irreducible Markov chain is periodic with period D (i.e., its states can be partitioned into D classes C0, C1, ..., CD−1, CD = C0, so that for all d ∈ {0, 1, 2, ..., (D − 1)} and for all qj ∈ Cd, Σ_{l∈Cd+1} A(l, j) = 1), even though the stationary distribution is still unique, the probability distribution of states at time n depends on the initial probability distribution of the chain (given by v[0]) and the remainder of n with respect to D (i.e., on the value of n mod D). For the Markov chain to be periodic, the underlying FSM has to be periodic, i.e., its states can be partitioned into D classes C0, C1, ..., CD−1, CD = C0 so that δ(qj, xk) ∈ Cd+1 for all inputs xk ∈ X, all qj ∈ Cd and all d ∈ {0, 1, 2, ..., (D − 1)}. For simplicity, in this paper we mostly assume that our compactors correspond to FSMs that are connected and aperiodic (so that they give rise to irreducible, aperiodic Markov chains and so that for large values of n the probability distribution of states v[n] is given by the unique stationary distribution v in Eq. (3)).
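As a concrete illustration, the stationary distribution in Eq. (3) can be computed numerically as the eigenvector of A for the eigenvalue λ = 1. The sketch below (Python with NumPy; the 4-state compactor that holds its state on input “0” and cycles on input “1” is a hypothetical example, not one from this paper) builds A = Σ pk Ak and extracts the stationary distribution:

```python
import numpy as np

def stationary(A):
    """Stationary distribution of a column-stochastic matrix A:
    the eigenvector for the eigenvalue closest to 1, scaled to sum to 1."""
    w, V = np.linalg.eig(A)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

# Hypothetical 4-state compactor: input "0" holds the state,
# input "1" cycles q1 -> q2 -> q3 -> q4 -> q1.
N = 4
A0 = np.eye(N)                       # hold
A1 = np.roll(np.eye(N), 1, axis=0)   # cyclic shift: column j has its "1" in row (j+1) mod N

p = 0.7                              # Pr(input "0") at each time step
A = p * A0 + (1 - p) * A1            # Eq. (2) with K = 2

v = stationary(A)
print(np.round(v, 6))                # uniform by symmetry: [0.25 0.25 0.25 0.25]
```

Because A0 and A1 are both permutation matrices here, A is doubly stochastic and the stationary distribution is uniform regardless of p; a compactor with non-injective next-state functions would generally yield a non-uniform v.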


In our analysis we will need to consider two identical FSMs that operate in parallel with correlated inputs, as well as the Markov chain that describes their behavior. To capture this concisely, we will make use of the Kronecker product notation [36]. Recall that the Kronecker product of an N1 × M1 matrix A with an N2 × M2 matrix B is denoted by A ⊗ B and is the N1N2 × M1M2 partitioned matrix

A ⊗ B = [ a11 B    a12 B    ···   a1M1 B
          a21 B    a22 B    ···   a2M1 B
            ⋮        ⋮       ⋱       ⋮
          aN1 1 B  aN1 2 B  ···  aN1 M1 B ] ,

where aij is the entry at the ith-row, jth-column position of matrix A.

The following theorem from [36] states an important property of the Kronecker product.

Theorem 2.1 Let A be a square matrix with eigenvalues {λi} and corresponding eigenvectors {xi} and let B be a square matrix with eigenvalues {µi} and corresponding eigenvectors {yi}. Then, A ⊗ B has eigenvalues {λi µj} and corresponding eigenvectors {xi ⊗ yj}.

As will become clearer in the next sections, for long sequences of test input vectors the probability of aliasing depends only on the stationary distribution of the Markov chain that describes the behavior of the compactor. The computation of this stationary distribution is largely a linear-algebraic problem (the stationary distribution of a certain class of large Markov chains is discussed in [37]) and some discussion on the complexity of these computations is provided at the end of Section 4. When necessary, an analysis of the rate of convergence can be used to indicate the appropriate length of the sequence of test input vectors (see, for example, the discussion in [12] for the case of LFSMs or the discussion in [24] for accumulator-based compaction). There are, of course, many ways one can capture the convergence of a Markov chain to its stationary distribution [35, 38, 39].
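Theorem 2.1 is easy to check numerically. The following sketch (Python/NumPy; the small random matrices are an arbitrary stand-in example) compares the spectrum of A ⊗ B against the set of pairwise products {λi µj}:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((2, 2))

lam = np.linalg.eigvals(A)   # eigenvalues {lambda_i} of A
mu = np.linalg.eigvals(B)    # eigenvalues {mu_j} of B

# Spectrum of the Kronecker product vs. all pairwise products lambda_i * mu_j
# (sorted the same way so the two multisets can be compared entrywise).
kron_eigs = np.sort_complex(np.linalg.eigvals(np.kron(A, B)))
products = np.sort_complex(np.outer(lam, mu).ravel())

assert np.allclose(kron_eigs, products)
print("spectrum of A x B equals the products {lambda_i * mu_j}")
```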

3 Asymmetric Error Model

We consider permanent faults that affect the input-output behavior of a combinational circuit but do not change its combinational nature. For simplicity, we primarily discuss single-output combinational circuits in which faults cause the output to be incorrect, i.e., “0” instead of “1” and vice-versa (extensions of this error model to combinational circuits with multiple outputs are described briefly at the end of Section 4). Consider a CUT with L inputs and a single output. There are 2^L possible input vectors and the functionality of the circuit is fully described by its truth table. If the sequence of test input vectors i[0], i[1], i[2], ..., i[n − 1] is randomly generated so that the input vector i[t] at time step t is chosen independently from other times, the error-free output sequence of the fault-free CUT is a white sequence of “0’s” and “1’s,” where a “0” appears with some probability p and a “1” appears with probability 1 − p (independently between different time steps). Moreover, the probability p is independent of the circuit implementation and only depends on the fraction of times a “0” appears in the truth table of the function implemented by the CUT. If we assume for simplicity that the test input vector at each time step is chosen with equal probability among the 2^L possible inputs, p is essentially the syndrome of the CUT [1, 2, 19]. The presence of a permanent¹ fault in the CUT (e.g., a stuck-at fault) will cause errors when certain affected combinations of inputs are applied. The corrupted sequence will still be white as long as test input vectors are randomly chosen, independently between different time steps. In essence, the faulty CUT will correspond to a different truth table and will produce an output sequence in which “0’s” appear with probability p′ and “1’s” appear with probability 1 − p′. We can provide a more accurate probabilistic characterization of the relationship between the output sequences resulting from a fault-free and a faulty CUT under a particular fault by comparing the corresponding truth tables. More specifically, we can talk about the joint probability that the error-free and the corrupted outputs at time step t take particular values. For a single-output CUT one only has to consider four possibilities (namely, Pr(o[t] = 0, of [t] = 0), Pr(o[t] = 0, of [t] = 1), Pr(o[t] = 1, of [t] = 1), Pr(o[t] = 1, of [t] = 0)).
A convenient (and equivalent) way of capturing the joint probabilities of error-free and corrupted outputs at each time step is to describe the conditional probabilities with which an output from a faulty CUT is in error given that the corresponding output of the fault-free CUT is “0” or “1”:

Pr(error | o = 0) = Pr(of = 1 | o = 0) ≡ e0 ,   (4)
Pr(error | o = 1) = Pr(of = 0 | o = 1) ≡ e1 .   (5)

Note that these conditional probabilities hold for each time step and only depend on the number of input combinations that have been affected by the fault. Clearly, p′ ≡ Pr(of = 0) = p(1 − e0) + (1 − p)e1 and 1 − p′ ≡ Pr(of = 1) = pe0 + (1 − p)(1 − e1). Note that in the commonly used independent (also known as Bernoulli) error model (see, for example, the discussions in [31, 16, 40, 17]) the output is considered to be corrupted with some probability e. In terms of the above notation, that probability is given by

e = pe0 + (1 − p)e1 ,   (6)

which is also referred to as the fault activation probability. In this sense, the error model that we use here is more detailed than the independent error model because it includes information about the probability p and the conditional probabilities e0 and e1. Also notice that even when test input vectors are not chosen at random, the asymmetric error model has more flexibility in capturing the probabilistic relationship between the error-free and the corrupted outputs (see, for example, the experimental validations in [31, 16] and the discussions in [41, 17]).

¹ Although transient faults are not discussed here, they can easily be incorporated in this framework.
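For instance, with hypothetical parameter values p = 0.6, e0 = 0.1 and e1 = 0.3 (numbers chosen only for illustration, not taken from the paper), the quantities above work out as follows:

```python
# Hypothetical asymmetric error model parameters (illustrative only).
p, e0, e1 = 0.6, 0.1, 0.3   # syndrome and conditional error probabilities

p_prime = p * (1 - e0) + (1 - p) * e1   # Pr(of = 0) for the faulty CUT
e = p * e0 + (1 - p) * e1               # Eq. (6): fault activation probability

print(round(p_prime, 2), round(e, 2))   # 0.66 0.18
```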

4 Calculation of Aliasing Probability

In Section 2, we argued that a combinational circuit that is driven by a randomly generated sequence of test input vectors will output a white sequence so that the compactor of Figure 1 behaves like a homogeneous Markov chain. More specifically, if the compactor is described by the FSM S in Eq. (1), then the corresponding Markov chain will have transition matrix A = Σ_{k=1}^{K} pk Ak, where pk and Ak represent the probability and transition matrix associated with input xk. In particular, for the case of a CUT with a single output, we have

A = pA0 + (1 − p)A1 ,   (7)

where p is the probability that the output is “0,” 1 − p is the probability that the output is “1,” and A0 and A1 are the transition matrices associated with inputs “0” and “1.” When the CUT is faulty, the compactor is driven by a white input sequence in which a “0” (respectively, a “1”) appears with probability p′ (respectively, 1 − p′). Therefore, the behavior of the compactor is captured by a Markov chain with transition matrix

A′ = p′A0 + (1 − p′)A1 .   (8)

[Figure 2: Compactors driven by fault-free and faulty CUT output sequences. Both the fault-free CUT and the faulty CUT receive the test inputs i[n−1], ..., i[1], i[0]; their output sequences o[n−1], ..., o[1], o[0] and of [n−1], ..., of [1], of [0] drive two copies of the compactor (each an FSM with known q[0]), reaching final states q[n] and qf [n], respectively.]

It is easy to show that if the FSM that is used as a compactor is connected, then both of the above Markov chains are irreducible as long as p ≠ 0, 1 and p′ ≠ 0, 1; if, in addition, the compactor corresponds to an aperiodic FSM, then the above Markov chains are also aperiodic so that, for large values of n, their state distribution at time step n is captured by their corresponding stationary distributions given by

v = Av ,   (9)
v′ = A′v′ .   (10)

This essentially means that for large values of n the compactor driven by the outputs of the fault-free (respectively, faulty) CUT is in state ql at time step n with a probability that is given by the lth entry of v (respectively, v′). To make the discussion easier we will assume for now that the compactor corresponds to a connected and aperiodic FSM (this is not a strict requirement and we discuss how it can be relaxed later on). In what follows we calculate the probability of aliasing for a fault characterized by known e0 and e1 (recall the definitions in Eqs. (4) and (5)). More specifically, the probability of aliasing that we


calculate is parameterized by e0 and e1, and should be interpreted as the probability that a given fault (with known e0 and e1) will not be detected if a sequence of test input vectors is chosen randomly, independently at each time step. In order to calculate this (parameterized) aliasing probability we will make use of the probabilistic relationship between the output sequences generated by a fault-free and a faulty CUT. Essentially, what we want to do is to simultaneously keep track of the state reached by the fictitious compactor (which is driven by the output of the fault-free CUT) and the state reached by the actual compactor (which is driven by the output of the faulty CUT), as shown in Figure 2. Clearly, since the sequence of test input vectors has length n, we need to calculate

Pr(alias) = Pr(q[n] = qf [n]) = Σ_{l=1}^{N} Pr(q[n] = qf [n] = ql) .

Strictly speaking, the aliasing probability should exclude the case when all applied inputs are inputs that correspond to unaffected input vectors. In other words, the probability of aliasing is given by

Pr(alias) = [ Σ_{l=1}^{N} Pr(q[n] = qf [n] = ql) ] − (1 − e)^n ,

where e was defined in Eq. (6). Since (1 − e)^n goes to zero exponentially with n, we avoid this additional nuisance and assume that n is large enough so that this factor can be ignored. Note that the probabilistic description of each of the sequences o = {o[0], o[1], ..., o[n − 1]} and of = {of [0], of [1], ..., of [n − 1]} (as summarized respectively by p and p′) completely captures the behavior of the corresponding compactor [see Eqs. (7) through (10)]. To calculate the probability of aliasing, however, we need to be able to jointly describe the behavior of the two compactors in Figure 2. If the test input sequence is randomly generated, Figure 3 shows an equivalent way of interpreting the situation in Figure 2: sequence o is a white binary sequence in which the output o[t] at a particular time step t is “0” with probability p and “1” with probability 1 − p, whereas sequence of is a white binary sequence that is related to the output sequence o via the conditional error probabilities e0 and e1. More specifically, at time step t, of [t] ≠ o[t], with probability e0 if o[t] = 0 and with probability e1 if o[t] = 1. Note that the input at time step t to the dotted deterministic system H of Figure 3 is given by a


pair of inputs of the form (o[t], of [t]) whose probabilistic description can be summarized as follows:

x′1 ≡ (o[t] = 0, of [t] = 0)   with probability p′1 ≡ p(1 − e0) ,   (11)
x′2 ≡ (o[t] = 0, of [t] = 1)   with probability p′2 ≡ pe0 ,   (12)
x′3 ≡ (o[t] = 1, of [t] = 1)   with probability p′3 ≡ (1 − p)(1 − e1) ,   (13)
x′4 ≡ (o[t] = 1, of [t] = 0)   with probability p′4 ≡ (1 − p)e1 .   (14)

[Figure 3: Alternative model for compaction under error-free and corrupted output sequences. The error-free sequence o[k] directly drives one copy of the compactor (an FSM with known q[0]); a block applying the conditional probabilities e0 and e1 converts o[k] into of [k], which drives a second copy; the dotted box enclosing both compactors is FSM H.]
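As a quick sanity check (using the same illustrative numbers p = 0.6, e0 = 0.1, e1 = 0.3 as before, which are hypothetical and not from the paper), the four input probabilities in Eqs. (11)-(14) form a valid joint distribution whose marginals recover p and p′:

```python
p, e0, e1 = 0.6, 0.1, 0.3    # hypothetical parameters (illustrative only)

p1 = p * (1 - e0)            # Eq. (11): (o, of) = (0, 0)
p2 = p * e0                  # Eq. (12): (o, of) = (0, 1)
p3 = (1 - p) * (1 - e1)      # Eq. (13): (o, of) = (1, 1)
p4 = (1 - p) * e1            # Eq. (14): (o, of) = (1, 0)

assert abs(p1 + p2 + p3 + p4 - 1.0) < 1e-12               # a valid distribution
assert abs((p1 + p2) - p) < 1e-12                         # marginal of o is p
assert abs((p1 + p4) - (p*(1 - e0) + (1 - p)*e1)) < 1e-12 # marginal of of is p'
print(p1, p2, p3, p4)
```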

The dotted system H in Figure 3 is an FSM with N² states that can be conveniently described in terms of pairs of the form (qj, qj′), where qj ∈ Q represents the state of the top compactor and qj′ ∈ Q represents the state of the bottom compactor. In the spirit of Section 2 and to make our connection with Markov chains more transparent, we will indicate the state of H at time step t using a binary indicator vector qh[t] with N² entries and exactly one nonzero entry with value “1.” This single nonzero entry denotes the state of the system. In particular, we will arrange the states of H to be indicated in the order

[ (q1, q1) (q1, q2) ··· (q1, qN) (q2, q1) (q2, q2) ··· (qN, q1) ··· (qN, qN) ]^T ,

so that, when H is in state (qj, qj′) at time step t, the ((j − 1)N + j′)th entry of vector qh[t] is “1”

(and every other entry is “0”). Note that with this choice of notation the indicator vector qh[t] for FSM H is simply the Kronecker product qh[t] = q[t] ⊗ q′[t], where q[t] (q′[t]) is the indicator vector for the state of the top (bottom) compactor in Figure 3. In terms of this notation, we can develop explicit formulas for the transition matrices associated with the four inputs to FSM H using the Kronecker product notation:

Ax′1 = A0 ⊗ A0 ,
Ax′2 = A0 ⊗ A1 ,
Ax′3 = A1 ⊗ A1 ,
Ax′4 = A1 ⊗ A0 .

The easiest way to see this is to invoke the Kronecker product property (Ak q[t]) ⊗ (Ak′ q′[t]) = (Ak ⊗ Ak′)(q[t] ⊗ q′[t]) = (Ak ⊗ Ak′) qh[t] (see [36] for a proof). Since the FSM H in Figure 3 is driven by a white input sequence whose input at time step t is chosen with fixed probabilities [as indicated by Eqs. (11) through (14)], the resulting behavior of H is captured by a Markov chain with N² states and transition matrix

Ah = Σ_{k=1}^{4} p′k Ax′k .   (15)

It is easy to show that if FSM S is connected and aperiodic, then FSM H will also be connected and aperiodic. In such a case, we are guaranteed to have a unique stationary distribution vh that satisfies

vh = Ah vh   (16)

and also describes the probabilities with which states are occupied (at least for a sufficiently long n). The probability of aliasing is simply the probability that the sequence of test input vectors causes both the top and the bottom compactor in Figure 3 to reach a final state of the form (qj, qj), 1 ≤ j ≤ N; thus, the probability of aliasing is given by the sum of the entries of vh that correspond to this type of state. Essentially this discussion leads to the following theorem and corollary.

Theorem 4.1 Consider a two-input, connected, aperiodic FSM S with N states and transition matrices A0 and A1 that is used to compact the response of a single-output combinational CUT. Assuming that S is initialized in state qj and that each test input vector is chosen independently between different time steps (so that the probabilities p, e0 and e1 for the output sequence are well-defined as described in Section 3), the resulting probability of aliasing at time step n is given by

Pr(alias) = [ Σ_{l=1}^{N} vh[n]((l − 1)N + l) ] − (1 − e)^n ,

where vh[n] = Ah^n vh[0] is the probability distribution of states at time step n and vh[0] is a vector with a unique non-zero entry with value “1” at its ((j − 1)N + j)th position.

Corollary 4.1 Under the conditions in Theorem 4.1 and assuming that the sequence of test input vectors is long enough, the resulting probability of aliasing is given by

Pr(alias) = Σ_{l=1}^{N} vh((l − 1)N + l) ,

where vh is the unique stationary distribution that satisfies Eq. (16).

Note that the assumption of aperiodicity in the above statements can be relaxed in a straightforward way: if the compactor S is (connected but) periodic with period D then FSM H will be disconnected and periodic with period D. In fact, it can be shown that H comprises D disconnected components (subsets of states), each of which forms a periodic submachine with period D. The fact that H consists of D disconnected subsets of states implies that there exist exactly D stationary distributions that satisfy vhi = Ah vhi, i ∈ {1, 2, ..., D}. Since in our setup both compactors of Figure 3 are initialized in the same state, say q[0] = qf [0] = qj, we are only interested in the stationary distribution that is reached from initial state (qj, qj). Notice that, regardless of the actual initial state (qj, qj), 1 ≤ j ≤ N, the stationary distribution of the Markov chain is well defined because states of the form (qj, qj) for 1 ≤ j ≤ N are always reachable from each other through a sequence of inputs of the form x′1 and/or x′3 in Eqs. (11) and (13). However, the fact that machine H is periodic with period D implies that there are multiple eigenvalues of unit magnitude and, as a consequence, one has to be careful with the choice of n because the value of n mod D is a determining factor for the probability distribution of states at time step n, given by vh[n] = Ah^n vh[0].
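To make Theorem 4.1 and Corollary 4.1 concrete, the sketch below (Python/NumPy; the 4-state compactor that holds on “0” and cycles on “1”, and the fault parameters, are hypothetical choices rather than examples from this paper) builds the joint chain Ah of Eq. (15), iterates vh[n] = Ah^n vh[0], and compares the finite-n aliasing probability against the stationary value of Corollary 4.1:

```python
import numpy as np

# Hypothetical 4-state compactor: hold on input "0", cycle on input "1".
N = 4
A0 = np.eye(N)
A1 = np.roll(np.eye(N), 1, axis=0)

# Illustrative fault parameters (see Section 3).
p, e0, e1 = 0.6, 0.1, 0.3
e = p * e0 + (1 - p) * e1                     # Eq. (6)

# Eq. (15): joint chain over state pairs, weighted by Eqs. (11)-(14).
Ah = (p * (1 - e0) * np.kron(A0, A0)
      + p * e0 * np.kron(A0, A1)
      + (1 - p) * (1 - e1) * np.kron(A1, A1)
      + (1 - p) * e1 * np.kron(A1, A0))

# Both compactors start in q1, so vh[0] indicates the pair (q1, q1).
vh_n = np.zeros(N * N)
vh_n[0] = 1.0
n = 200
for _ in range(n):
    vh_n = Ah @ vh_n                          # vh[n] = Ah^n vh[0]

# Theorem 4.1: sum the entries for "diagonal" pairs (ql, ql), subtract (1 - e)^n.
p_alias_n = vh_n[::N + 1].sum() - (1 - e) ** n

# Corollary 4.1: long-run value from the stationary distribution of Ah.
w, V = np.linalg.eig(Ah)
vh = np.real(V[:, np.argmin(np.abs(w - 1.0))])
vh /= vh.sum()
p_alias = vh[::N + 1].sum()

print(round(p_alias_n, 6), round(p_alias, 6))  # both come out to about 1/N = 0.25
```

For this particular compactor both A0 and A1 are permutation matrices, so Ah is doubly stochastic and the long-run aliasing probability is exactly 1/N, matching the figure quoted in the introduction for well-designed linear compactors.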

We now turn our attention to a special case in which the calculation of the aliasing probability can be simplified because the Markov chain that describes the joint behavior of the two compactors in Figure 3 is equivalent to the "product" of the smaller Markov chains that describe the behavior of each compactor separately. We refer to this case as the case of decoupled compactors and show that, when the stationary distributions of the two (decoupled) compactors are equal, the problem of minimizing the probability of aliasing is equivalent to the problem of maximizing the entropy of the stationary distribution of compactor states. Maximization of the entropy of the stationary distribution was an attribute that was studied in [32] for the class of linear compactors and a certain subclass of nonlinear compactors. In general, the transition matrix Ah defined in Eq. (15) is not equal to A ⊗ A′ [A and A′ were defined in Eqs. (7) and (8)]. As shown in the following corollary, however, equality does hold if the following (sufficient) condition is satisfied.

Corollary 4.2 Consider the transition matrices A, A′ and Ah in Eqs. (7), (8) and (15), where p, p′, e0, e1 are as defined in Section 3. If e0 + e1 = 1, then Ah = A ⊗ A′.

Proof: Since the Kronecker product distributes over matrix addition (see, for example, [36]), we have

A ⊗ A′ = (pA0 + (1 − p)A1) ⊗ (p′A0 + (1 − p′)A1)
       = pp′ A0 ⊗ A0 + p(1 − p′) A0 ⊗ A1 + (1 − p)(1 − p′) A1 ⊗ A1 + (1 − p)p′ A1 ⊗ A0 .

It is easy to check that if e0 + e1 = 1, then the coefficients pp′, p(1 − p′), (1 − p)(1 − p′) and (1 − p)p′ above match the corresponding coefficients of Ah in Eq. (15).                                                              □
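The corollary is easy to sanity-check numerically. The minimal sketch below uses the five-state saturating up/down counter analyzed in Section 6 as FSM S; the four coefficients used to form Ah follow the matching argument in the proof above (Eq. (15) itself is not reproduced here), and p, e0 are arbitrary example values with e1 chosen so that e0 + e1 = 1.

```python
import numpy as np

# Five-state saturating up/down counter (the compactor analyzed in Section 6):
# input "0" moves the state toward q1, input "1" toward q5. Matrices are
# column-stochastic: column j holds the next-state distribution from state qj.
A0 = np.array([[1, 1, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 0, 1, 0],
               [0, 0, 0, 0, 1],
               [0, 0, 0, 0, 0]], dtype=float)
A1 = np.array([[0, 0, 0, 0, 0],
               [1, 0, 0, 0, 0],
               [0, 1, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 0, 1, 1]], dtype=float)

p, e0 = 0.7, 0.3
e1 = 1.0 - e0                        # enforce the corollary's condition e0 + e1 = 1
pp = p * (1 - e0) + (1 - p) * e1     # p' for the faulty-response chain

A  = p * A0 + (1 - p) * A1           # fault-free chain, as in Eq. (7)
Ap = pp * A0 + (1 - pp) * A1         # faulty chain, as in Eq. (8)

# Joint chain; the four coefficients follow the matching argument in the proof.
Ah = (p * (1 - e0) * np.kron(A0, A0) + p * e0 * np.kron(A0, A1)
      + (1 - p) * e1 * np.kron(A1, A0) + (1 - p) * (1 - e1) * np.kron(A1, A1))

decoupled = bool(np.allclose(Ah, np.kron(A, Ap)))
print(decoupled)                     # True: Ah decouples when e0 + e1 = 1
```

Repeating the check with e0 + e1 ≠ 1 makes the comparison fail, which is consistent with the condition being more than a formality.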

The following lemma makes use of the fact that, under the above conditions, the stationary distribution of the Markov chain capturing the behavior of H can be obtained from the individual stationary distributions v and v′. This greatly simplifies the calculation of the aliasing probability because the probability that a particular sequence of test input vectors will drive the compactor to the final state ql under both fault-free and faulty conditions is simply given by the product v(l)v′(l) of the individual stationary distributions.

Lemma 4.1 Consider a two-input, connected, aperiodic FSM S with N states and transition matrices A0 and A1 that is used to compact the response of a single-output combinational circuit. Let v denote the stationary distribution arising when S is driven with a white input sequence in which input "0" appears with probability p and input "1" appears with probability 1 − p, and let v′ denote the stationary distribution arising when S is driven with a white input sequence in which input "0" appears with probability p′ and "1" appears with probability 1 − p′ [as in Eqs. (9) and (10)]. If the relationship between p and p′ is given by p′ = p(1 − e0) + (1 − p)e1 and e0 + e1 = 1, then the probability of aliasing (for long sequences of inputs) is given by

Pr(alias) = v^T v′ ≡ Σ_{l=1}^{N} v(l)v′(l) .

Proof: When e0 + e1 = 1, we have A ⊗ A′ = Ah. In that case, we can invoke Theorem 2.1 in Section 2 to conclude that the stationary distribution vector vh associated with the transition matrix Ah satisfies vh = v ⊗ v′. Note that in the case of a connected, aperiodic FSM S the above stationary distribution is guaranteed to be unique because A has a unique eigenvalue of unit magnitude at λ = 1 and A′ has a unique eigenvalue of unit magnitude at λ′ = 1. (If FSM S were connected but periodic, then there would be multiple eigenvalues of unit magnitude for both A and A′, resulting in a matrix Ah with multiple eigenvalues at λh = 1 and thus multiple stationary distributions; this is another way of seeing how a connected but periodic FSM S can lead to a disconnected H.) Once the stationary distribution vh is available, the aliasing probability is simply given by the sum of entries of the form vh((l − 1)N + l) = v(l)v′(l) over all l.                                                              □
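Lemma 4.1 can be verified numerically by comparing the diagonal mass of the joint chain's stationary distribution with v^T v′. The sketch below reuses the five-state saturating counter of Section 6; the coefficients of Ah follow the matching argument in the proof of Corollary 4.2 (Eq. (15) itself is not reproduced here), and the parameter values are arbitrary examples satisfying e0 + e1 = 1.

```python
import numpy as np

def stationary(M, iters=5000):
    """Stationary distribution of a column-stochastic matrix by power iteration."""
    v = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(iters):
        v = M @ v
    return v

# Five-state saturating up/down counter of Section 6 (column-stochastic).
A0 = np.array([[1, 1, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 0, 1, 0],
               [0, 0, 0, 0, 1],
               [0, 0, 0, 0, 0]], dtype=float)
A1 = np.array([[0, 0, 0, 0, 0],
               [1, 0, 0, 0, 0],
               [0, 1, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 0, 1, 1]], dtype=float)

N = 5
p, e0 = 0.6, 0.25
e1 = 1.0 - e0                                 # decoupling condition e0 + e1 = 1
pp = p * (1 - e0) + (1 - p) * e1

v  = stationary(p * A0 + (1 - p) * A1)        # fault-free compactor
vp = stationary(pp * A0 + (1 - pp) * A1)      # faulty compactor

# Joint chain; coefficients as in the proof of Corollary 4.2.
Ah = (p * (1 - e0) * np.kron(A0, A0) + p * e0 * np.kron(A0, A1)
      + (1 - p) * e1 * np.kron(A1, A0) + (1 - p) * (1 - e1) * np.kron(A1, A1))
vh = stationary(Ah)

alias_joint = sum(vh[l * N + l] for l in range(N))   # mass on states (ql, ql)
alias_lemma = float(v @ vp)                          # v^T v' from the lemma
print(alias_joint, alias_lemma)
```

The two numbers agree to numerical precision, as the lemma predicts.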

Before closing this section, we comment briefly on the complexity of our analysis methodology as well as on its extension to CUTs with P > 1 outputs.

Complexity of computations. Computational cost is dominated by the computation of the probability distribution vh[n] (for relatively short sequences of test input vectors) or by the computation of the stationary probability distribution vh (for relatively long sequences). To obtain vh[n], we need to calculate Ah^n vh[0], where Ah is the N² × N² transition matrix of the associated Markov chain; therefore, the computational complexity is O(nN⁴). (Actually, for large n we might want to calculate Ah² = Ah Ah, Ah⁴ = Ah² Ah², ..., Ah^n = Ah^{n/2} Ah^{n/2}, and obtain vh[n] as vh[n] = Ah^n vh[0], which has O(N⁶ log n) complexity.) To obtain vh we need to perform an eigenvector decomposition, which costs roughly O(N⁶) (calculating the eigenvalues/eigenvectors of an η × η matrix costs roughly O(η³)). Notice that in our case the task is made easier because we normally deal with a sparse matrix and we look for a single eigenvector (the one that corresponds to eigenvalue one). The graphs for the examples in the next section, for instance, were generated using the Matlab function eig, which calculates the eigenvalues and eigenvectors of a matrix relatively easily; for the larger example we used a sparse matrix representation and the command eigs.²

Extensions to CUTs with multiple outputs. Apart from methods that essentially reduce the problem to the single-output case (e.g., by XORing all P outputs of the CUT to obtain a single overall output), an interesting generalization of what we described in this section is the case of a compactor that admits P different inputs, so that its functionality is completely captured by 2^P next-state transition matrices Ai, where i can be thought of as a binary P-dimensional indicator vector in {000..0, 000..1, ..., 111..0, 111..1}. Given a combinational circuit with L inputs and P outputs, the functionality of the CUT is fully described by a table of 2^L entries, each associated with a P-dimensional binary vector. A given fault can corrupt each one of these P-dimensional binary output vectors into some other P-dimensional output vector. To calculate the aliasing probability for a given fault, what is important is the fraction of times a particular P-dimensional binary output vector is corrupted into another P-dimensional binary output vector. In fact, all we need is a table with 2^P × 2^P probability values pi,j for i, j ∈ {000..0, 000..1, ..., 111..0, 111..1}: each pi,j indicates the probability that output vector i gets corrupted to output vector j when we randomly select an input to our combinational circuit. [Notice that several of these probability values could be zero; see also the discussion in [13].] With these probabilities known, given a compactor with

2. Memory was the main concern when running the larger example; by invoking sparse matrix representations, Matlab was able to generate the plots of Figure 6 in Section 6, each of which consists of 100 points, relatively easily: it took about six hours on a Sun workstation with dual 1 GHz UltraSPARC III CPUs (RISC) and 2 GB of RAM.


N states and P inputs, the associated Markov chain has transition matrix

Ah = Σ_{i,j} pi,j Ai ⊗ Aj ,

where i and j range over all possible indices in the set {000..0, 000..1, ..., 111..0, 111..1}. By calculating the stationary distribution vh and by summing up the appropriate elements as in Corollary 4.1, we can obtain the probability of aliasing parameterized by the given pi,j (for shorter sequences we can use the equation in Theorem 4.1 to calculate the probability of aliasing). Note that when looking at CUTs with multiple outputs the size of the matrix Ah remains N² × N²; the only thing that affects the computational complexity in this multiple-output case is the fact that the matrix becomes denser as the number of inputs increases.
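To make the multiple-output construction concrete, the sketch below builds Ah = Σ_{i,j} pi,j Ai ⊗ Aj for a toy P = 2 case. The compactor matrices Ai and the corruption table pi,j are randomly generated placeholders, not taken from the paper; pi,j is normalized as a joint distribution over (i, j) pairs, and the check at the end confirms that Ah remains column-stochastic.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, P = 4, 2                          # toy sizes: 4-state compactor, 2-output CUT
patterns = list(product((0, 1), repeat=P))

# One column-stochastic (here deterministic) next-state matrix per P-bit
# output pattern; a randomly generated toy machine, not taken from the paper.
A = {}
for bits in patterns:
    M = np.zeros((N, N))
    M[rng.integers(0, N, size=N), np.arange(N)] = 1.0
    A[bits] = M

# Toy corruption table: pij[i, j] = Pr(fault-free output pattern i is observed
# as pattern j); normalized so the entries form a joint distribution.
pij = rng.random((2**P, 2**P))
pij /= pij.sum()

Ah = sum(pij[i, j] * np.kron(A[patterns[i]], A[patterns[j]])
         for i in range(2**P) for j in range(2**P))

col_sums = Ah.sum(axis=0)            # each column of Ah should still sum to 1
print(Ah.shape, col_sums)
```

Note that Ah has size N² × N² regardless of P, as stated above; only its density grows with the number of compactor inputs.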

5  Limitations of Linear Compactors

In this section we demonstrate the limitations of linear compactors under the asymmetric error model discussed in Section 3. Armed with the result in Lemma 4.1, it is relatively straightforward to show that, when e0 + e1 = 1 and v = v′, the smallest possible aliasing probability is 1/N.

Lemma 5.1 Consider a two-input, connected, aperiodic FSM S with N states and transition matrices A0 and A1 that is used to compact the response of a single-output combinational circuit. Let v denote the stationary distribution arising when S is driven with a white input sequence in which input "0" appears with probability p and "1" appears with probability 1 − p, and let v′ denote the stationary distribution arising when S is driven with a white input sequence in which input "0" appears with probability p′ and "1" appears with probability 1 − p′ [as in Eqs. (9) and (10)]. Let the relationship between p and p′ be given by p′ = p(1 − e0) + (1 − p)e1. If e0 + e1 = 1 and v = v′, then the probability of aliasing satisfies

Pr(alias) ≥ 1/N .

Proof: When e0 + e1 = 1, then A ⊗ A′ = Ah and (by Lemma 4.1) the probability of aliasing is given by Pr(alias) = v^T v′. Since v = v′, the goal of minimizing the aliasing probability is equivalent to minimizing

min v^T v ≡ min Σ_{l=1}^{N} v²(l)

subject to the constraints that Σ_{l=1}^{N} v(l) = 1 and v(l) ≥ 0 for all l. It is easy to obtain the values that minimize the function above as v(l) = 1/N, l ∈ {1, 2, ..., N}, which result in an aliasing probability of 1/N.                                                              □

Note that the values v(l) = 1/N, l ∈ {1, 2, ..., N}, which minimize the aliasing probability in the above minimization are also the values that maximize the entropy of the probability vector v, given by

Entropy(v) = − Σ_{l=1}^{N} v(l) log[v(l)] .
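A quick numerical illustration (random sampling over the probability simplex, not a proof) of the fact that the uniform vector simultaneously minimizes v^T v and maximizes the entropy:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
uniform = np.full(N, 1.0 / N)

ok = True
for _ in range(1000):
    v = rng.random(N)
    v /= v.sum()                          # a random point on the simplex
    entropy = -float(np.sum(v * np.log(v)))
    # v.v >= 1/N and Entropy(v) <= log N, with equality at the uniform vector.
    ok = ok and (float(v @ v) >= 1.0 / N - 1e-12) and (entropy <= float(np.log(N)) + 1e-12)

print(ok, float(uniform @ uniform), 1.0 / N)
```

The first bound follows from the Cauchy-Schwarz inequality and the second from Jensen's inequality; the sampling merely makes the pairing visible.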

Maximizing the entropy of v was the criterion used in [32] for minimizing the probability of aliasing (without an explicit justification). The above discussion shows that this intuition is justified as long as (i) the probability vectors v and v′ are independent,³ and (ii) the two stationary distributions of the two fictitious compactors satisfy v = v′. We now argue that requirement (i) holds for most linear finite-state machines (LFSMs) of interest. When the two compactors in Figure 2 are identical LFSMs, their state evolution is captured by

q[t + 1] = Cq[t] ⊕ bo[t] ,
q′[t + 1] = Cq′[t] ⊕ bof[t] ,

where q[t] (q′[t]) is the state of the top (bottom) compactor at time step t, and o[t] (of[t]) is the input of the top (bottom) compactor at time step t. We assume that q[·] and q′[·] are d-dimensional binary vectors (so that the number of states is N = 2^d), C is a d × d matrix and b is a d-dimensional column vector. All vectors and matrices have entries in GF(2), the finite (Galois) field of order 2, and matrix-vector multiplication (denoted by juxtaposition) and vector-vector addition (denoted

3. The case of decoupled compactors essentially means that the stationary distribution of states in the compactor on the top of Figure 4 is independent of the stationary distribution of states in the compactor at the bottom of Figure 4. Therefore, if we pick a sequence of test input vectors, the probability that the top compactor ends up in state ql is given by v(l) and the probability that the bottom compactor ends up in state ql is given by v′(l). Since distributions v and v′ are independent, the probability of aliasing is simply the probability that the sequence of test input vectors ends up driving the two compactors to the same state ql ∈ Q, or Pr(alias) = v^T v′ ≡ Σ_{l=1}^{N} v(l)v′(l).


by ⊕) are performed as usual except that element-wise addition and multiplication are taken as the operations (⊕ and ⊗, respectively) in GF(2). Under this setup, the final states of the two compactors in Figure 2 are given by

q[n] = C^n q[0] ⊕ Σ_{t=0}^{n−1} C^{n−1−t} b o[t] ,
q′[n] = C^n q′[0] ⊕ Σ_{t=0}^{n−1} C^{n−1−t} b of[t] .

Since q[0] = q′[0], aliasing occurs when

Σ_{k=0}^{n−1} C^{n−1−k} b (o[k] ⊕ of[k]) = 0 ,

where we write e[k] for the error bit o[k] ⊕ of[k] (note that subtraction in GF(2) is equivalent to addition). Clearly, aliasing does not depend on the sequences o = {o[0], o[1], ..., o[n − 1]} and of = {of[0], of[1], ..., of[n − 1]} individually but only on the sequence e = {e[0], e[1], ..., e[n − 1]}. Also, if test input vectors are randomly chosen, the sequence e will be a white binary sequence and will satisfy

Pr(e[t] = 1) = e ,   Pr(e[t] = 0) = 1 − e

for all t, where e was defined in Eq. (6). This argument shows that the aliasing probability for LFSMs depends only on e and not on the particular values of p, e0 and e1. More importantly, for any value of e, we can choose p, e0 and e1 so that e0 + e1 = 1, i.e., so that requirement (i) above is satisfied (for example, we can choose p = 1, e0 = e and e1 = 1 − e). Note that the requirement of aperiodicity will also be satisfied for any controllable LFSM, i.e., an LFSM that can be driven to any state from any initial state with a sequence of d inputs (recall that the given LFSM is controllable if the d × d matrix [b Cb C²b ··· C^{d−1}b] is invertible). This would be true, for

example for linear feedback shift registers (like the ones studied in [5, 8]).
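The linearity argument above can be demonstrated with a small simulation: the final-state discrepancy of an LFSM pair depends only on the error sequence e, not on the fault-free output o. The 4-bit matrices C and b below are an illustrative choice, not taken from the paper.

```python
import numpy as np

d = 4
# Toy LFSR in state-space form over GF(2): q[t+1] = C q[t] XOR b o[t].
# The feedback taps (matrix C) and input vector b are illustrative choices.
C = np.array([[0, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 1]], dtype=np.uint8)
b = np.array([1, 0, 0, 0], dtype=np.uint8)

def run(seq):
    """Drive the LFSM from the all-zero state with the binary sequence seq."""
    q = np.zeros(d, dtype=np.uint8)
    for bit in seq:
        q = (C @ q + b * bit) % 2
    return q

rng = np.random.default_rng(2)
n = 50
e = rng.integers(0, 2, n, dtype=np.uint8)       # one fixed error sequence

diffs = []
for _ in range(3):                               # three different fault-free outputs o
    o = rng.integers(0, 2, n, dtype=np.uint8)
    of = o ^ e                                   # faulty output differs exactly where e = 1
    diffs.append(run(o) ^ run(of))               # final-state discrepancy

same = all(bool((dd == diffs[0]).all()) for dd in diffs)
print(same, diffs[0])
```

By linearity over GF(2), the discrepancy q[n] ⊕ q′[n] equals Σ C^{n−1−k} b e[k] for every choice of o, which is exactly what the three runs confirm.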

Requirement (ii), which requires the stationary distributions of the two fictitious compactors to be the same, will also be satisfied for most LFSMs of interest (e.g., if our LFSM is a linear feedback shift register whose feedback function includes the last bit [5, 8]).

Figure 4: Compactor (top) and corresponding Markov chain (bottom).

6  Examples and Experimental Results

In this section, we analyze the aliasing probability for compactors of the form shown on the top of Figure 4. These compactors have in general N states (N = 5 in Figure 4), K = 2 inputs (denoted by x0 = 0 and x1 = 1 in the figure) and can be easily checked to be aperiodic. We start with N = 5 states for notational simplicity and we then extend our examples to larger N (N = 16 and N = 256). For N = 5, the transition matrices associated with inputs x0 and x1 are given by 

A0 =
[ 1 1 0 0 0
  0 0 1 0 0
  0 0 0 1 0
  0 0 0 0 1
  0 0 0 0 0 ] ,

A1 =
[ 0 0 0 0 0
  1 0 0 0 0
  0 1 0 0 0
  0 0 1 0 0
  0 0 0 1 1 ] .

We assume that the test input vectors to the CUT (not shown) are drawn independently between different time steps so that, under fault-free conditions, they generate a white output sequence in which “0’s” appear with probability p and “1’s” appear with probability 1 − p. In such case, the


resulting Markov chain (shown at the bottom of Figure 4) has state-transition matrix

A = pA0 + (1 − p)A1
  = [ p    p    0    0    0
      1−p  0    p    0    0
      0    1−p  0    p    0
      0    0    1−p  0    p
      0    0    0    1−p  1−p ]                                 (17)

and stationary distribution

v = (1 − α)/(1 − α⁵) · [1 α α² α³ α⁴]^T ,                       (18)

where α = (1 − p)/p.
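Equations (17) and (18) can be checked numerically; the sketch below verifies that the claimed v is indeed stationary for A (the value of p is an arbitrary example).

```python
import numpy as np

p = 0.35
N = 5
alpha = (1 - p) / p

# State-transition matrix of Eq. (17) for the 5-state counter.
A = np.array([[p,     p,     0,     0,     0    ],
              [1 - p, 0,     p,     0,     0    ],
              [0,     1 - p, 0,     p,     0    ],
              [0,     0,     1 - p, 0,     p    ],
              [0,     0,     0,     1 - p, 1 - p]])

# Stationary distribution of Eq. (18): v(l) proportional to alpha^(l-1).
v = alpha ** np.arange(N)
v *= (1 - alpha) / (1 - alpha**N)      # normalization constant of Eq. (18)

resid = float(np.max(np.abs(A @ v - v)))
print(resid, float(v.sum()))           # residual ~ 0, total probability 1
```

The same check goes through for any p in (0, 1) with p ≠ 1/2 (for p = 1/2 one has α = 1 and v is simply the uniform vector).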

If the CUT is faulty, it will generate a white output sequence in which “0’s” appear

with probability p′ and "1's" appear with probability 1 − p′. In such case, the resulting Markov chain has state-transition matrix A′ and stationary distribution v′ given by Eqs. (17) and (18) with p replaced by p′. According to the analysis in Section 4, if the behavior of the faulty CUT causes the corruption of its output with probability e0 when the error-free output is "0" and with probability e1 when the error-free output is "1," the associated Markov chain has transition matrix Ah described by Eq. (15).

In general, the aliasing probability is a function of p (which is determined by the error-free functionality of the CUT) and of e0 and e1 (which depend on the particular fault that influences the functionality of the CUT). For simplicity, in Figure 5 we keep e0 = e1 and plot the aliasing probability as a function of p and e (where e = e0 = e1); later we revisit this example for the case when e0 and e1 are different. As seen in Figure 5, the aliasing probability is generally low for high values of e and generally high for low values of e; also, as expected (since e0 = e1), the plot is symmetric about the plane p = 0.5. More interestingly, for e > 1/2 the aliasing probability is lower than or equal to 0.2 = 1/N, where N is the number of states of the compactor. Notice that since 1/N is the aliasing probability expected from a properly designed LFSM (e.g., a linear feedback shift register with a primitive polynomial) with N states, the plot suggests that compaction using the FSM on the top of Figure 4 will perform better than linear compactors when the fault causes a high value of e.



Figure 5: Predicted values for Pr(alias) as a function of p and e for the compactor in Figure 4.

(Strictly speaking, since the number of states of an LFSM is always a power of 2, this comparison should be made for an N that is a power of two.) In Figure 6 we present the probability of aliasing for compactors with a structure similar to the one in Figure 4 but with N = 16 (top) and N = 256 (bottom) states. In all three cases (N = 5, 16, 256), the probability of aliasing is calculated for e0 = e1 = e and is plotted as a function of p and e; in order to highlight the regions where the nonlinear compactor has lower aliasing probability, the plots in Figure 6 are rotated and use a dB scale for the aliasing probability (i.e., they plot 20 log10(AP/(1/N)), where AP denotes the aliasing probability and 1/N is the aliasing probability for a linear compactor with N states). We observe that the probability of aliasing is less than or equal to 1/N (or, equivalently, 20 log10(AP/(1/N)) is smaller than zero) when e > 1/2 (i.e., when the error rate is relatively high); furthermore, the transition around e = 1/2 appears to become sharper as N

increases. Note that the above predictions have also been verified experimentally. More specifically, for each compactor, we randomly chose p, e0 and e1 , and generated random binary sequences by independently choosing each sample to be “0” with probability p and “1” with probability 1 − p. Each of these error-free sequences was then “corrupted” to a corresponding erroneous sequence according to e0 and


e1 . Finally, each pair of error-free and corrupted binary sequences was used to drive the compactor starting from the same initial state. By tracking the percentage of sequence pairs (error-free and corrupted) that drove the compactor to the same final state, we verified that this percentage agreed with our theoretical calculation of the probability of aliasing. In Figure 7, we show the predicted aliasing probability as a function of e0 and e1 for p = 0.1, p = 0.4 and p = 0.5 (naturally, things are symmetric for p = 0.9 = 1 − 0.1 and p = 0.6 = 1 − 0.4). For p = 0.1, we see that the value of e0 appears to be largely irrelevant for the aliasing probability, whereas the value of e1 is extremely important. When p = 0.4, the plot suggests that the aliasing probability does not depend as heavily on e1 but e1 still appears to be the determining factor. When p = 0.5, the probability of aliasing is symmetric with respect to e0 and e1 and the plot suggests that the aliasing probability is equal to 0.2 =

1 N

along the line e0 + e1 = 1.
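The Monte Carlo experiment described above can be sketched as follows. The parameter values are arbitrary examples, chosen here with e0 + e1 = 1 so that the empirical rate can be compared against the decoupled prediction v^T v′ (roughly 0.15 for these parameters).

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, trials = 5, 250, 3000
p, e0, e1 = 0.45, 0.3, 0.7            # e0 + e1 = 1, so Lemma 4.1 applies

def compact(bits):
    """Final state of the N-state saturating up/down counter
    ("0" steps toward state 0, "1" toward state N-1), starting at state 0."""
    q = 0
    for bit in bits:
        q = min(q + 1, N - 1) if bit else max(q - 1, 0)
    return q

hits = 0
for _ in range(trials):
    out = (rng.random(n) >= p).astype(int)            # fault-free output: "1" w.p. 1-p
    flip = np.where(out == 0, rng.random(n) < e0, rng.random(n) < e1)
    out_f = out ^ flip.astype(int)                    # corrupt per e0 / e1
    hits += int(compact(out) == compact(out_f))       # same final state = aliasing

print(hits / trials)                                  # empirical Pr(alias)
```

With long-enough sequences the empirical rate settles near the stationary prediction, which is the agreement reported in the paragraph above.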

Some of the above observations can also be confirmed analytically using our results on decoupled compactors. For example, in the plots of Figures 5 and 6, Pr(alias) equals 1/N along the line e = e0 = e1 = 1/2. Since in all plots e0 = e1, we have e0 + e1 = 1 when e0 = e1 = 1/2; then, according to Lemma 4.1, the distributions of the two compactors on the top and at the bottom of Figure 4 along the line e = 1/2 are independent. The distribution of the top compactor is given by

v = (1 − α)/(1 − α^N) · [1 α α² ··· α^{N−1}]^T ,                (19)

where α = (1 − p)/p [see Eq. (18)]. Similarly, the distribution of the bottom compactor is given as

v′ = (1 − α′)/(1 − (α′)^N) · [1 α′ (α′)² ··· (α′)^{N−1}]^T ,    (20)

where α′ = (1 − p′)/p′ and p′ = p(1 − e0) + (1 − p)e1. Since we are restricting attention along the line e0 = e1 = 1/2, we have p′ = 1/2 and α′ = 1. Therefore,

v′ = (1/N) · [1 1 ··· 1]^T .


Figure 6: Predicted values (in dB) for Pr(alias) as a function of p and e for a compactor with similar structure as the one in Figure 4 but with N = 16 (top) and N = 256 (bottom) states.



Figure 7: Predicted values for Pr(alias) as a function of e0 and e1 for the compactor in Figure 4 for p = 0.1 (top), p = 0.4 (middle) and p = 0.5 (bottom).


By Lemma 4.1, the aliasing probability is given by

Pr(alias) = Σ_{l=1}^{N} v(l)v′(l)
          = Σ_{l=1}^{N} (1/N) · (1 − α)/(1 − α^N) · α^{l−1}
          = 1/N .

We conclude that along the line e0 = e1 = 1/2, the aliasing probability in the plots of Figures 5 and 6 is exactly 1/N. Similarly, we can also explain why, in the plot at the bottom of Figure 7, the probability

Similarly, we can also explain why, in the plot at the bottom of Figure 7, the probability

of aliasing along the line e0 + e1 = 1 is exactly 0.2 =

1 N

(N in this case is 5) and what happens along

the line e0 + e1 = 1 in the top and middle plots of Figure 7. The examples in this section demonstrate that, at least for certain classes of faults, a nonlinear compactor can achieve a lower aliasing probability than a linear one. Specifically, for certain values of p, e0 and e1 , the compactor in Figure 4 with N states can result in a lower aliasing probability than the limit of

1 N

that is exhibited by linear compactors. Also, from these examples, it is clear

that maximizing the entropy of the stationary distribution of compactor states (i.e., having v = + , 1 as in linear compaction schemes) will not necessarily minimize the probability of 1 1 · · · 1 N

error. Note that we are not advocating that nonlinear saturating up/down counters should replace linear compactors as this would be highly dependent on the type of faults that we typically expect in our CUTs and, in particular, the resulting e0 and e1 (note that p is not dependent on the particular fault). What is made clear by our analysis is that the activation probabilities associated with a particular fault are extremely important in determining the aliasing probability in an arbitrary compactor. Since fault activation probabilities do not appear explicitly in linear compaction schemes, very few researchers have studied the distribution of errors under different faults or different types of faults. There has been a fair amount of related work on probabilistic estimation of digital circuit testability (see, for example, [42, 43, 44, 45, 46, 47, 48]) but careful investigation of the fault activation profile of different faults in specific circuits has been scarcely considered. Reference [49] reports the minimum nonzero, average and maximum fault activation probabilities for single stuck-at faults for several circuits in the ISCAS’85 benchmark suite [50]. What is more important for our analysis are the fault 27

activation profiles of circuits of interest. For instance, the second output of benchmark c17 from the ISCAS'85 benchmark suite [50] (c17 has 5 inputs and 2 outputs) has the fault activation profile shown in Figure 8. This histogram shows the number of single stuck-at faults (there are 12 gates and thus 24 single stuck-at faults in c17) that produce a particular fault activation probability e as defined in Eq. (6). Note that there are only three faults that cause 18 out of a total of 32 outputs to be erroneous (e = 0.5625); these faults correspond to (i) gate 6 stuck-at 0, (ii) gate 10 stuck-at 0, and (iii) gate 12 stuck-at 0. Note that for c17, p = 14/32, and for all three of these faults e0 = 0 and e1 = 1. With this information the probability of aliasing for the compactor in Figure 4 can be calculated easily, as shown below.

  N      Pr(Alias)
  5      0.0196
  16     0.0052
  256    1.4499e-15

As expected, for each N, the aliasing probability is smaller than 1/N. Note that the fault activation profile (more specifically, the detection probability) for faults in some ISCAS'85 benchmarks has also appeared in [45]. For instance, benchmark C6288, which is shown in [45] to have a significant portion of single stuck-at faults that result in high error rates, might fare well⁴ when using the nonlinear compaction schemes studied in this section.

7  Conclusions

In this paper we described a systematic methodology for calculating the aliasing probability when an arbitrary finite-state machine is used to compact the response of a combinational circuit to a sequence of random test input vectors that are chosen independently at each time step. By identifying the importance of an asymmetric error model and of the correlation between the output sequences generated by a fault-free and a faulty combinational circuit, our approach is able to simultaneously track the states of two fictitious compactors, one driven by the response of the fault-free combinational circuit and the other driven by the response of the faulty combinational circuit, and to calculate the exact aliasing probability. To demonstrate our approach, we analyzed in the paper a simple example of a

4. Note that the overall aliasing probability will also depend on the distribution of the underlying faults.


Figure 8: Fault activation profile of stuck-at faults in benchmark circuit c17.

nonlinear compactor and showed that there exist regimes (classes of faults) for which this compactor exhibits a probability of aliasing that is lower than the probability of aliasing of a linear compactor with the same number of states. Using the intuition provided by our analysis, we then studied a special case in which the stationary distributions of the states of the two compactors are independent. This facilitated our analysis and also clarified the role of the entropy of the stationary distribution of states in linear and nonlinear compactors. Notice that, even for the simple nonlinear compactor that we analyzed in this paper, the question of how it fares in a "real" scenario will depend heavily on how faults are associated with fault activation probabilities e0 and e1 (i.e., what percentage of faults is associated with high e0 and e1) and on how this distribution of faults affects the "average" aliasing probability.

The results in this paper prompt a number of interesting questions, including the investigation of other types of nonlinear compactors as well as of the potential advantages of their use alongside linear compactors (much as was done in [17] for signature and syndrome compression). Since our examples in Section 6 indicate the existence of nonlinear compactors with lower aliasing probabilities when the error rates (e0 and/or e1) are high, it might be interesting to investigate whether nonlinear compactors together with output modification techniques [51] and/or weighted pseudorandom test input vectors [52, 53, 54] might result in lower aliasing probabilities. Moreover, we believe that pseudorandomness might favor nonlinear compaction even more heavily (for instance, if we find and

eliminate input test vectors that do not affect the output for any fault in a given class of interest, then we effectively force e0 and e1 for each fault to increase or remain the same). Note, however, that one may have to employ different techniques to evaluate the aliasing probability when pseudorandom test input generation is used. Finding ways to ensure that the rate of convergence to the stationary distribution is fast enough (so that the length of the testing sequence is minimized or so that bounds on the length of the testing sequence can be guaranteed) is also an interesting research direction. We also believe that it will be fruitful to combine the analysis in this paper with techniques for randomly testing a combinational circuit based on its syndrome [19, 55, 56, 40, 17]. Analysis of the aliasing probability and the testing performance when the CUT is a sequential circuit also appears to be a promising direction; some preliminary ideas in this direction can be found in [57, 58]. Finally, we are also interested in investigating similar techniques in the context of system verification.

References [1] M. L. Bushnell and V. D. Agrawal, Essentials of Electronic Testing. Boston, Massachusetts: Kluwer Academic Publishers, 2000. [2] M. Abramovici, M. Breuer, and D. Friedman, Digital Systems Testing and Testable Design. Piscataway, New Jersey: IEEE Press, 1990. [3] J. Rajski and J. Tyszer, Arithmetic Built-In Self-Test. Upper Saddle River, New Jersey: Prentice Hall, 1998. [4] P. H. Bardell, W. H. McAnney, and J. Savir, Built-In Test for VLSI: Pseudorandom Techniques. New York: John Wiley & Sons, 1987. [5] T. W. Williams, W. Daehn, M. Gruetzner, and C. W. Starke, “Bounds and analysis of aliasing errors in linear feedback shift registers,” IEEE Transactions on Computer-Aided Design, vol. 7, pp. 75–83, January 1988. [6] S. M. Reddy, K. K. Saluja, and M. G. Karpovsky, “A data compression technique for build-in self-test,” IEEE Transactions on Computers, vol. 37, pp. 1151–1156, September 1988. [7] A. Ivanov and V. K. Agarwal, “An analysis of the probabilistic behavior of linear feedback signature registers,” IEEE Transactions on Computer-Aided Design, vol. 8, pp. 1074–1088, October 1989. [8] W. Daehn, T. W. Williams, and K. D. Wagner, “Aliasing errors in linear automata used as multiple-input signature analyzers,” IBM Journal of Research and Development, vol. 34, pp. 363–380, March–May 1990. 30

[9] D. K. Pradhan, S. K. Gupta, and M. G. Karpovsky, “Aliasing probability for multiple input signature analyzer,” IEEE Transactions on Computers, vol. 39, pp. 586–591, April 1990. [10] K. Iwasaki and F. Arakawa, “An analysis of the aliasing probability of multiple-input signature registers in the case of 2m-ary symmetric channel,” IEEE Transactions on Computer-Aided Design, vol. 9, pp. 427–438, April 1990. [11] M. Serra, T. Slater, J. Muzio, and D. Miller, “The analysis of one-dimensional linear cellular automata and their aliasing properties,” IEEE Transactions on Computer-Aided Design, vol. 9, pp. 767–778, July 1990. [12] M. Damiani, P. Olivo, and B. Ricco, “Analysis and design of linear finite state machines for signature analysis testing,” IEEE Transactions on Computers, vol. 40, pp. 1034–1045, September 1991. [13] D. K. Pradhan and S. K. Gupta, “A new framework for designing and analyzing BIST techniques and zero aliasing compression,” IEEE Transactions on Computers, vol. 40, pp. 743–763, June 1991. [14] M. S. Elsaholy, S. I. Shaheen, and R. H. Seireg, “A unified analytical expression for aliasing error probability using single-input external- and internal-XOR LFSR,” IEEE Transactions on Computers, vol. 47, pp. 1414–1417, December 1998. [15] M. S. Elsaholy, “Exact AEP model in signature analysis using LCM,” IEE Proceedings E: Computers and Digital Techniques, vol. 146, pp. 247–252, September 1999. [16] D. Xavier, R. C. Aitken, A. Ivanov, and V. K. Agarwal, “Using an asymmetric error model to study aliasing in signature analysis registers,” IEEE Transactions on Computer-Aided Design, vol. 11, pp. 16–25, January 1992. [17] N. Saxena and E. McCluskey, “Parallel signature analysis design with bounds on aliasing,” IEEE Transactions on Computers, vol. 46, pp. 425–438, April 1997. [18] J. P. Hayes, “Transition count testing of combinational logic circuits,” IEEE Transactions on Computers, vol. 25, pp. 613–620, June 1976. [19] J. 
Savir, “Syndrome-testable design of combinational circuits,” IEEE Transactions on Computers, vol. 29, pp. 442–451, June 1980. [20] J. P. Robinson and N. R. Saxena, “A unified view of test compression methods,” IEEE Transactions on Computers, vol. 36, pp. 94–99, January 1987. [21] J. P. Robinson and N. R. Saxena, “Simultaneous signature and syndrome compression,” IEEE Transactions on Computer-Aided Design, vol. 7, pp. 584–589, May 1988. [22] S. Pilarski and K. Wiebe, “On counter-based compaction,” in Proceedings of ISCAS 1991, the 1991 IEEE Int. Symp. on Circuits and Systems, vol. 3, pp. 1885–1888, 1991. [23] J. Rajski and J. Tyszer, “Test responses compaction in accumulators with rotate carry adders,” IEEE Transactions on Computer-Aided Design, vol. 12, pp. 531–539, April 1993. 31

[24] J. Rajski and J. Tyszer, “Accumulator-based compaction of test responses,” IEEE Transactions on Computers, vol. 42, pp. 643–650, June 1993.
[25] K. Chakrabarty and J. P. Hayes, “On the quality of accumulator-based compaction of test responses,” IEEE Transactions on Computer-Aided Design, vol. 16, pp. 916–922, August 1997.
[26] S. Sastry and A. Majumdar, “Test efficiency analysis of random self-test of sequential circuits,” IEEE Transactions on Computer-Aided Design, vol. 10, pp. 390–398, March 1991.
[27] S. Pilarski, “Comments on “Test efficiency analysis of random self-test of sequential circuits”,” IEEE Transactions on Computer-Aided Design, vol. 14, pp. 1044–1045, August 1995.
[28] S. Pilarski, T. Kameda, and A. Ivanov, “Sequential faults and aliasing,” IEEE Transactions on Computer-Aided Design, vol. 12, pp. 1068–1074, July 1993.
[29] S. Pilarski and K. J. Wiebe, “Counter-based compaction: delay and stuck-open faults,” IEEE Transactions on Computers, vol. 44, pp. 780–791, June 1995.
[30] G. Edirisooriya and J. P. Robinson, “Performance of signature registers in the presence of correlated errors,” IEE Proceedings E: Computers and Digital Techniques, vol. 139, pp. 393–400, September 1992.
[31] D. Xavier, R. C. Aitken, A. Ivanov, and V. K. Agarwal, “Experiments on aliasing in signature analysis registers,” in Proceedings of the Int. Test Conference, pp. 344–354, 1989.
[32] M. Kopec, “Can nonlinear compactors be better than linear ones?,” IEEE Transactions on Computers, vol. 44, pp. 1275–1282, November 1995.
[33] V. D. Agrawal, “An information theoretic approach to digital fault testing,” IEEE Transactions on Computers, vol. 30, pp. 582–587, August 1981.
[34] J. G. Kemeny, J. L. Snell, and A. W. Knapp, Denumerable Markov Chains. New York: Springer-Verlag, 1976.
[35] P. Bremaud, Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. New York: Springer-Verlag, 1999.
[36] A. Graham, Kronecker Products and Matrix Calculus with Applications. Mathematics and its Applications, Chichester, UK: Ellis Horwood Ltd, 1981.
[37] G. D. Hachtel, E. Macii, A. Pardo, and F. Somenzi, “Markovian analysis of large finite state machines,” IEEE Transactions on Computer-Aided Design, vol. 15, pp. 1479–1493, December 1996.
[38] S. Meyn and R. Tweedie, Markov Chains and Stochastic Stability. New York: Springer-Verlag, 1993.
[39] P. W. Glynn and D. Ormoneit, “Hoeffding’s inequality for uniformly ergodic Markov chains,” Statistics and Probability Letters, vol. 56, pp. 143–146, 2002.


[40] N. Saxena, P. Franco, and E. McCluskey, “Simple bounds on serial signature analysis aliasing for random testing,” IEEE Transactions on Computers, vol. 41, pp. 638–645, May 1992.
[41] E. J. McCluskey, S. Makar, S. Mourad, and K. D. Wagner, “Probability models for pseudorandom test sequences,” IEEE Transactions on Computer-Aided Design, vol. 7, pp. 68–74, January 1988.
[42] F. Brglez, “On testability analysis of combinational circuits,” in Proceedings of ISCAS 1984, the 1984 IEEE Int. Symp. on Circuits and Systems, (Montreal, Canada), pp. 221–225, May 1984.
[43] S. K. Jain and V. D. Agrawal, “Statistical fault analysis,” IEEE Design and Test of Computers, vol. 2, pp. 38–44, February 1985.
[44] S. C. Seth, L. Pan, and V. D. Agrawal, “PREDICT—probabilistic estimation of digital circuit testability,” in Proceedings of the 15th International Symposium on Fault-Tolerant Computing, Digest of Papers, pp. 220–225, June 1985.
[45] S. C. Seth, V. D. Agrawal, and H. Farhat, “A statistical theory of digital circuit testability,” IEEE Transactions on Computers, vol. 39, pp. 582–586, April 1990.
[46] H. A. Farhat and H. Saidian, “Testability profile estimation of VLSI circuits from fault coverage,” in Proceedings of the First Great Lakes Symp. on VLSI, pp. 238–242, 1991.
[47] H. A. Farhat and S. G. From, “A Beta model for estimating the testability and coverage distributions of a VLSI circuit,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 12, pp. 550–554, April 1993.
[48] M. Dalpasso, M. Favalli, P. Olivo, and J. P. Teixeira, “Realistic testability estimates for CMOS IC’s,” Electronics Letters, vol. 30, pp. 1593–1595, September 1994.
[49] H. Farhat, A. Lioy, and M. Poncino, “Computation of exact random pattern detection probability,” in Proceedings of IEEE 1993 Custom Integrated Circuit Conference, pp. 26.7.1–26.7.4, 1993.
[50] F. Brglez and H. Fujiwara, “A neutral netlist of 10 combinational benchmark circuits and a target translator in Fortran,” in Proceedings of ISCAS 1985, the 1985 IEEE Int. Symp. on Circuits and Systems, (Kyoto, Japan), pp. 663–698, June 1985.
[51] Y. Zorian and V. K. Agarwal, “Optimizing error masking in BIST by output data modification,” Journal of Electronic Testing: Theory and Applications, vol. 1, pp. 59–71, 1990.
[52] H. D. Schnurmann, E. Lindbloom, and R. G. Carpenter, “The weighted random test-generator,” IEEE Transactions on Computers, vol. 24, pp. 695–700, July 1975.
[53] F. Siavoshi, “WTPGA: A novel weighted test-pattern generation approach for VLSI built-in self test,” in Proceedings of the Int. Test Conference, pp. 256–262, 1988.
[54] S. Bou-Ghazale and P. N. Marinos, “Testing with correlated test vectors,” in Proceedings of 22nd Int. Symp. on Fault-Tolerant Computing, Digest of Papers, pp. 256–262, 1992.


[55] J. Savir, “Distributed generation of weighted random patterns,” IEEE Transactions on Computers, vol. 48, pp. 1364–1368, December 1999.
[56] J. Savir, G. S. Ditlow, and P. H. Bardell, “Random pattern testability,” IEEE Transactions on Computers, vol. 33, pp. 79–89, January 1984.
[57] C. N. Hadjicostis, “Probabilistic fault detection in finite-state machines based on state occupancy measurements,” in Proceedings of CDC 2002, the 41st IEEE Conf. on Decision and Control, vol. 4, (Las Vegas, Nevada), pp. 3994–3999, December 2002.
[58] E. Athanasopoulou and C. N. Hadjicostis, “Aliasing probability calculations in testing sequential circuits,” in Proceedings of MED 2003, the 11th Mediterranean Conf. on Control and Automation, (Rhodes, Greece), 18–20 June 2003.

