ALIASING PROBABILITY CALCULATIONS IN NONLINEAR COMPACTORS

Christoforos N. Hadjicostis
Coordinated Science Lab. and Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign, IL 61801-2307, USA
E-mail: [email protected]

ABSTRACT

This paper discusses a systematic methodology for calculating the aliasing probability when an arbitrary finite-state machine is used to compact the response of a combinational circuit to a sequence of randomly generated test input vectors. The proposed approach is general and is based on simultaneously tracking the states of two (fictitious) compactors, one driven by the response of the fault-free combinational circuit and the other driven by the response of the faulty combinational circuit. By deriving the overall Markov chain that describes the combined behavior of these two compactors, we are able to calculate the exact aliasing probability based on its stationary distribution and to demonstrate regimes over which nonlinear compactors may be preferable over linear compactors.

1. INTRODUCTION

Compaction techniques and, in particular, signature analysis have attracted the attention of many researchers within the testing community [1]. The basic setup is shown in Figure 1. The combinational circuit under test (CUT) is driven by a known, randomly generated sequence of test input vectors i[0], i[1], ..., i[n−1]. Instead of comparing the possibly corrupted output sequence of[0], of[1], ..., of[n−1] produced by the CUT against the error-free sequence o[0], o[1], ..., o[n−1] that is expected from a fault-free CUT, compaction techniques use the output sequence to drive a compactor, i.e., a finite-state machine (FSM) that is initialized in some known state q[0]. Since we know the error-free sequence o[0], o[1], ..., o[n−1] expected out of a fault-free CUT, we can easily pre-calculate the expected error-free final state q[n] of the compactor and determine whether the circuit is faulty or not based on the actual final state qf[n] of the compactor. Aliasing occurs when q[n] = qf[n] even though the CUT is faulty, i.e., when both the error-free and the corrupted sequences drive the compactor to the same final state; in such a case, the compaction scheme fails to detect the fault(s) in the CUT.
The big advantage of compaction methodologies is that they do not have to perform comparisons after each application of a test input vector; this saves hardware (memory) and time, but comes at the cost of failing to detect fault(s) that result in corrupted sequences of[0], of[1], ..., of[n−1] that drive the compactor to the error-free final state q[n]. Since test input vectors are randomly generated, one is naturally led to the question of calculating the probability of aliasing. The probabilistic aspects of compaction methodologies have been thoroughly studied in settings where the


This work has been supported in part by NSF Career Award 0092696 and in part by NSF ITR Award 0218939.

[Figure 1 schematic: the test input vectors i[n−1], ..., i[1], i[0] drive the CUT; its output sequence of[n−1], ..., of[1], of[0] drives the compactor, an FSM with known initial state q[0]; the final state is checked via qf[n] =? q[n].]

Figure 1: Compaction of the response of a combinational circuit using an arbitrary finite-state machine.

compactor is a linear sequential circuit, such as a linear feedback shift register or, more generally, a linear finite-state machine [2, 3]. In this case, which is referred to as signature analysis, the aliasing probability has been characterized in terms of various quantities of interest, such as the number of states of the compactor and the required testing sequence length; a good (but somewhat dated) overview of these results and related references can be found in [1] (space limitations do not allow for a complete reference list). In these settings, if we assume that the sequence of test input vectors is long enough and that the linear compactor is well-designed (e.g., a linear feedback shift register with a primitive polynomial), then the aliasing probability is essentially given by 1/N, where N is the number of states of the compactor. Due to space limitations we do not address these convergence issues here.

In this paper we develop an approach for calculating the aliasing probability when an arbitrary FSM is used to compact the response of a combinational circuit to a randomly generated sequence of test input vectors. Our analysis provides insights regarding different choices of compactors and their effect on the aliasing probability. It also demonstrates that there exist regimes over which nonlinear compactors may be preferable over linear compactors.

2. MATHEMATICAL PRELIMINARIES AND NOTATION

The next state q[k+1] of a finite-state machine (FSM) S with state set Q = {q1, q2, ..., qN} and input set X = {x1, x2, ..., xK} is specified by its state q[k] and input x[k] at time step k via the next-state function q[k+1] = δ(q[k], x[k]) (we assume that δ is defined for all pairs in Q × X). In order to make the connection with Markov chains more transparent, we will denote the state q[k] of FSM S by an N-dimensional binary indicator vector q[k] which has exactly one nonzero entry with value "1." This single nonzero entry denotes the state of the system (i.e., if the jth entry of vector q[k] equals "1," then the system is in state qj at time step k). If input xi is applied at time step k, the state evolution of the system can be captured by an equation of the form q[k+1] = Ai q[k], where Ai is the N × N state-transition matrix associated with


input xi. Specifically, each column of matrix Ai has exactly one nonzero entry with value "1" (so that Ai has a total of N nonzero entries, all with value "1"). A nonzero entry at the ℓth-jth position of Ai denotes a transition from state qj to state qℓ under input xi. If the input sequence applied to a given FSM S is white, i.e., if the inputs are statistically independent from one time step to another and their probability distribution at any given time step k is fixed so that input xi takes place with probability pi (0 < pi ≤ 1 for i ∈ {1, 2, ..., K} and Σ_{i=1}^{K} pi = 1), then the FSM behaves like a homogeneous Markov chain with N states and state-transition matrix

    A = Σ_{i=1}^{K} pi Ai .                    (1)
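Eq. (1) is straightforward to form numerically. The sketch below uses a made-up two-state, two-input FSM purely for illustration; the matrices and the input probabilities are assumptions for the example, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-state, 2-input FSM, used only for illustration:
# under x1 both states go to q1; under x2 both states go to q2.
# Column j of each A_i holds the indicator of the next state from q_j.
A1 = np.array([[1.0, 1.0],
               [0.0, 0.0]])
A2 = np.array([[0.0, 0.0],
               [1.0, 1.0]])

def markov_matrix(transition_mats, probs):
    """Eq. (1): A = sum_i p_i A_i for a white input sequence."""
    A = sum(p * Ai for p, Ai in zip(probs, transition_mats))
    # Each A_i is column-stochastic, so the mixture is as well.
    assert np.allclose(A.sum(axis=0), 1.0)
    return A

A = markov_matrix([A1, A2], [0.3, 0.7])  # x1 w.p. 0.3, x2 w.p. 0.7
```

Because each Ai is column-stochastic, the mixture A is automatically a valid Markov transition matrix.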

If v[0] denotes the initial probability distribution of states in the Markov chain (i.e., q[0] = qj with probability v[0](j)), then the probability distribution of states at time step n is easily calculated to be v[n] = A^n v[0]. The jth entry of v[n] (denoted by v[n](j)) is the probability that at time step n the Markov chain (and the corresponding underlying FSM) is in state qj. A probability distribution v that satisfies

    v = A v                                    (2)

is called a stationary distribution [4]. A Markov chain has a unique stationary distribution if it is irreducible, i.e., if for any pair of states ℓ and j there exists a finite M such that the ℓth-jth entry of A^M (denoted by A^M(ℓ, j)) is nonzero. In such a case, it can be shown that matrix A has a single eigenvalue at λ = 1 and its corresponding eigenvector (scaled so that it is a probability vector) is the unique stationary distribution [4]. In our context, the requirement that a Markov chain with the transition matrix A in Eq. (1) is irreducible is equivalent to the underlying FSM S being connected, i.e., all states being reachable from each other through a finite sequence of inputs (such a sequence has nonzero probability because each input is assumed to occur with nonzero probability). Due to space considerations and to avoid some additional complications, we will assume that the Markov chain is aperiodic, i.e., we will assume that there exists a finite M ≤ N such that A^M > 0 (element-wise). This is a slightly stricter requirement than the one we had earlier for irreducible (but possibly periodic) Markov chains and essentially means that the underlying FSM has to be aperiodic. (FSM S is periodic if its state set Q can be partitioned into D classes C0, C1, ..., C(D−1), with CD = C0, so that for all d ∈ {0, 1, 2, ..., D−1} it is true that δ(qj, xi) ∈ C(d+1) for all qj ∈ Cd and all xi ∈ X.) When FSM S is connected and aperiodic, it gives rise to an irreducible, aperiodic Markov chain; in such a case, the probability distribution of states v[n] is given for large values of n by the unique stationary distribution v in Eq. (2).

To concisely capture the operation of two identical FSMs that operate in parallel with correlated inputs, as well as the Markov chain that describes their behavior, we will make use of the Kronecker product notation [5]. Recall that the Kronecker product of an N1 × M1 matrix A with an N2 × M2 matrix B is denoted by A ⊗ B and is defined as the partitioned matrix

    A ⊗ B = [ a11 B     a12 B     ...   a1M1 B
              a21 B     a22 B     ...   a2M1 B
              ...       ...       ...   ...
              aN1,1 B   aN1,2 B   ...   aN1,M1 B ] ,

where aij is the entry at the ith-row, jth-column position of matrix A. Note that A ⊗ B is of dimension N1 N2 × M1 M2.
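The stationary distribution in Eq. (2) can be obtained numerically as the eigenvector of A associated with the eigenvalue 1. A minimal sketch follows; the two-state chain is an arbitrary illustration, not an example from the paper.

```python
import numpy as np

def stationary_distribution(A):
    """Solve v = A v (Eq. (2)) for a column-stochastic A, assuming the
    chain is irreducible and aperiodic (so the eigenvalue 1 is simple)."""
    eigvals, eigvecs = np.linalg.eig(A)
    # Pick the eigenvector whose eigenvalue is closest to 1.
    v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return v / v.sum()  # scale the eigenvector into a probability vector

# Two-state illustration: each column gives the next-state distribution.
A = np.array([[0.3, 0.3],
              [0.7, 0.7]])
v = stationary_distribution(A)
```

For this A the stationary vector is (0.3, 0.7), and one can check directly that A v = v.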

    i1  i2  i3  i4 |  o   of
    ---------------+---------
     0   0   0   0 |  0   0
     0   0   0   1 |  1   0
     0   0   1   0 |  0   0
     0   0   1   1 |  0   0
     0   1   0   0 |  1   1
     0   1   0   1 |  0   1
     0   1   1   0 |  1   0
     0   1   1   1 |  1   1
     1   0   0   0 |  1   0
     1   0   0   1 |  0   1
     1   0   1   0 |  1   1
     1   0   1   1 |  0   0
     1   1   0   0 |  1   0
     1   1   0   1 |  1   1
     1   1   1   0 |  0   1
     1   1   1   1 |  1   1

Figure 2: Truth table for the error-free (o) and corrupted (of) output of a CUT with L = 4 inputs.



3. FAULT MODEL

In this paper we consider permanent faults that affect the input-output behavior of a combinational circuit under test (CUT) but retain its combinational nature. Due to space considerations, we focus on single-output combinational circuits in which faults cause the output to be incorrect (i.e., "0" instead of "1" and vice-versa). A permanent fault is a fault that manifests itself every time the CUT is supplied with a certain, affected subset of test input vectors by causing the corresponding output to be corrupted. (Permanent faults could be caused by design flaws, software errors, manufacturing defects or irreversible physical damage [6].) Consider a CUT with L inputs and a single output. At any given time step, there are 2^L possible input vectors and the functionality of the circuit is fully described by a truth table like the one shown in Figure 2 for L = 4. Recall that the sequence of test input vectors i[0], i[1], ..., i[n−1] in Figure 1 is randomly generated so that the input vector i[k] at time step k is chosen independently from other time steps. This implies that the error-free output sequence of the fault-free CUT is a white sequence of "0's" and "1's," where a "0" appears with probability p and a "1" appears with probability 1 − p (independently between different time steps). Moreover, the probability p is independent of the circuit implementation and only depends on the fraction of times a "0" appears in the truth table of the function implemented by the CUT (and, of course, on how test input vectors are chosen). For example, if test input vectors are chosen with equal probability among the 2^4 possible inputs, then the fault-free CUT in the example of Figure 2 will produce a white output sequence in which "0's" appear with probability 7/16 and "1's" appear with probability 9/16 (the error-free output is given by the column labeled "o").

We will assume for simplicity that the test input vector at each time step is chosen with equal probability among the 2^L possible inputs; in this case, p is essentially the syndrome of the CUT [1, 7]. The presence of a permanent fault in the CUT will cause errors when certain combinations of inputs are applied. In essence, the faulty CUT will correspond to a different truth table and will produce an output sequence in which "0's" appear with probability p′ and "1's" appear with probability 1 − p′. Note that this corrupted


sequence will still be white (because the test input vectors are chosen independently between different time steps). In the example of Figure 2, assuming that each test input vector is chosen with equal probability among the 2^4 possibilities, the faulty CUT results in a sequence in which p′ = 1 − p′ = 8/16 (the corrupted output is given by the column labeled "of"). We can provide a more accurate characterization of the relationship between the output sequences resulting from a fault-free and a faulty CUT under a particular fault by comparing the corresponding truth tables. More specifically, we can talk about the joint probability that the error-free and the corrupted outputs at time step k take particular values; for our example in Figure 2, assuming that each test input vector is chosen with equal probability at each time step, we have the following values:

    Pr(o[k] = 0, of[k] = 0) = 4/16 ;
    Pr(o[k] = 0, of[k] = 1) = 3/16 ;
    Pr(o[k] = 1, of[k] = 1) = 5/16 ;
    Pr(o[k] = 1, of[k] = 0) = 4/16 .

Under these assumptions, a convenient (and equivalent) way of capturing the joint probabilities of error-free and corrupted outputs at each time step is to describe the conditional probabilities with which an output from a faulty CUT is in error given that the corresponding output of the fault-free CUT is "0" or "1":

    Pr(error | o = 0) = Pr(of = 1 | o = 0) ≜ e0 ,
    Pr(error | o = 1) = Pr(of = 0 | o = 1) ≜ e1 .
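These quantities can be read directly off the truth table. The sketch below recomputes p, e0 and e1 from the two output columns of Figure 2, assuming (as above) equiprobable test input vectors.

```python
from fractions import Fraction

# Output columns of the Figure 2 truth table, for inputs 0000, ..., 1111.
o  = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1]   # error-free output
of = [0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1]   # corrupted output

p  = Fraction(o.count(0), len(o))   # Pr(o[k] = 0): fraction of "0" rows
# Conditional error probabilities: errors among rows with o = 0 (resp. o = 1).
e0 = Fraction(sum(b == 1 for a, b in zip(o, of) if a == 0), o.count(0))
e1 = Fraction(sum(b == 0 for a, b in zip(o, of) if a == 1), o.count(1))
```

Running this reproduces the values quoted in the text: p = 7/16, e0 = 3/7 and e1 = 4/9.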

In the example of Figure 2, when test input vectors are chosen with equal probability and the fault is permanent, we have e0 = 3/7 and e1 = 4/9.

4. CALCULATION OF ALIASING PROBABILITY

In order to calculate the exact aliasing probability we will make use of the probabilistic relationship between the output sequences generated by a fault-free and a faulty CUT. Essentially, what we want to do is to simultaneously keep track of the state reached by a fictitious compactor that is driven by the output of a fault-free CUT and the state reached by the actual compactor that is driven by the output of a faulty CUT (for a particular permanent fault). Clearly, when the random sequence of test input vectors has length n, we need to calculate

    Pr(alias) = Pr(q[n] = qf[n]) = Σ_{j=1}^{N} Pr(q[n] = qf[n] = qj) .

An equivalent way of interpreting the situation is to view it in terms of Figure 3. Sequence o is a white sequence in which the output o[k] at a particular time step k is "0" with probability p and "1" with probability 1 − p, whereas sequence of is related to the output sequence o via the conditional error probabilities e0 and e1. More specifically, at time step k, of[k] ≠ o[k] with probability e0 if o[k] = 0 and with probability e1 if o[k] = 1. Note that the input at time step k to the dotted deterministic system H of Figure 3 is given by a pair of inputs of the form (o[k], of[k]) whose probabilistic description can be summarized as follows:

    (o[k] = 0, of[k] = 0) with probability p1 ≜ p (1 − e0) ;
    (o[k] = 0, of[k] = 1) with probability p2 ≜ p e0 ;
    (o[k] = 1, of[k] = 1) with probability p3 ≜ (1 − p)(1 − e1) ;
    (o[k] = 1, of[k] = 0) with probability p4 ≜ (1 − p) e1 .

[Figure 3 schematic: the error-free sequence o[k] drives a compactor (FSM with known q[0]) producing q[n]; the conditional probabilities e0 and e1 generate of[k], which drives an identical compactor producing qf[n]; the dotted box enclosing the two compactors is FSM H.]

Figure 3: Alternative model for the operation under error-free and corrupted output sequences.

The dotted system H in Figure 3 is an FSM with N² states that can be conveniently described in terms of pairs of the form (qj, qj′), where qj represents the state of the top compactor and qj′ represents the state of the bottom compactor. In the spirit of Section 2 and to make our connection with Markov chains more transparent, we will indicate the state of H using a binary indicator column vector qh[k] with N² entries and exactly one nonzero entry with value "1." This single nonzero entry denotes the state of the system. In particular, we will arrange the states of H to be indicated in the order (q1, q1), (q1, q2), ..., (q1, qN), (q2, q1), ..., (qN, q1), ..., (qN, qN), so that when H is in state (qj, qj′) at time step k the ((j − 1)N + j′)th entry of vector qh[k] is "1" (and every other entry is "0"). Note that with this choice of notation the indicator vector qh[k] for FSM H is simply the Kronecker product qh[k] = q[k] ⊗ qf[k], where q[k] (qf[k]) is the indicator vector for the state of the top (bottom) compactor in Figure 3. In terms of this notation, we can develop explicit formulas for the transition matrices associated with the four inputs to FSM H using the Kronecker product notation:

    Ax1 = A0 ⊗ A0 ,
    Ax2 = A0 ⊗ A1 ,
    Ax3 = A1 ⊗ A1 ,
    Ax4 = A1 ⊗ A0 .

The easiest way to see this is to invoke the Kronecker product property (Axi q[k]) ⊗ (Axj qf[k]) = (Axi ⊗ Axj)(q[k] ⊗ qf[k]) (see [5] for a proof). Since FSM H in Figure 3 is driven by a white input sequence whose input at time step k is chosen with probabilities that remain constant over time, the resulting behavior of H is captured by a Markov chain with N² states and transition matrix

    A = Σ_{i=1}^{4} pi Axi .                   (3)

It is easy to show that if FSM S is connected and aperiodic, then FSM H will also be connected and aperiodic. In such a case, we are guaranteed to have a unique stationary distribution v that satisfies

    v = A v .                                  (4)
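Putting Eqs. (3) and (4) together, the transition matrix of the product machine H can be assembled with `np.kron`. The two-state compactor below is a made-up toy (both states go to q1 under "0" and to q2 under "1"), used only to exercise the construction; it is not the compactor analyzed in the paper.

```python
import numpy as np

def pair_chain(A0, A1, p, e0, e1):
    """Eq. (3): transition matrix of FSM H, which runs two copies of
    the compactor on the correlated pair (o[k], of[k])."""
    probs = [p * (1 - e0),         # (o, of) = (0, 0)
             p * e0,               # (0, 1)
             (1 - p) * (1 - e1),   # (1, 1)
             (1 - p) * e1]         # (1, 0)
    mats = [np.kron(A0, A0), np.kron(A0, A1),
            np.kron(A1, A1), np.kron(A1, A0)]
    return sum(pi * Axi for pi, Axi in zip(probs, mats))

# Toy two-state compactor: "0" sends both states to q1, "1" to q2.
A0 = np.array([[1.0, 1.0], [0.0, 0.0]])
A1 = np.array([[0.0, 0.0], [1.0, 1.0]])
A = pair_chain(A0, A1, p=0.5, e0=0.25, e1=0.25)
```

For this toy machine the next pair state depends only on the current input pair, so every column of A equals (p1, p2, p4, p3) in the (q1,q1), (q1,q2), (q2,q1), (q2,q2) ordering; this gives a quick sanity check on the construction.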

For long test input sequences the probability of aliasing is simply the probability that the randomly generated sequence of test input vectors causes both the top and the bottom compactor in Figure 3 to end in a state of the form (qj, qj), 1 ≤ j ≤ N; thus, the probability of aliasing is given by the sum of the entries of v that correspond to this type of states. Essentially, the discussion in this section has led to the following conclusion: Let S be the single-input connected, aperiodic FSM with transition matrices A0 and A1 that is used to compact the response of a faulty single-output combinational CUT; assuming that the sequence of test input vectors is long enough and each test input vector is chosen independently between different time steps (so that the probabilities p, e0 and e1 for the output sequence are well-defined as described in Section 3), the resulting probability of aliasing is given by

    Pr(alias) = Σ_{j=1}^{N} v((j − 1)N + j) ,

where v is the unique stationary distribution that satisfies Eq. (4).

[Figure 4 schematic: a five-state FSM with states q1, ..., q5 whose transitions are labeled by the inputs x0 = 0 and x1 = 1.]

Figure 4: FSM used as a compactor.

5. EXAMPLE AND EXPERIMENTAL RESULTS

In this example, we analyze the aliasing probability for the compactor shown in Figure 4. The compactor has N = 5 states and K = 2 inputs (x0 = 0 and x1 = 1); it is clearly connected and can be easily checked to be aperiodic. The transition matrices associated with inputs x0 and x1 are given by

         [ 0 0 0 0 0 ]          [ 1 1 0 0 0 ]
         [ 1 0 0 0 0 ]          [ 0 0 1 0 0 ]
    A0 = [ 0 1 0 0 0 ] ,   A1 = [ 0 0 0 1 0 ] .
         [ 0 0 1 0 0 ]          [ 0 0 0 0 1 ]
         [ 0 0 0 1 1 ]          [ 0 0 0 0 0 ]

[Figure 5: surface plot of Pr(alias) over p ∈ [0, 1] and e ∈ [0, 1].]

Figure 5: Values for Pr(alias) as a function of p and e for the compactor in Figure 4.

According to our analysis in Section 4, if the behavior of the faulty CUT causes a corrupted output with probability e0 when the error-free output is "0" and with probability e1 when the error-free output is "1," the associated Markov chain has the transition matrix A described in Eq. (3). It is quite straightforward to calculate the aliasing probability for specific values of p, e0, and e1. In Figure 5 we keep e0 = e1 and plot the aliasing probability as a function of p and e (where e = e0 = e1). The aliasing probability is generally high for low values of e and generally low for high values of e; also, as expected, it is symmetric around p = 0.5. More interestingly, for e > 1/2 the aliasing probability is lower than 1/N = 0.2, where N is the number of states of the compactor. Since 1/N is the aliasing probability expected out of a linear finite-state machine (e.g., a linear feedback shift register with a primitive polynomial) with N states, the plot suggests that compaction using the FSM in Figure 4 will perform better than linear compactors when the fault causes a high value of e.
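The numbers behind Figure 5 can be reproduced with a few lines of linear algebra. In the sketch below, A0 and A1 are the transition matrices of the five-state compactor as reconstructed above (input "0" counts up toward q5 and saturates there; input "1" counts down toward q1), and e0 = e1 = e as in the plot; this is a sketch under those assumptions rather than the authors' code.

```python
import numpy as np

# Figure 4 compactor (as reconstructed here): "0" is a saturating
# up-count, "1" a saturating down-count, over states q1, ..., q5.
A0 = np.array([[0, 0, 0, 0, 0],
               [1, 0, 0, 0, 0],
               [0, 1, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 0, 1, 1]], dtype=float)
A1 = np.array([[1, 1, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 0, 1, 0],
               [0, 0, 0, 0, 1],
               [0, 0, 0, 0, 0]], dtype=float)

def alias_probability(A0, A1, p, e0, e1):
    """Pr(alias) = sum_j v((j-1)N + j), with v the stationary
    distribution of the product chain (Eqs. (3) and (4))."""
    N = A0.shape[0]
    probs = [p * (1 - e0), p * e0, (1 - p) * (1 - e1), (1 - p) * e1]
    mats = [np.kron(A0, A0), np.kron(A0, A1),
            np.kron(A1, A1), np.kron(A1, A0)]
    A = sum(pi * Axi for pi, Axi in zip(probs, mats))
    w, V = np.linalg.eig(A)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    v = v / v.sum()
    # Entries v[0], v[N+1], v[2(N+1)], ... are the "diagonal" states (qj, qj).
    return float(np.sum(v[:: N + 1]))
```

At p = 0.5 and e = 0.5 the two compactors are driven independently and each settles into a uniform stationary distribution, so Pr(alias) = 5 · (1/5)² = 0.2 = 1/N, matching the crossover visible in Figure 5; for e above 1/2 the computed value drops below 1/N.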

6. CONCLUSIONS

In this paper we described a systematic methodology for calculating the aliasing probability when an arbitrary finite-state machine is used to compact the response of a combinational circuit to a sequence of randomly generated test input vectors. Our approach exploits the correlation between the output sequences generated by a fault-free and a faulty combinational circuit by simultaneously tracking the states of two fictitious compactors, one driven by the response of the fault-free combinational circuit and the other driven by the response of the faulty combinational circuit. For simplicity, the discussion in the paper focused on the compaction of the response of single-output combinational circuits and demonstrated the existence of regimes over which nonlinear compactors may be preferable over linear compactors.

7. REFERENCES

[1] M. Abramovici, M. Breuer, and D. Friedman, Digital Systems Testing and Testable Design. Piscataway, New Jersey: IEEE Press, 1990.
[2] W. Daehn, T. W. Williams, and K. D. Wagner, "Aliasing errors in linear automata used as multiple-input signature analyzers," IBM Journal of Research and Development, vol. 34, pp. 363–380, March–May 1990.
[3] M. Damiani, P. Olivo, and B. Ricco, "Analysis and design of linear finite state machines for signature analysis testing," IEEE Transactions on Computers, vol. 40, pp. 1034–1045, September 1991.
[4] P. Bremaud, Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. New York: Springer-Verlag, 1999.
[5] A. Graham, Kronecker Products and Matrix Calculus with Applications. Mathematics and its Applications, Chichester, UK: Ellis Horwood Ltd, 1981.
[6] B. Johnson, Design and Analysis of Fault-Tolerant Digital Systems. Reading, Massachusetts: Addison-Wesley, 1989.
[7] J. Savir, "Syndrome-testable design of combinational circuits," IEEE Transactions on Computers, vol. 29, pp. 442–451, June 1980.
