Test of Mismatch

Rasit Onur Topaloglu and Alex Orailoglu
Computer Science and Engineering Department
University of California, San Diego
La Jolla, CA 92093
{rtopalog, alex}@cse.ucsd.edu
Abstract

The fundamental idea of test of mismatch is to deterministically test analog circuits for the presence of mismatch. We propose an approach for testing of mismatch that delivers a compact and low-cost test set specialized in deterministically deciding whether a circuit has mismatch or not. This mismatch-specialized test set can help eliminate the superfluous and time-consuming functional test vector sets previously used for testing analog circuits. The proposed technique enforces a separation between the design and test of circuits, supporting IP protection and rapid production ramp-up. Proper testing of mismatch will decrease test time and cost. The test set generation methodologies introduced here efficiently detect mismatch in high-end analog circuits. The proposed method is general and is applicable to a broad range of circuits.
1 Introduction

The general approach underlying the test of a device should not be solely to determine whether it is working functionally. If this were the case, digital circuits would still be tested purely by observing their functional response. Instead, they are tested for specific purposes, such as whether they contain stuck-at lines. While testing digital circuits for stuck-at faults, other faulty behaviors are typically discovered in the process. One reason for the success of digital stuck-at fault testing is that most faults can be captured through stuck-at nodes; an additional reason is that most other faults are also caught when the circuit is tested for stuck-at faults alone. This flow of ideas has led us to consider the development of a similar approach for analog circuits. The logical correspondence to a stuck-at fault in digital circuits is mismatch in analog circuits, the analogy lying in the importance of their effects on the malfunctioning of the circuit. A second analogy can be drawn when one considers that, just as stuck-at test also catches other ancillary faults most of the time, so does the test
methodology proposed here. These ideas have evolved into a methodology that we denote as the test of mismatch, abbreviated as TOM. For analog circuits, the main problem caused by mismatch is performance degradation. Unlike digital circuits, a circuit experiencing performance degradation may still output reasonable signals for some of its output functions. For example, a circuit delivering adequate gain but exhibiting a lower than expected output resistance is a typical mismatch-affected circuit. Unlike a hard fault, this circuit still provides an acceptable gain, yet suffers performance problems in some of its other functions. The naive approach would be to test all functions exhaustively, as is usually done, to measure performance parameters. This paper takes a rather different approach. As this paper aims at resolving the issue of mismatch detection, we will be denoting a circuit having mismatch as a faulty circuit. The purpose of this study is to generate the test vector set that distinguishes faulty from fault-free circuits.
2 Previous Work

Vector generation techniques have been quite successful for digital circuits, for apparent reasons: the assignability of a certain signal flow to the circuit and the possibility of assigning a binary value to a signal at a node have allowed researchers to develop algorithms that detect faults effectively. Test vector generation has typically had little to do with the design of the circuit, absolving the test engineer of the necessity of knowing much about the design. A similar approach would clearly be of great value in the analog domain. The closest attempt has been the introduction of inductive fault analysis (IFA) [1]. However, IFA is only appropriate for physical defects and cannot cope with performance degradation effects, which constitute the core of mismatch. Another disadvantage of IFA is that a tremendous number of simulations needs to be run to construct fault dictionaries, thus inheriting the shortcomings of any general dictionary-based testing approach. Previous general analog test approaches mainly consist of hardware modifications [10] or DSP-related approaches [8], both of which are sometimes infeasible to attain and usually costly to implement. Today, analog circuits are still largely tested functionally [2]. As analog circuits can have quite a number of high-level specifications, such as the gain-bandwidth product or the output resistance, testing all of these parameters consumes a considerable amount of time. Additionally, most of these specification tests require a very expensive high-end tester. As shown in [3], most of these functional tests overlap, in the sense that the faulty part causing one specification to fail also shows its effects in another parameter. To exploit this observation, an approach has been proposed
consisting of the reordering of functional test vectors. Although it reduces test time, this approach cannot deal with parametric degradations.
3 The Goals of TOM

A comparison of the digital to the analog domain yields the observation that in the analog domain, observability and controllability constitute a much larger problem. Setting voltages or currents inside a circuit necessitates design knowledge. A better test methodology, however, should assume that the test engineer is given just a black box, along with the relevant specifications as to how the circuit works. Given such an input, test vectors need to be created. Such an approach is bound to decrease the analysis time of the design, as well as protect the proprietary content of the design. It will also enable the separation of testing and design, allowing intellectual property transfer between the concerned parties. At this point, it is beneficial to introduce the four principles of the TOM vector generation methodology we propose:

1) Low-cost test: The cost of test can manifest itself in various ways. One cost is the price of the tester. If high-end functional tests are required, a multi-million dollar high-end tester has to be used. Furthermore, an improper selection of test vectors incurs a great deal of cost in terms of time. As opposed to current test methodologies, the proposed methodology aims at removing the need for an expensive tester, in addition to reducing test time considerably.

2) Separation of design from test: As designs grow more complex, the analysis of circuits becomes tougher. We should not expect the test engineer to spend the predominant portion of his time analyzing and understanding the circuit in order to create test vectors.

3) Deterministic test: One of the fundamental reasons for the success of digital test is its provision of a deterministic pass and fail mechanism for the chips tested. The aim of our test mechanism is to be deterministic rather than probabilistic [9], notwithstanding the fact that the phenomenon of mismatch is probabilistic in its essence.

4) Self-adequacy: As mismatch is a parametric fault, the deterministic detection of mismatch also enables the detection of all hard faults. The test vector set provided here should therefore not be seen as an additional burden on top of functional test, as the TOM vector set replaces the functional test vectors. Full fault coverage is aimed for in light of this.
Figure 1. Logic Behind Mismatch Amplification
4 A Theoretical Basis

Observability of analog circuits is limited because we lack a mechanism to propagate inner signals to the output ports efficiently. Some circuits will not allow this, while others allow it only within limits set by the design. In any case, in-depth design knowledge is necessary to create such signal transitions. Similar arguments can be made for controllability. The introduction of design for test (DfT) approaches in the last decade [7] has enabled the construction of circuits with test input and output ports in critical parts of the design. For example, a critical OpAmp in a digital-analog converter is provided test ports on both its inputs and outputs. This is also possible in mixed-signal designs, where the test inputs/outputs are easily switched between various points and the circuit output using switch matrices. The controllability and observability issues mentioned above, however, remain; this time they create problems within the OpAmp itself. Extra test ports in an OpAmp can be included during the debug stage of the chip; however, they cannot remain within the chip in the production cycle, since the connection of these inner test points would tremendously increase the pin count and interconnect complexity of the chip. In addition to input and output test port access, a test engineer also has access to the two most important input variables: the supply voltage and the temperature. Most of the time, circuits are designed to operate over a range of supply voltage and temperature. Table 1 gives an example of how this flexibility is captured in the design specifications. Whenever there is a defect in a circuit, this defect first needs to be activated, followed by the propagation of its effects to the observable outputs of the circuit.
Table 1. Circuit Input Specification Summary

              Supply Voltage   Temperature   Signal Frequency
    Min            2.5             -30             10k
    Typical        2.6              27             100M
    Max            2.7             130             1G
    Unit           V               °C              Hz
In order to apply this principle to circuits having mismatch, we have devised an analogous method specialized for them, called mismatch amplification. The main idea is illustrated in Figure 1. The aim is to amplify the effects of mismatch by properly activating it through some input mechanism, and then to observe the results through a supplied output mechanism. The relevant input and output mechanisms are discussed next. One merit of a test specialized for mismatch is that its test vectors also detect hard faults; the underlying reason is that mismatch-caused errors are performance related, while hard faults are mostly catastrophic.
4.1 Choice of Input Mechanism

In many designs, circuit simulations are performed under various process, voltage and temperature corners, commonly abbreviated as PVT corners. Relating this to our problem, both voltage and temperature can be applied to the circuit artificially. The third variable, in our case the mismatch caused by the process, is already present. In addition, the circuit may have a number of bias inputs. These can also be used as part of our input mechanism, provided we can reach them through test ports. If such ports do not exist, the proposed test methods remain applicable without any deterioration of the test mechanism. Having identified the input mechanism for testing the circuit, analysis mechanisms should also be identified. This runs parallel to the general idea of testing: stimulating a circuit with inputs and analyzing the effects. When there is mismatch, the results will change appreciably. The aim of our approach is to find the smallest number of vectors that makes the detection of mismatch possible, while making no concessions in fault coverage.
4.2 Choice of Analysis Methods

The comparisons used in TOM are drawn from AC analysis, DC analysis, pole-zero analysis, transient analysis, IDDQ analysis and sensitivity analysis. The first three boil down to the analysis of the transfer function. One property that makes transfer functions useful is their universality: they can be applied to almost any circuit. Their constituents are the well-known magnitude response and phase response. Experimentation with these analysis techniques has been performed by artificially creating mismatch in benchmark circuits and analyzing the results of the faulty (with mismatch) and fault-free (without mismatch) circuits. The primary criterion is the cost of the test. Generally speaking, a tester capable of accurately
observing phase is very expensive. On the other hand, we have observed experimentally that the difference in phase response between faulty and fault-free circuits is rather insignificant in most cases. Since we seek determinism in our test mechanism, an accurate phase response measurement would have been necessary to detect mismatch using phase responses. The limited utility and the high cost have led to phase responses being excluded from TOM. The differences in magnitude response, however, are rather large and significant. The faulty and fault-free responses are easily separable even by a visual inspection of the results. Moreover, in AC analysis this difference is further intensified at certain frequencies. In addition to the AC magnitude response, we include DC analysis in order to cope with the effects of mismatch in digital circuits in particular; in digital circuits, mismatch shows its effects by causing wrong transitions [5]. Pole-zero analysis, on the other hand, is not included as it would require quite expensive test equipment. Although pole-zero analysis could theoretically have been used, our simulations show that non-dominant poles or zeros can complicate the detection of mismatch, largely defeating one of the primary goals of our system, that of being deterministic. The success of IDDQ test in the digital domain is turned to advantage for the analog domain in our system; this advantage stems from the observation that IDDQ results differ significantly between faulty and fault-free circuits. In all of the above tests, we have observed that the results differ between fault-free and faulty responses, enabling us to easily separate a faulty from a fault-free circuit. As RF IC design increases its pace every day and more circuits are designed in the GHz range, transient analysis would require very accurate and expensive testers. Fortunately, the differences between faulty and fault-free results are quite small in transient analysis for most circuits. These points preclude the inclusion of a transient analysis mechanism in the proposed test generation system. The notion of sensitivity analysis, as implemented in SPICE, is that of measuring the sensitivity of the output to changes in any of the circuit inputs. We have modified this notion and instead use sensitivity to measure the changes of the analysis results presented so far with respect to changes in the circuit bias inputs. The inclusion of sensitivity thus necessitates the replication of each analysis method once more for the changes of each bias input. To give an example of applying sensitivity to the AC magnitude response, assume that we have two current biases connected as inputs to our circuit. In order to measure the sensitivity of the AC magnitude response with respect to the first bias current, we perturb this current by a small amount and observe the difference. We repeat this procedure for the second bias current, ending up with four analysis results.
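As a concrete illustration of the sensitivity procedure described above, the following Python sketch perturbs each bias input in turn and forms the finite-difference sensitivity of the AC magnitude response. The simulate_ac_magnitude function is only a toy placeholder for an actual simulator or tester run, and the bias names, values and perturbation size are illustrative assumptions, not values from this paper.

    import numpy as np

    def simulate_ac_magnitude(biases, freqs):
        """Toy stand-in for a simulator or tester run (e.g., a SPICE AC
        analysis of the device under test): a single-pole magnitude response
        whose gain and pole location depend on two hypothetical bias currents."""
        gm = 20.0 * biases["ibias1"]                # crude transconductance model
        pole = 1e6 * biases["ibias2"] / 20e-6       # pole scales with the second bias
        gain = gm * 1e6                             # low-frequency gain
        return 20.0 * np.log10(gain / np.sqrt(1.0 + (freqs / pole) ** 2))

    def bias_sensitivities(nominal_biases, freqs, rel_delta=0.01):
        """Finite-difference sensitivity of the AC magnitude response with
        respect to each bias input: perturb one bias at a time by a small
        relative amount and record the change in the response."""
        nominal = simulate_ac_magnitude(nominal_biases, freqs)
        sens = {}
        for name, value in nominal_biases.items():
            perturbed = dict(nominal_biases, **{name: value * (1.0 + rel_delta)})
            sens[name] = (simulate_ac_magnitude(perturbed, freqs) - nominal) / (value * rel_delta)
        return nominal, sens

    freqs = np.logspace(4, 9, 6)   # 10 kHz to 1 GHz, following the ranges of Table 1
    nominal, sens = bias_sensitivities({"ibias1": 10e-6, "ibias2": 20e-6}, freqs)

For two bias currents, this yields the nominal magnitude response plus one sensitivity curve per bias, mirroring the replication of the analysis for each bias input described above.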
Figure 2. AC Gain Sampled at 100 kHz vs. widths of a mismatch pair (M0, M1) scanned between 15.84 µm and 16.16 µm
The implication of the sensitivity technique is that the effects of mismatch can manifest themselves not directly in the response, but in its first-order derivative. This derivative differs considerably whenever the circuit is faulty. Only in DC analysis are sensitivities not used, as we have observed that they do not make a noticeable difference between faulty and fault-free results.
4.3 Generation of Test Vectors

Process variations are always present, but they do not necessarily lead to mismatch. For example, consider two transistors that have to be matched, and assume their widths are 16 µm. Assume that there is a process variation over the wafer that sets each one to 15 µm. In our study, we regard such a situation as not constituting a mismatch. However, if the two widths are different, then we say that there is a mismatch. The reason for making this distinction lies in the performance for these two cases. The nature of this discrepancy can be understood better by examining the experimental data shown in Figure 2. Whenever the widths of M0 and M1, the two transistors to be matched in the circuit, increase or decrease equally from their nominal values of 16 µm, the change in gain is negligible. This corresponds to the case of process variation causing no mismatch. Whenever we move away from the line passing through the origin and the nominal point, there is a large difference between the nominal value of the gain and the observed value. Any part of this region corresponds to having mismatch. We denote plots like the one in Figure 2 as excitation plots. To verify that this sharp drop-off is still present on either side of the line through the origin, the ranges of M0 and M1 are increased and the same parameter is plotted in Figure 3. The only difference that can be observed is that the gain happens to differ when both widths shrink at the same time. This does not necessarily indicate that our circuit could have been designed better.
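The distinction between pure process variation and mismatch drawn above can be captured by decomposing a point of the excitation plot into a common-mode and a differential component. The following minimal Python sketch assumes the nominal width of 16 µm used above; points on the w0 = w1 line through the nominal point are treated as process variation only.

    def decompose_widths(w0, w1, w_nominal=16.0):
        """Split a (w0, w1) point of an excitation plot into a common-mode
        shift (pure process variation) and a differential part (mismatch).
        Points on the w0 == w1 line through the nominal design point have a
        zero differential component and, per Figure 2, essentially nominal
        gain; points off that line are treated as mismatched."""
        common = (w0 + w1) / 2.0 - w_nominal     # equal drift of both devices
        differential = (w0 - w1) / 2.0           # the component that causes mismatch
        return common, differential

    # A wafer-wide shift of both widths to 15 um is regarded as process
    # variation only, while 15.9 um against 16.1 um constitutes a mismatch.
    print(decompose_widths(15.0, 15.0))   # approx (-1.0, 0.0)
    print(decompose_widths(15.9, 16.1))   # approx (0.0, -0.1)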
Figure 3. AC Gain Sampled at 100 kHz vs. widths of a mismatch pair (M0, M1) scanned between 2 µm and 30 µm
Rather, the reduction in the widths of both transistors would cause the deterioration of some other circuit parameter, such as the input resistance of the circuit. The point to notice here is that there is nonetheless a sharp drop-off on both sides. The widths of the transistors are chosen for the horizontal axes of the plot because width is a physical parameter, and the cause of mismatch is known to be physical [4]. To account for this, similar plots for the same mismatch pair are obtained for the remaining physical parameters that are known to cause mismatch. To further amplify the effects of mismatch, one needs to include the design space flexibility defined in the circuit specifications. This is done by using the ranges defined for the voltage sources and the temperature, such as the ones in Table 1. To reduce the number of simulations, only worst-case points for the voltage and temperature combinations should be considered. Another supporting reason for this choice is that these parameters do not affect the circuit functions drastically but rather gradually. Mismatch groups are provided to the test engineer by the designer. Some of these groups may contain more than two transistors. In such a case, it is not possible to plot the effects of a physical parameter on all the transistors
in the same mismatch group in a single plot. For this reason, a separate plot is obtained for each mismatch pair in the group, giving n(n-1)/2 plots if the mismatch group has n transistors.
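A short sketch of the bookkeeping implied here: given the mismatch groups supplied by the designer, the physical parameters known to cause mismatch, and the worst-case (V, T) corners taken from the specification ranges of Table 1, one excitation plot is generated per pair, parameter and corner. The group, parameter and corner values below are assumed examples, not data from this paper.

    from itertools import combinations, product

    def excitation_plot_plan(mismatch_groups, physical_params, vt_corners):
        """Enumerate the excitation plots called for in this section: one per
        mismatch pair, physical parameter and worst-case (V, T) corner.
        A group of n transistors contributes n*(n-1)/2 pairs."""
        plan = []
        for group in mismatch_groups:
            for pair in combinations(group, 2):
                for param, corner in product(physical_params, vt_corners):
                    plan.append((pair, param, corner))
        return plan

    # Assumed example values; the corners follow the specification ranges of Table 1.
    groups = [["M0", "M1"], ["M3", "M4", "M5"]]
    params = ["width", "length", "oxide_thickness"]             # physical mismatch parameters
    corners = [(2.5, -30), (2.5, 130), (2.7, -30), (2.7, 130)]  # (supply voltage V, temperature C)
    print(len(excitation_plot_plan(groups, params, corners)))   # (1 + 3) pairs x 3 x 4 = 48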
Figure 4 shows the effects on the overall frequency response for the above case, using the same set of data as in the previous figures. As can be observed, at high frequencies the faulty and fault-free signals approach each other and it becomes impossible to differentiate the two kinds of results. At lower frequencies, however, there is a wide gap between faulty and fault-free responses. At low frequencies, a group of responses clusters together in a higher gain band. Each response in this group constitutes a response that would be expected from a fault-free OpAmp; the responses in this group correspond to process variations with no mismatch.
Figure 4. Frequency Response; widths of a mismatch pair scanned between 15.84 µm and 16.16 µm
In the DC domain, the faulty and fault-free responses are again separated, as can be seen in Figure 5. This time, the fault-free responses are gathered together in the left branch of the figure. The corresponding test signal should target the DC range where fault-free and faulty responses are distinctly separated.
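The choice of test signal suggested by Figures 4 and 5, namely aiming the stimulus at the region where the faulty and fault-free response bands are farthest apart, can be automated along the following lines. This is a hedged sketch rather than the paper's algorithm; the response arrays are assumed to come from simulations of mismatch-free and mismatched circuit instances.

    import numpy as np

    def pick_test_point(stimulus_axis, fault_free_responses, faulty_responses):
        """Pick the stimulus value (a frequency for the AC test, a DC level for
        the DC test) at which the fault-free and faulty response bands are
        farthest apart, together with a midway pass/fail threshold.

        Both response arguments are 2-D NumPy arrays with one row per
        simulated circuit instance and one column per stimulus value."""
        ff_lo, ff_hi = fault_free_responses.min(axis=0), fault_free_responses.max(axis=0)
        f_lo, f_hi = faulty_responses.min(axis=0), faulty_responses.max(axis=0)
        # band separation at each stimulus value (negative where the bands overlap)
        gap = np.maximum(ff_lo - f_hi, f_lo - ff_hi)
        best = int(np.argmax(gap))
        if ff_lo[best] > f_hi[best]:
            threshold = 0.5 * (ff_lo[best] + f_hi[best])
        else:
            threshold = 0.5 * (f_lo[best] + ff_hi[best])
        return stimulus_axis[best], threshold, gap[best]

A deterministic pass/fail decision then reduces to comparing the measured response at the chosen stimulus value against the returned threshold.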
Figure 5. DC Response (output voltage *outopa vs. DC input voltage); widths of a mismatch pair scanned between 15.84 µm and 16.16 µm
To summarize, the test generation methodology consists of obtaining similar plots for each mismatch pair, for each physical parameter and for all worst-case (V, T) combinations. The algorithm is shown in Figure 6. Having observed the drop-off, the aim in generating the test vectors consists of finding the vectors that amplify this drop-off for each mismatch pair and each physical parameter. In order to measure and automate the calculation of this drop-off, we have devised a figure of merit called the mismatch factor. In order to calculate the mismatch factor (MF), it should be noticed that the excitation plots are actually two-dimensional matrices where each entry gives a point constituting the vertical axis of the plot. These points can be thought of as samples of the real distribution. The dimension of the matrix is proportional to the scan rate of the mismatch parameters within a given range. Let i and j denote the indices of the first and second parameters, respectively, of this square matrix, and let the entries of the matrix belonging to a plot be A_{i,j}. Then MF is given by Equation (3), where the stepsize is the maximum range of the parameter on an axis divided by the number of samples. A graphic illustration of the sums is given in Figure 7 for odd and even matrix sizes. The matrix can be converted to a rectangular one by expanding the presented ideas as necessary. It turns out that the mismatch factor is an efficient tool for comparing the magnitude of mismatch between different excitation plots.
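Since Equation (3) is not reproduced here, the following Python sketch only illustrates one plausible form of such a figure of merit: the average drop-off of the excitation-plot entries relative to the matched (diagonal) entry at the same common-mode value, normalised by the parameter stepsize. The exact definition of MF in the paper may differ.

    import numpy as np

    def mismatch_factor(A, stepsize):
        """Assumed form of a mismatch factor: average drop-off of the
        excitation-plot entries away from the matched diagonal, normalised
        by the parameter stepsize.

        A        : square 2-D NumPy array; A[i, j] is the response sampled at
                   the i-th value of the first device parameter and the j-th
                   value of the second
        stepsize : axis range divided by the number of samples"""
        n = A.shape[0]
        total, count = 0.0, 0
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue                         # matched points carry no mismatch
                k = (i + j) // 2                     # matched point with the same common-mode value
                distance = abs(i - j) * stepsize     # differential distance in parameter units
                total += abs(A[i, j] - A[k, k]) / distance
                count += 1
        return total / count

Used in this way, a larger value indicates a steeper drop-off away from the matched line, allowing excitation plots for different pairs, parameters and corners to be ranked against one another.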