Forward Discrete Probability Propagation Method for Device Performance Characterization under Process Variations
Rasit Onur Topaloglu and Alex Orailoglu
University of California San Diego, Computer Science and Engineering Department, La Jolla, CA 92093
{rtopalog,alex}@cse.ucsd.edu
Process variations are becoming influential at the device level in deep sub-micron and sub-wavelength design regimes, whereas only a few generations ago they were influential solely at the circuit level. Process variations cause device performance parameters, such as current or output resistance, to acquire a probability distribution. Estimation of these distributions has so far been accomplished using Monte Carlo techniques. The large number of samples needed by Monte Carlo methods hinders the integration of probabilistic device performance at the circuit level due to run-time inefficiency. In this paper, we introduce a novel technique called Forward Discrete Probability Propagation (FDPP). This method discretizes the probability distributions and effectively propagates these probabilities across a device formula hierarchy, such as the one present in the SPICE3v3 model. Consequently, probability distributions for process parameters are propagated to the device level. It is shown in the paper that with far fewer samples, accuracy comparable to a Monte Carlo method is achieved.
I. INTRODUCTION
Estimation of the effects of process variations on device performance has long been a concern. The computational complexity of current simulators precludes the incorporation of process variations into device performance estimation. This can be attributed to the lack of accurate methods and models for process variations. Designers have been trying to cope with this absence through worst-case analysis, Monte Carlo techniques, or the invocation of Gaussian distribution assumptions. But these approaches can no longer be counted upon to provide sufficiently accurate and fast results, as deep sub-micron silicon technologies rapidly push manufacturers toward device parameter characterizations of increased accuracy in order to obviate the increasing number of design iterations. The effects of process variations on device parameters further indicate that the relationships between the factors causing process variations and the device parameters deviate from a linear approximation even over a small input domain. This implies that the Gaussian distribution assumption attributed to device performance parameters is no longer accurate. Therefore, a more accurate methodology is necessary to estimate the effects of mismatch on high-level parameters. The paper presents a methodology to deterministically estimate the effects of process variations on device parameters
This project is partially supported by the Semiconductor Research Corporation, under task ID 906.001.
using connectivity graphs. The proposal consists of an algebraically tractable method, leading to the possibility of manual or simulator-guided implementations. In contrast to the Monte Carlo approach, there does not exist any non-determinism in the system. In contrast to Gaussian-based methods, the system is not restricted to Gaussian distributions, thus providing accurate device characterization. The method can also outperform the accuracy and run-time of a Monte Carlo-based approach in certain applications or conditions, as indicated in the paper. With no reliance on a random method, it can nonetheless take advantage of analytical fabrication and device models already present in the literature and provide the probabilistic device parameters formed as a result of process variations. The paper proceeds by presenting the motivation and the previous work. The discussion is followed by the introduction of a mathematical basis for the discretization of probability distribution functions, introducing formalism through new operators and domains, followed by experimental data comparing Monte Carlo methods and FDPP.
II. MOTIVATION
Monte Carlo methods are frequently used in engineering applications [1] [2], though they exhibit a number of shortcomings. A foremost one consists of the dependence of Monte Carlo techniques on a random number generator, signifying Monte Carlo as a non-deterministic method. Most computational packages only provide random number generators for a limited set of well-known distributions such as Gaussian or uniform. As a result, the users of Monte Carlo methods are limited in assigning distributions to low-level parameters, such as process parameters in device characterization. Though a number of remedies have been suggested, such as importance sampling [18], these modifications usually necessitate an increased number of samples for sufficient accuracy. An approach for the accurate consideration of arbitrary distributions may be of utmost importance for certain engineering applications where Gaussian or other distribution assumptions for low-level parameters may cause large error build-up during the computation of the distributions of high-level parameters. Another shortcoming is that Monte Carlo, due to its random sampling mechanism, may require an increasingly high number of samples to reduce the error for regions of the probability distribution that have a reduced occurrence probability. This last bottleneck may cause certain regions in a distribution to be missed by the method altogether when computational effectiveness issues limit the number of samples, causing an under-estimation of the probability. Similarly, fewer than the adequate number of
samples may cause an over-estimation at certain regions. This over-estimation may be the result of choosing a point in a low-probability region and not being able to normalize with an adequate number of drawn samples. Increasing the number of samples to prevent these bottlenecks, on the other hand, may result in an unmanageable run-time complexity. In most engineering applications, reducing this complexity without an accuracy compromise is nevertheless of utmost interest. Finally, sampling from correlated parameters may bring forth a large inaccuracy. In a directed acyclic graph, if a node has more than one outgoing edge, assigning the same sample to each of the outgoing edges will over-estimate the variance of a high-level node that is a function of these nodes. A user-directed sampling mechanism can avoid this error, yet its implementation can be quite cumbersome and error-prone for complex trees. A probabilistic approach on a tree, on the other hand, may take advantage of Bayes' rule [17]. Hence, ancestral nodes can be treated as being conditionally independent while calculating the posterior distribution of the descendant node.
III. PREVIOUS WORK
Monte Carlo based methods are predominantly used in device parameter characterization [3] [4]. In [3], a Monte Carlo based method has been used for the simulation of impact ionization, while in [4], a Schottky barrier is simulated with a Monte Carlo method. Monte Carlo methods are used for the newest technologies as well [5]. [6] and [7] have pointed out the inaccuracy of Monte Carlo methods and formulated it as a variance representing the deviation from estimated values. Process variations can be attributed to physical parameters as suggested in [8]. In [9], a technique is presented to estimate device characteristics using the sensitivities of device parameters to physical parameters. Means and variances of device parameters can be approximated with this method. This technique, though, falls short of being sufficiently accurate in deep sub-micron and sub-wavelength technologies due to the Gaussian distribution assumption attributed to device parameters, as device parameters sharply deviate from Gaussian distributions in newer technologies, as can be seen in [10], [11] and [12]. Inaccurate information regarding the distribution of device parameters provided to designers may cause a major bottleneck in the design cycle by increasing the number or length of design iterations. The importance of avoiding such worst-case approximations in deep sub-micron designs has been identified in [13]. The effects of various steps of semiconductor fabrication on device parameters have been analytically modeled in a number of papers in the literature [14] [15] [16]. However, a continuous-time probabilistic analysis is usually not provided when process variations need to be accounted for. Powerful models have so far been presented in the literature. These models should be incorporated into the design in an accurate manner as we progress to newer technologies.
IV. PROBABILITY DISCRETIZATION THEORY
Accurate simulation of devices has exceeded computational practicality thresholds. The computational cost of simulating process variations introduces an additional exponential increase to these already inordinately high computational time requirements. The necessity to accurately estimate device parameters
has become quite significant as a result of this. To close this gap, we propose a methodology that provides a way for the estimation of device parameters. This methodology is both manually tractable and can be incorporated into a simulator. In order to introduce the proposed technique, FDPP, a number of definitions will be useful. Let $X$ be a random variable. We will denote the probability distribution of $X$ as $p(x)$; $p(x)$ is assumed to be continuous. We propose to attain an approximation of this $p(x)$ by sampling it at equidistant points of the random variable $X$. In reality, $p(x)$ may extend to positive or negative infinity for certain distributions. In these situations, the tails of the $p(x)$ will be terminated after a certain value of $x$, which corresponds to band-pass filtering the $p(x)$. This will define a boundary of the form $x_{lo} \le x \le x_{hi}$ for $X$, where $x_{lo}$ and $x_{hi}$ are practical lower and upper limits. The probability that $X$ will fall within this region is given by:
$$P(x_{lo} \le X \le x_{hi}) = \int_{x_{lo}}^{x_{hi}} p(x)\,dx \qquad (1)$$
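As an illustration (a Gaussian is used here only for concreteness; FDPP itself imposes no distribution assumption), if $p(x)$ is Gaussian with mean $\mu$ and standard deviation $\sigma$ and the tails are terminated at $x_{lo,hi} = \mu \mp 4\sigma$, then

$$P(\mu - 4\sigma \le X \le \mu + 4\sigma) = 1 - 2\,\Phi(-4) \approx 0.99994,$$

where $\Phi$ denotes the standard normal cumulative distribution function.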
The difference between this probability and unity constitutes the filtering error and should be kept as small as possible. The sampling can be done by dividing the band-pass filtered $p(x)$ into bins and approximating the values that fall in any bin by the value at the mid-point of the bin. Let $b_i$ be an enumeration over the bins, where $1 \le i \le N$ and $N$ is the total number of bins. $b_i$ is bounded by $[x_{lo} + (i-1)\Delta,\; x_{lo} + i\Delta]$, where $\Delta$ is the step size defined by $(x_{hi} - x_{lo})/N$. We denote the sampled $p(x)$ as $r(x)$ or $p_s(x)$, and we introduce two domains such that $p(x)$ is in the p-domain and $r(x)$ is in the r-domain¹. The procedure of converting a $p(x)$ to an $r(x)$ will be represented with the $Q_N$ operator:
$$r(x) = Q_N\{p(x)\} \qquad (2)$$
The domain of this operator is a band-pass filtered $p(x)$, and the range of this operator is an $r(x)$. The result of this operator on the $p(x)$ of a random variable $X$, namely $r(x)$, is essentially a Riemann sum of impulses and is given by:
$$r(x) = \sum_{i=1}^{N} p_i\,\delta(x - w_i) \qquad (3)$$

where

$$p_i = \int_{x_{lo} + (i-1)\Delta}^{x_{lo} + i\Delta} p(x)\,dx \qquad (4)$$

$$w_i = x_{lo} + \left(i - \tfrac{1}{2}\right)\Delta \qquad (5)$$
In these equations, $p_i$ corresponds to the probability that a sample of the random variable $X$ falls within the $i$'th bin $b_i$, and $w_i$ denotes the approximation of the values of samples of $X$ within $b_i$.
¹This nomenclature has been motivated by the similarity of these domains to the s-domain (Laplace domain), as some operations, such as filtering and band-passing, can be depicted and formulated more easily in the r-domain than in the p-domain, just as some operations are more easily applied in the s-domain than in the time domain.
$w_i$ is the mid-point of the particular bin; hence it is given as $x_{lo} + (i - \frac{1}{2})\Delta$.
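Equations (3)-(5) translate directly into a short numerical routine. The following Python sketch (a minimal illustration assuming NumPy and SciPy are available; the helper name q_n is ours, not from the paper) computes the bin probabilities and impulse locations for a given density:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def q_n(pdf, x_lo, x_hi, n_bins):
    """Discretize a band-pass filtered pdf into n_bins impulses.

    Returns (w, p): the impulse locations (bin mid-points, Eq. 5)
    and the bin probabilities (Eq. 4).
    """
    delta = (x_hi - x_lo) / n_bins                 # step size
    edges = x_lo + delta * np.arange(n_bins + 1)   # bin boundaries
    # p_i: integral of the pdf over the i'th bin (Eq. 4)
    p = np.array([quad(pdf, edges[i], edges[i + 1])[0]
                  for i in range(n_bins)])
    # w_i: mid-point of the i'th bin (Eq. 5)
    w = x_lo + (np.arange(n_bins) + 0.5) * delta
    return w, p

# Example: a standard Gaussian truncated at +/- 4 sigma.
w, p = q_n(norm.pdf, -4.0, 4.0, 10)
print(p.sum())  # ~0.99994; the deficit is the filtering error of Eq. (1)
```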
Assume that we have a number of random variables $X_1, \ldots, X_n$, whose sampled pdf's are respectively given by $r(x_1), \ldots, r(x_n)$. Let $Y$ be another random variable that is given by a deterministic function of the given random variables: $Y = f(X_1, \ldots, X_n)$. Then $r(y)$ is given by the $F$ operator as:
$$r(y) = F\{r(x_1), \ldots, r(x_n)\} = \sum_{s_1 \in S_1} \cdots \sum_{s_n \in S_n} \left( \prod_{i=1}^{n} p_{i,s_i} \right) \delta\big(y - f(w_{1,s_1}, \ldots, w_{n,s_n})\big) \qquad (6)$$

where $S_i$ is the set of all samples belonging to the random variable $X_i$, $p_{i,s_i}$ is the probability of the $s_i$'th impulse of $r(x_i)$, and $w_{i,s_i}$ is its location.
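A direct implementation of the $F$ operator enumerates all combinations of input impulses. The sketch below is our illustration, not the authors' implementation; it reuses the hypothetical q_n helper from the previous sketch and propagates two discretized distributions through a deterministic function:

```python
from itertools import product
import numpy as np
from scipy.stats import norm
# q_n is the illustrative discretization helper from the previous sketch.

def forward(f, inputs):
    """Propagate discretized pdfs through a deterministic function f (Eq. 6).

    inputs: list of (w, p) pairs, one per input random variable, as
    produced by q_n. Returns the impulse locations and probabilities
    of the output random variable Y = f(X1, ..., Xn).
    """
    y_locs, y_probs = [], []
    # Enumerate every combination of input bins.
    for combo in product(*(range(len(w)) for w, _ in inputs)):
        prob = 1.0
        args = []
        for (w, p), idx in zip(inputs, combo):
            prob *= p[idx]        # product of the chosen bin probabilities
            args.append(w[idx])   # impulse location chosen for this input
        y_locs.append(f(*args))   # deterministic function of the inputs
        y_probs.append(prob)
    return np.array(y_locs), np.array(y_probs)

# Example: Y = X1 * X2 for two independent, discretized Gaussians.
x1 = q_n(norm.pdf, -4.0, 4.0, 10)
x2 = q_n(norm.pdf, -4.0, 4.0, 10)
y_w, y_p = forward(lambda a, b: a * b, [x1, x2])
```

Note that the number of output impulses grows multiplicatively with the number of inputs, which motivates re-discretizing $r(y)$ back to $N$ bins before propagating it further up the formula hierarchy.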