Design for Yield and Reliability

SoC Yield Optimization via an Embedded-Memory Test and Repair Infrastructure
Samvel Shoukourian, Valery Vardanian, and Yervant Zorian
Virage Logic

Editor’s note:
Today, embedded memories are the most important contributor to SoC yield. To maximize embedded-memory yield, advanced test and repair solutions must be an integral part of the memory block. This article analyzes factors that affect memory yield and presents advanced techniques for maximizing their positive impact.
—Dimitris Gizopoulos, University of Piraeus

THE CHALLENGES to traditional yield and reliability for embedded memories require new design solutions. In a previous article, we presented a test and repair infrastructure IP that addresses this need.1 Here, we present factors that affect such solutions, including optimizations for memory test algorithms, data background patterns, and redundancy allocation algorithms with high repair efficiency. We analyze the impact of these factors on each component of the infrastructure and propose optimization mechanisms that increase yield. First, we review the previous article.

Impact of test and repair infrastructure on yield optimization
Embedded memory yield is likely to worsen as SoCs move to memory-dominant chips. An effective way to obtain further memory yield improvement is to use redundant, or spare, elements during manufacturing test.2 Historically, embedded memories have been self-testable but not repairable. Providing the right redundant elements does not entirely solve the problem. SoC designers must also know how to detect and locate memory defects and allocate redundant elements. This requires defect distribution knowledge and corresponding algorithms that utilize the knowledge. Without embedding such algorithms, designers can improve yield only moderately. Yield prediction is very important because too much redundancy wastes silicon area, and too little leads to poor yield.

Conventional memory test algorithms detect memory failures to determine whether or not a chip is defective. For repairable memories, however, fault detection is not enough. Repairable memories need fault location to determine which cells must be replaced. The greater the fault location coverage, the higher the repair efficiency, and hence, the obtained yield.

Three enhancements increase fault localization coverage. The first adds dedicated infrastructure IP modes, called test modes, to the memory IP design. The test modes include read margin control, stress test, ground and substrate isolation, adjustable setup and hold time, supply voltage operation range, and self-timing clock bypass.

The second enhancement adds nothing to a memory design but leverages its design information, specifically its scrambling knowledge, to determine topological data background patterns. The infrastructure IP uses background patterns to locate coupling faults between cells and between bitlines and to detect memory periphery weaknesses. Memory design information necessary for determining the exact scrambling includes bitline and address line twisting, bitline mirroring, contact and well sharing, column multiplexing, decoder optimization, port scrambling, and placement of redundant elements. This type of enhancement is not possible without intimate knowledge of the memory design and its compilation.


0740-7475/04/$20.00 © 2004 IEEE

Copublished by the IEEE CS and the IEEE CASS

IEEE Design & Test of Computers

The third enhancement optimizes the fault detection and localization algorithm for a given memory. One way to implement this enhancement is to perform inductive fault analysis on memory design information leveraged from the layout level and consequently generate a corresponding fault detection and localization algorithm. Another way is to use test chips or real SoCs to leverage the failure history of a given cell design and process technology. This information helps generate a dedicated test algorithm. Predefined test algorithms are not always sufficient to detect all memory defects because subtle process variations cannot be predicted and accounted for ahead of time.

The three enhancements complement one another. They can be implemented only if detailed knowledge of the memory design and manufacturing data is available.

An appropriate number of redundant elements and an enhanced fault location solution are not enough to guarantee repair. The analysis phase that determines the best allocation of redundant elements to replace defective elements is critical. If the design uses only one type of redundant element, such as columns or rows, redundancy allocation is simple.2 However, if the design uses a mix of redundant element types and multiple hierarchy levels, optimal redundancy allocation becomes complex. Unlike the offline or postprocessing algorithms used with the external test and repair method, the redundancy allocation algorithms in the infrastructure IP must run in real time during detection and location. The infrastructure IP has no storage space for the failed-bit map.

There are two types of redundancy allocation algorithms.
The primitive algorithm includes a predetermined sequence of redundant-element allocations based on failure history.3 The intelligent algorithm performs analysis at every step before allocating a redundant element.4 If it does not find a solution with the first allocation, it usually makes secondary attempts, either in parallel with the first to reduce execution time, or serially to save infrastructure-IP area. Selecting the best redundancy allocation algorithm requires a methodology for evaluating algorithms’ efficiency.5

A repair strategy describes the conditions for determining redundancy allocation and performing repair. There are four repair strategies.

■ Hard. This strategy requires the use of permanent storage to hold repair information after power is turned off. It does not require retesting and reconfiguration at every power-up.
May–June 2004







■ Soft. The repair signature is generated at every power-up. No fuse box is needed to implement this method because repair information is not stored upon power-down.
■ Combination. This method delivers the best of hard and soft repair. It uses hard repair in the factory to start the repair and cover faults undetectable by soft repair in the field.
■ Cumulative. To build on repair information, a memory stores its repair data in a reprogrammable, nonvolatile fuse box and uses prior data at the start of the next test and repair cycle.

Test algorithm programmability
The programmability of memory test algorithms is becoming a strong requirement for embedded memories. The ability to define a new algorithm or to choose from a database of test algorithms, and to pass various test algorithms to the test and repair infrastructure, gives users a flexible way of handling the complicated issues of memory fault detection, diagnosis, localization, and repair in nanometer technologies. Applying only one (default) test algorithm is often not sufficient in these technologies, so many users apply different test algorithms and background patterns to find the experimentally typical defects and relevant test algorithms that provide the highest defect coverage.

March test algorithms are widely used for fault detection in memories because of their effectiveness, simplicity, and linear computational complexity with respect to memory size. Researchers have developed a specific notation for describing march test algorithms.6 For effective testing, diagnosis, and fault location of new fault types, especially in new-technology memories, an enlarged class of march test algorithms, with an expanded notation, is necessary. For example, locating a given coupling fault’s aggressor word (bit) requires an addressing mechanism that can jump to the coupling fault’s victim word (bit) and read it after applying a sensitizing operation on the aggressor.

Recently introduced notations for march-like and march-based test algorithms7,8 can serve as march test extensions for better fault diagnosis and location. Previously developed special addressing stresses in test algorithms, such as fast-row, fast-column, and fast-bank addressing, also extend march test algorithms. Another type of memory-testing feature that is not reflected in existing notation is hardware support of testing modes—for example, hardware generation of special stress situations for memory read and write



Figure 1. Column twisting: simple (a) and complicated (b).

operations, and back-to-back read and write operations from the same or different address locations. Many memories now implement these features to reduce the number of test algorithms. Finally, the extended march test notation should include a mechanism for converting algorithms intended for single-port memories to algorithms intended for multiport memories.9

To meet these requirements, we developed a special format to implement programmability in several memory system compilers. We implemented three types of programmability: test algorithm, test algorithm element, and background pattern. These programmability features greatly expand test flexibility. For example, programming the test algorithm lets users

■ replace the default test algorithm with one from the database,
■ increase or decrease the test algorithm’s length by adding or removing certain march elements, and
■ build new test algorithms for multiport memories on the basis of single-port algorithms from the database.
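As a concrete, purely illustrative rendering of one march algorithm a database might hold, the well-known March C– can be run against a toy memory model. The memory size, the injected stuck-at-0 fault, and the Python rendering are all assumptions for this sketch, not the infrastructure IP's implementation; the point is only the linear sweep structure of march elements.

```python
# Sketch: a March C- run over a tiny memory model with one injected
# stuck-at-0 cell. Memory size and fault location are hypothetical.

class Memory:
    def __init__(self, size, stuck_at_zero=()):
        self.size = size
        self.stuck = set(stuck_at_zero)   # cells that always read 0
        self.cells = [0] * size

    def write(self, addr, val):
        self.cells[addr] = 0 if addr in self.stuck else val

    def read(self, addr):
        return 0 if addr in self.stuck else self.cells[addr]

def march_cminus(mem):
    """Return the addresses at which a read mismatched (faults located)."""
    faults = set()
    def element(order, ops):
        for addr in order:
            for op, val in ops:
                if op == 'w':
                    mem.write(addr, val)
                else:  # 'r': read and compare with the expected value
                    if mem.read(addr) != val:
                        faults.add(addr)
    up = range(mem.size)
    down = range(mem.size - 1, -1, -1)
    element(up,   [('w', 0)])              # (w0)  any order
    element(up,   [('r', 0), ('w', 1)])    # up (r0,w1)
    element(up,   [('r', 1), ('w', 0)])    # up (r1,w0)
    element(down, [('r', 0), ('w', 1)])    # down (r0,w1)
    element(down, [('r', 1), ('w', 0)])    # down (r1,w0)
    element(down, [('r', 0)])              # (r0)  any order
    return faults

mem = Memory(16, stuck_at_zero=(5,))
print(march_cminus(mem))   # the stuck-at-0 cell is located: {5}
```

Each march element makes one pass over the address space, so the whole run is linear in memory size, consistent with the complexity argument above.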

By programming the test algorithm element, users can

■ increase or decrease the length of certain march elements by adding new operations or excluding existing operations;
■ change the addressing mechanism so that instead of accessing all address locations in a predetermined order, the mechanism allows a jumping address feature;
■ apply special addressing stresses, such as fast-row, fast-column, and fast-bank, enabling access to memory address locations in a user-defined order with respect to row, column, and bank address variables; and
■ issue special test modes to activate hardware that provides stress situations for memory operations, as well as additional operations such as stress read/write and back-to-back read/write from the same or different address locations.
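Element-level programmability of this kind presupposes some textual description of march elements that the compiler can interpret. The little format below is invented for illustration only (the article does not disclose its actual format); it merely shows how a march specification might be parsed into direction/operation structures that a generator could then edit.

```python
# Sketch: a tiny parser for a march-notation-like format, illustrating
# how test-algorithm and element programmability might be exposed.
# The textual format itself is invented for this example.

def parse_march(spec):
    """E.g. '⇑(r0,w1); ⇓(r1,w0)' -> [('up', [('r',0),('w',1)]), ...]"""
    elements = []
    for elem in filter(None, (e.strip() for e in spec.split(';'))):
        direction = {'⇑': 'up', '⇓': 'down', '⇕': 'any'}[elem[0]]
        # strip the leading arrow and parentheses, split the operations
        ops = [(op[0], int(op[1])) for op in elem[2:-1].split(',')]
        elements.append((direction, ops))
    return elements

print(parse_march('⇕(w0); ⇑(r0,w1); ⇓(r1,w0)'))
```

Adding or removing operations inside a march element then reduces to editing the parsed `(direction, ops)` tuples before regenerating the test sequence.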

Background pattern programmability lets users change data background patterns by replacing the default pattern with another from the background pattern database.

Scrambling information in programmable data background patterns
Many SoC designers who use embedded memories want memory compilers that provide programmability of data background patterns for the memory test algorithm. In addition to the traditional checkerboard pattern, memory test algorithms need other topological (physical) patterns, such as solid, row stripe, column stripe, double row stripe, and double column stripe. Experiments have shown that the fault coverage of a march test algorithm for a memory can vary up to 35%, depending on the background pattern used.10

To extract logical background patterns, we need complete information on all kinds of scrambling. Scrambling information identifies the difference between the memory structure’s topological and logical features. We explored the memory layouts and extracted about 20 types of scrambling information—address, row or column, bank, and memory bitline—necessary to extract topological background patterns. For example, from column scrambling information, we understand the mapping between logical and topological column numbering.

As Figure 1 shows, there are two types of column twisting: simple and complicated. We define the simple column twist as the switching of two column locations. As a result, the bitline pair of one column twists with the corresponding bitline pair of the second column. Note that the two bitlines of both columns do not twist—that is, the topological left-to-right order of true bitline T and bar bitline B does not change for the bitlines of each column.


The complicated column twist is the switching of two column locations as a result of which the bitline pair of one column twists with the corresponding bitline pair of the second column, and the two bitlines of each column twist as well—in other words, the topological left-to-right order of T and B changes for the bitlines of each column.

After all necessary types of scrambling information have been extracted and verified for the typical cases in their ranges, the background pattern generation algorithm is activated. It divides the memory array into segments with respect to existing rows with twisted bitlines (called twist rows). A memory segment is a memory subarray for which it is possible to immediately extract a background pattern in Verilog code. The algorithm then parameterizes the pattern and generates the template for background patterns, covering all possible cases of memory segments and memory configuration parameters.
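The logical-to-topological column mapping can be illustrated with a short sketch. The twist map, array size, and checkerboard target below are hypothetical examples, not the compiler's actual scrambling format; the idea is only that the data written at logical addresses must be pre-scrambled so the physical image of the array is the intended topological pattern.

```python
# Sketch: deriving the logical data background that produces a physical
# (topological) checkerboard when columns are scrambled. The twist map
# below is a hypothetical example of extracted scrambling information.

def logical_checkerboard(rows, cols, col_map):
    """col_map[logical_col] -> topological column position.
    Returns the logical bit matrix whose physical image is a checkerboard."""
    return [[(r + col_map[c]) % 2 for c in range(cols)] for r in range(rows)]

# Simple column twist: logical columns 1 and 2 swap topological positions.
col_map = {0: 0, 1: 2, 2: 1, 3: 3}
pattern = logical_checkerboard(4, 4, col_map)
for row in pattern:
    print(row)
# Physically (after unscrambling) each row alternates 0,1,0,1 even though
# the logically written data is not a plain checkerboard.
```

The same construction extends to other patterns (stripes, double stripes) by substituting the target function of the topological row and column.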

Efficient design and evaluation of repair algorithms
We have developed a class of simple and efficient built-in redundancy allocation (BIRA) algorithms with sufficiently high repair coverage and relatively little hardware and time complexity, capable of working on the fly with a BIST algorithm. Here, we introduce a methodology and special notation for designing and evaluating these algorithms.

We defined basic parallel and sequential memory cell repair operations using redundant rows or columns and then defined formulas using algebraic combination to design the algorithms. A repair procedure consisting of a parallel repair by two single spare elements S1 and S2 is denoted (S1) || (S2). The notation (S1) ~ (S2) denotes a sequential application of these two repair elements. So, we can describe any parallel or sequential combination of simple repair algorithms in terms of this notation. For example, if A1 and A2 are any BIRA algorithms, then B = A1 || A2 and C = A1 ~ A2 denote two new algorithms obtained by parallel and sequential combination of A1 and A2. For arbitrary combinations of BIRA algorithms A1, A2, …, Ak, either parallel or sequential, we define a special class of simple BIRA algorithms.

In another article, we described three main classes of BIRA algorithms: sequential (SA in our notation), parallel (PA), and algorithms with preferences (AP).5 Prime BIRA algorithms with preferences require more hardware overhead but at the same time provide higher


repair coverage than prime BIRA algorithms without preferences. These classes of BIRA algorithms serve as the formal basis for the design of our new class of simple BIRA algorithms. Here, we define special operations, called sequential combination and parallel combination, for constructing new BIRA algorithms with elements of the already existing algorithms.

We define the class of combinational BIRA algorithms, denoted Ω, as follows: If A1, A2, …, As ∈ SA ∪ PA ∪ AP, any combination A = A1 ⊗ A2 ⊗ … ⊗ As, where ⊗ ∈ {||, ~}, s ≥ 1, is called a combinational BIRA algorithm, and A ∈ Ω. If A1, A2, …, As ∈ Ω, any combination A = A1 ⊗ A2 ⊗ … ⊗ As, where ⊗ ∈ {||, ~}, s ≥ 1, is also called a combinational BIRA algorithm, and A ∈ Ω. Note that by definition SA ∪ PA ∪ AP ⊆ Ω.

Thus, we can define the notion of a BIRA algorithm formula and special rules of equivalent transformation for BIRA algorithm formulas over the set of prime algorithms with and without preferences. In the defined class of combinational BIRA algorithms, we can define equivalent formulas obtained one from another by transformation. For example, we can use the following formulas of equivalent modifications to simplify BIRA algorithm formulas:

A ⊗ A = A, ⊗ ∈ {||, ~}
A || (B ~ C) = (A || B) ~ (A || C)
A ~ (B || C) = (A ~ B) || (A ~ C)

The class of combinational BIRA algorithms lets us approximate any possible BIRA algorithm. And any combinational BIRA algorithm is representable as a formula of primitive BIRA algorithms with and without preferences. We can use a special design and evaluation tool (described later) to design new combinational algorithms and compare the repair capability of any two BIRA algorithms.
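The || and ~ operators can be given an executable flavor. The sketch below is one plausible reading of the notation, not the paper's formal semantics: a prime algorithm maps a fault bitmap to the residual faults it could not repair, sequential combination feeds one algorithm's residue to the next, and parallel combination runs both on the same bitmap and keeps the better outcome. The greedy prime algorithms and the fault bitmap are hypothetical.

```python
# Sketch of the || (parallel) and ~ (sequential) combinators over simple
# BIRA algorithms. An algorithm maps a set of faulty (row, col) cells to
# the residual faults it could not repair (empty set = fully repaired).
from collections import Counter

def spare_rows(faults, n):
    """Greedy prime algorithm: replace the n rows holding the most faults."""
    rows = {r for r, _ in Counter(r for r, c in faults).most_common(n)}
    return {(r, c) for r, c in faults if r not in rows}

def spare_cols(faults, m):
    """Greedy prime algorithm: replace the m columns holding the most faults."""
    cols = {c for c, _ in Counter(c for r, c in faults).most_common(m)}
    return {(r, c) for r, c in faults if c not in cols}

def seq(a1, a2):
    """A1 ~ A2: apply A1, then A2 to the residual faults."""
    return lambda faults: a2(a1(faults))

def par(a1, a2):
    """A1 || A2: run both on the same bitmap and keep the better result."""
    return lambda faults: min(a1(faults), a2(faults), key=len)

rows2 = lambda f: spare_rows(f, 2)   # two spare rows
cols2 = lambda f: spare_cols(f, 2)   # two spare columns

faults = {(0, 0), (0, 5), (3, 1), (7, 1)}      # hypothetical fault bitmap
print(len(seq(rows2, cols2)(faults)))          # rows ~ cols repairs all: 0
print(len(par(rows2, cols2)(faults)))          # either alone leaves one: 1
```

Under this reading, formulas such as A ~ (B || C) compose directly as nested calls, which is why equivalent-transformation rules are useful for simplifying them.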

Repair algorithm evaluation
To evaluate and compare BIRA algorithms, we generate special reduced-fault bitmaps, called fault array graphs. These essentially reduce the number of memories to be considered for evaluation and bring the problem to the consideration of bitmaps with no unique way of repair (nonmust-repairable memories). The fault array graph’s size depends on the number



of available redundant rows (n) and redundant columns (m). We also want to use the minimum size necessary for full redundancy analysis and a final repair solution. With this in mind, it is possible to reduce the bitmap area of defective cells to a very small area, n(m + 1) × m(n + 1).4 To evaluate a BIRA algorithm’s repair coverage, it is sufficient to apply the algorithm on all representatives of nonmust-repairable-memory arrays from the constructed classes of n(m + 1) × m(n + 1)-dimensional reduced memory bitmaps.5 For smaller m and n, the number of reduced memory bitmaps is not too large and can be handled with an acceptable time complexity.

Figure 2. Fault array graph for a reduced-memory-fault bitmap with two spare rows and two spare columns.

Figure 2 shows the fault array graph for a reduced fault bitmap. For any reduced fault bitmap, we define the fault array graph as follows: The defective cells are the graph’s vertices. The pairs of vertices corresponding to defective cells in the same column or row are connected with lines and become the graph’s edges. This reduces the problem of repairing the bitmap with two spare rows and two spare columns to the problem of covering the fault array graph’s vertices with two horizontal and two vertical lines.

We partition the set of all nonmust-repairable fault bitmaps of the reduced memory array into special classes. For convenience, we can present the classes according to the graph-theoretical description of the possible allocations of defective cells in the n(m + 1) × m(n + 1)-dimensional bitmap. To reduce the number of class representatives, we introduce the notion of typically identical fault array graphs. Two fault array graphs are typically identical if we can obtain one from the other by extending or contracting at least one horizontal or vertical edge without changing the positions of graph vertices with respect to each other.
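The covering formulation for two spare rows and two spare columns is small enough to check by exhaustive search. The sketch below is an illustrative brute-force decision procedure, not the paper's on-the-fly BIRA hardware; the fault bitmaps are hypothetical.

```python
# Sketch: deciding whether a fault bitmap is repairable with two spare
# rows and two spare columns, phrased as covering the fault array graph's
# vertices with two horizontal and two vertical lines (brute force).
from itertools import combinations

def repairable(faults, n_rows=2, n_cols=2):
    rows = sorted({r for r, _ in faults})
    cols = sorted({c for _, c in faults})
    # try every choice of up to n_rows rows and n_cols columns to replace
    for rr in combinations(rows, min(n_rows, len(rows))):
        for cc in combinations(cols, min(n_cols, len(cols))):
            if all(r in rr or c in cc for r, c in faults):
                return True
    return False

# A diagonal of five faults needs five covering lines, so it is not repairable:
print(repairable({(i, i) for i in range(5)}))        # False
print(repairable({(0, 0), (0, 3), (2, 1), (4, 1)}))  # True
```

The exhaustive check is exponential in the general case, which is precisely why the article reduces evaluation to small n(m + 1) × m(n + 1) bitmaps.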
If a BIRA algorithm repairs a certain fault array graph, it also repairs all fault array graphs typically identical to the first one. In another article, we proposed exact and approximate approaches for evaluating and comparing different redundancy allocation and repair algorithms, including the following measure of repair coverage for BIRA algorithms.5 Consider classes C1, …, Ct of nonmust-repairable fault array graphs. Denote by r(A, Ci) the number of fault array graphs M ∈ Ci that are repairable by BIRA algorithm A. Then r(A, Ci) / |Ci| can


be considered A’s repairability (coverage) with respect to class Ci. The full coverage of BIRA algorithm A with respect to all classes C1, …, Ct of nonmust-repairable memory fault array graphs is

∆(A) = Σ(i=1 to t) r(A, Ci) / Σ(i=1 to t) |Ci|

We can compare any two BIRA algorithms using measure ∆( ). We consider BIRA algorithm A better than BIRA algorithm B if ∆(A) > ∆(B).
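The coverage measure translates directly into code. In the sketch below, the classes and the "algorithm" are toy stand-ins chosen for illustration; only the formula ∆(A) itself comes from the text.

```python
# Sketch: the repair-coverage measure over classes C1..Ct of
# nonmust-repairable fault array graphs.

def coverage(algorithm, classes):
    """Delta(A) = sum_i r(A, C_i) / sum_i |C_i|, r counting repaired members."""
    repaired = sum(sum(1 for graph in ci if algorithm(graph)) for ci in classes)
    total = sum(len(ci) for ci in classes)
    return repaired / total

# Toy algorithm: "repairs" any graph with at most four faults.
algo = lambda graph: len(graph) <= 4
classes = [
    [{(0, 0)}, {(1, 1), (2, 2)}],        # class C1: two graphs
    [{(i, i) for i in range(5)}],        # class C2: one graph, five faults
]
print(coverage(algo, classes))   # 2 of 3 graphs repaired -> 0.666...
```

Comparing two algorithms then reduces to comparing their ∆ values over the same classes, as the text states.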

Implementation
We plan to implement our methodology in a special toolkit supporting test and repair infrastructure design. Currently, three tools from the toolkit are available for internal use.

Design and evaluation tool
With this tool, users can evaluate combinational BIRA algorithms by calculating their repair coverages and comparing them. The tool also lets users estimate algorithms’ hardware and time complexities for trade-offs between these overheads and repair coverage. Users can synthesize simple BIRA algorithms, with and without preferences, that have acceptable repair coverage and hardware and time complexity from primitive BIRA algorithms. The tool also pseudorandomly generates fault bitmaps with a probabilistic distribution of fault types (single-, double-, triple-, and quad-bit; and single-row and single-column) in the fault array graphs, and it calculates probabilistic repair coverage of BIRA algorithms.

SoC yield calculation tool
This tool lets the user define memory instances on SoCs and calculate their yields according to a Poisson model of defect probability distribution.11 Specifically, the user can

■ define the types and sizes of all memory instances on the SoC, and the tool will calculate the SoC yield;
■ define some of the memory instances, and the tool will suggest the best configuration for the remaining part of the memory with the maximum possible SoC yield;
■ define the memory’s size on the SoC, and the tool will suggest the best configuration of memory instances, with the maximal possible SoC yield;
■ generate diagrams and graphical information on how the numbers of words, banks, column muxes, and so forth affect the yield of a memory instance for each memory type; and
■ calculate the SoC’s effective yield.

Table 1. Memory chip configuration and the selection between ASAP and STAR that results in maximum yield. (The five selection columns correspond to the cases of Table 2.)

| Memory type | No. of words | No. of bits (no. of columns) | Length of column mux | No. of instances | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 13 | 239 | 1 | 20 | ASAP | ASAP | ASAP | ASAP | ASAP |
| 2 | 17 | 233 | 1 | 16 | ASAP | ASAP | ASAP | ASAP | ASAP |
| 3 | 22 | 231 | 1 | 10 | ASAP | ASAP | ASAP | ASAP | ASAP |
| 4 | 20 | 160 | 1 | 8 | ASAP | ASAP | ASAP | ASAP | ASAP |
| 5 | 1,328 | 3 | 16 | 14 | ASAP | ASAP | STAR | ASAP | ASAP |
| 6 | 208 | 11 | 4 | 12 | ASAP | ASAP | STAR | ASAP | ASAP |
| 7 | 576 | 4 | 16 | 16 | ASAP | ASAP | ASAP | ASAP | ASAP |
| 8 | 240 | 10 | 4 | 20 | ASAP | ASAP | STAR | STAR | ASAP |
| 9 | 624 | 4 | 4 | 15 | ASAP | STAR | STAR | STAR | ASAP |
| 10 | 512 | 64 | 4 | 20 | STAR | STAR | STAR | STAR | STAR |
| 11 | 512 | 32 | 4 | 10 | STAR | STAR | STAR | STAR | STAR |
| 12 | 512 | 48 | 8 | 22 | STAR | STAR | STAR | STAR | STAR |
| 13 | 5,120 | 32 | 16 | 35 | STAR | STAR | STAR | STAR | STAR |
| 14 | 256 | 16 | 4 | 20 | STAR | STAR | STAR | STAR | ASAP |
| 15 | 256 | 10 | 4 | 8 | ASAP | ASAP | STAR | STAR | ASAP |
| 16 | 256 | 22 | 8 | 12 | ASAP | ASAP | STAR | STAR | ASAP |

Using repairable memories with spare elements increases the probability of repairing the faults on a memory die. As a result, the memory die yield increases. Because the spare elements increase the die area, fewer dies fit on the wafer. Therefore, even with higher yield, we might obtain fewer operational dies per wafer. The logical part of a memory instance does not have the same defect density as the memory core. To calculate yield for the entire chip, we need to know average defect densities for each part of the chip—that is, defect density for the memory core and defect density for the logical part. The overall chip yield is the product of the yields for the two parts.
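The Poisson model the tool uses can be sketched directly. The per-part areas and defect densities below are hypothetical, and a real tool would also fold in the redundancy-driven area increase and the probability of successful repair; this shows only the core yield calculation with separate densities for the memory core and the logic part.

```python
# Sketch of the Poisson yield model: Y = exp(-A * D) per chip part, with
# the chip yield the product over parts. Areas and densities are
# hypothetical example values.
from math import exp

def part_yield(area_in2, defect_density):
    """Yield of one part: area in square inches, density in defects/in^2."""
    return exp(-area_in2 * defect_density)

def chip_yield(parts):
    """parts: iterable of (area_in2, defect_density), e.g. memory and logic."""
    y = 1.0
    for area, density in parts:
        y *= part_yield(area, density)
    return y

memory = (0.15, 0.8)    # memory core: area, defect density (hypothetical)
logic = (0.25, 1.2)     # logic part (hypothetical)
print(f"chip yield = {chip_yield([memory, logic]):.2%}")
```

Because adding spare elements grows the memory-core area term, the model captures the trade-off described above: higher per-die yield against fewer dies per wafer.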

Scrambling-information generation tool
This tool automates generation of the necessary formats describing memory compiler scrambling information and applies the information to the extraction of topological background patterns for memory test algorithms. The tool’s inputs are the memory designer’s answers to specific questions about the memory structure, allowing the tool to generate all types of scrambling information.


Yield evaluation case study
Table 1 presents results for a memory chip with 260 embedded memory instances generated by the following memory compilers:

■ dual-port 16-Kbit area, speed, and power (ASAP) register file;
■ dual-port 16-Kbit self-test and repair (STAR) register file with one redundant block of columns;
■ single-port high-density 512-Kbit ASAP SRAM; and
■ single-port high-density 512-Kbit STAR SRAM with two redundant blocks of rows and two redundant blocks of columns, one column block for each half of the memory array.

We studied this chip for the five cases described in Table 2. ASAP memories have no redundant elements for repair. The yield calculation tool lets the user choose ASAP or STAR memory instances. Depending on the configuration of memory instances, the tool finds for each instance the choice with the highest yield. If the user selects the same type of memory for all instances, the result will not always be the highest yield. Table 1 shows five of the 16 types of memory chip configuration. Each configuration has a varying number of memory words and bits in a word; different



Table 2. Description of cases in Table 1.

| Case no. | Memory defect density (defects/in²) | Logic defect density (defects/in²) | Complexity factor | Chip yield, all ASAP (%) | Chip yield, all STAR (%) | Chip yield, ASAP/STAR selection with maximum yield (%) |
|---|---|---|---|---|---|---|
| 1 | 0.8 | 1.2 | 12.5 | 54.48 | 72.64 | 76.52 |
| 2 | 0.7 | 1.2 | 12.5 | 55.61 | 75.52 | 78.74 |
| 3 | 0.3 | 0.8 | 12.5 | 69.98 | 88.28 | 89.46 |
| 4 | 0.2 | 0.4 | 12.5 | 82.80 | 91.82 | 92.96 |
| 5 | 0.4 | 0.5 | 12.5 | 76.58 | 84.95 | 87.62 |

options for column multiplexing; and differing numbers of memory instances for each memory type. For the five cases we considered, the best solution with maximum yield includes both ASAP and STAR memories.

IN FUTURE PUBLICATIONS, we plan to demonstrate our approach’s application range, based on the opportunity it affords for building a variety of optimization mechanisms within the proposed models. ■

Acknowledgments
We thank K. Aleksanyan of Virage Logic for providing the field results.

References
1. Y. Zorian and S. Shoukourian, “Embedded-Memory Test and Repair: Infrastructure IP for SoC Yield,” IEEE Design & Test of Computers, vol. 20, no. 3, May-June 2003, pp. 58-66.
2. I. Kim et al., “Built-In Self-Repair for Embedded High-Density SRAM,” Proc. Int’l Test Conf. (ITC 98), IEEE Press, 1998, pp. 1112-1119.
3. T. Kawagoe et al., “A Built-In Self-Repair Analyzer (CRESTA) for Embedded DRAMs,” Proc. Int’l Test Conf. (ITC 00), IEEE Press, 2000, pp. 567-574.
4. D.K. Bhavsar, “An Algorithm for Row-Column Self-Repair of RAMs and Its Implementation in the Alpha 21264,” Proc. Int’l Test Conf. (ITC 99), IEEE Press, 1999, pp. 311-318.
5. S. Shoukourian, V. Vardanian, and Y. Zorian, “An Approach for Evaluation of Redundancy Analysis Algorithms,” Proc. IEEE Int’l Workshop Memory Technology, Design and Testing (MTDT 01), IEEE Press, 2001, pp. 51-55.
6. A.J. van de Goor, Testing Semiconductor Memories: Theory and Practice, John Wiley & Sons, 1991.
7. J.-F. Li et al., “March-Based RAM Diagnostic Algorithms for Stuck-At and Coupling Faults,” Proc. Int’l Test Conf. (ITC 01), IEEE Press, 2001, pp. 758-767.
8. V. Vardanian and Y. Zorian, “A March-Based Fault Location Algorithm for Static Random Access Memories,” Proc. IEEE Int’l Workshop Memory Technology, Design and Testing (MTDT 02), IEEE Press, 2002, pp. 62-67.
9. S. Hamdioui and A.J. van de Goor, “Thorough Testing of Any Multiport Memory with Linear Tests,” IEEE Trans. Computer-Aided Design, vol. 21, no. 2, Feb. 2002, pp. 217-231.
10. A.J. van de Goor and I. Schanstra, “Address and Data Scrambling: Causes and Impact on Memory Tests,” Proc. 1st IEEE Int’l Workshop Electronic Design, Test and Applications (DELTA 02), IEEE CS Press, 2002, pp. 128-136.
11. J.A. Cunningham, “The Use and Evaluation of Yield Models in Integrated Circuit Manufacturing,” IEEE Trans. Semiconductor Manufacturing, vol. 3, no. 2, May 1990, pp. 60-71.


Samvel Shoukourian is the director of Virage Logic’s Embedded Test and Repair Program. He also heads the Department of Algorithmic Languages at Yerevan State University, Armenia. His research interests include developing compilers for memory systems with embedded test and repair facilities. Shoukourian has a degree in computer science from the Polytechnic Institute, Yerevan, and has candidate and doctor of sciences degrees in computer science from the Institute of Cybernetics, Ukrainian Academy of Sciences, Kiev. He is a member of the National Academy of Sciences of Armenia.


Valery Vardanian is the manager of Virage Logic’s Test and Repair Methodology Group. He is also a senior research scientist at the National Academy of Sciences of Armenia, as well as an associate professor (off campus) in the Department of Algorithmic Languages at Yerevan State University, Armenia. His research interests include test and repair of memory devices. Vardanian has a PhD in computer science from Moscow State University, Russia.

The biography of Yervant Zorian appears on p. 182 of this issue. Direct questions and comments about this article to Samvel Shoukourian, Virage Logic, 47100 Bayside Parkway, Fremont, CA 94538; samshouk@viragelogic.com.
