Micro-Assembly Experiments with Transparent Electrostatic Gripper Under Optical and Vision-Based Control

Eniko T. Enikov, Lyubomir L. Minkov, and Scott Clark

Abstract—This paper describes the assembly experiments conducted with a novel miniature assembly cell for micro-electromechanical systems (MEMS). The cell utilizes a novel transparent electrostatic gripper and uses several disparate sensing modalities for position control: computer vision for part alignment with respect to the gripper, and a fiber-coupled laser with a position sensitive detector (PSD) for part-to-assembly alignment. The performed assembly experiments indicate that the gripping force and stage positioning accuracy of the gripper are sufficient for insertion of micro-machined parts into slots etched in silicon substrates. Details on the cell operation, the control algorithms used, and their limitations are also provided. Potential applications of the developed assembly cell are the assembly of miniature optical systems, integration of optoelectronics, such as laser diodes with CMOS, and epitaxial lift-off (ELO) of thin films used in optoelectronic devices.

Index Terms—electrostatic gripping, micro-assembly, visual servoing

I. INTRODUCTION

Manuscript received June 10, 2004. This is a revised version of a conference proceedings paper submitted to the 2004 ASME Congress. This work was supported in part by the National Science Foundation under grant No. DMI-0134585 and by Contract No. 256458 with Sandia National Laboratories. E. T. Enikov is with the Department of Aerospace and Mechanical Engineering, University of Arizona, Tucson, AZ 85721 USA (phone: 520-621-4506; fax: 520-621-8191; e-mail: [email protected]). L. Minkov and S. Clark are research assistants at the Department of Aerospace and Mechanical Engineering, University of Arizona, Tucson, AZ 85721 USA.

Although MEMS devices are usually fabricated via massively parallel photolithographic techniques, in some instances sequential assembly is required. For example, heterogeneous integration of vertical cavity surface emitting lasers (VCSELs) with silicon-based CMOS circuitry requires placement of the laser die onto a silicon substrate containing the electronic circuitry. Further applications include the assembly of dense arrays of high aspect-ratio structures, such as electrode arrays in IC probe cards. These assembly and packaging operations are costly and usually constitute the largest portion of the device's total cost. In order to increase the manufacturing throughput and reduce the re-tooling costs, it is desirable to develop flexible assembly schemes, allowing for quick adaptation to various part geometries and configurations. Responding to this need, research in this area has led to the development of visually servoed robotic systems, which utilize computer vision to generate knowledge about the position of objects in the robot's work space [1-3]. Research focused on micro-assembly with visual servoing has also matured [4,5] and has produced excellent image processing techniques for robot control in real time [6]. The two-dimensional limitations of the imaging systems have been partially overcome by techniques for extraction of three-dimensional position information using methods known as "depth-from-focus" or multiple CCD arrays [2,7]. In parallel with the software improvements, several research groups have developed micro-grippers actuated by vacuum [8], electrostatic comb drives [9], thermal actuators [10], or fluidic self-assembly [11] for use in computer-vision-controlled robotic systems. In most cases these grippers are application specific and thus require retooling when parts with variable geometry are used [12]. Further, the gripping force is applied point- or edge-wise, increasing the possibility of local surface damage due to stress concentrations.

The present work describes our effort to implement a micro-assembly cell based on a previously developed optically transparent electrostatic micro-gripper for visual servoing [13]. When compared to other approaches such as vacuum gripping [8], this technique has the advantages of applying a uniform and controllable clamping force, using a transparent gripper that allows complete observation of the part, and accommodating parts with different planar geometries. Thus, pick-and-place of vertical cavity surface emitting lasers (VCSELs) [14], lift-off of thin fragile films, and assembly of LIGA parts [15] are a few of the potentially relevant applications of this assembly technique.

II. SYSTEM LAYOUT AND OPERATION

The layout of the assembly cell is represented graphically in Figure 1a. The entire system is controlled by a personal computer equipped with a frame-grabber board (Sensoray Inc., USA) and a motion control board (PCI-7344 controller, National Instruments Inc., USA). A CCD camera (L-902K, Watec Inc., USA) provides a standard video stream to the frame-grabber. A servo-amplifier (MC-4SA, National Aperture Inc., USA) was used to amplify the control signal from the PCI-7344 I/O controller board and thus drive the two motorized linear stages (MM4-MX, National Aperture Inc., USA). A position sensitive detector (PSD) (S5990-01, Hamamatsu Inc., Japan), a 653 nm laser (FIB-635-1SM, LaserMax Inc.), and the electrostatic gripper are integrated into a gripper platform and installed on one of the two linear stages.

Fig. 1. (a) Block diagram of the assembly cell

Fig. 1. (b) Optical alignment system and slotted wafer (slot sizes are shown in the inset)

The semiconductor laser is coupled to a collimating GRIN lens via a single-mode optical fiber. An x-y-z translation stage (Newport Inc., USA) supporting a gimbal mirror mount (Edmund Industrial Optics, USA) is used to hold a slotted 3" silicon wafer in front of the gripper/laser assembly (see Figure 1b). The laser beam is reflected from the surface of this wafer and is directed to the PSD by an optical prism. In the present design only one axis (the linear stage) is servoed by scanning over the slotted surface of the wafer. Also shown in Figure 1b are the dimensions of the receptacle slots (550 µm × 50 µm) and the spacing between them (500 µm). The gripper consists of a glass substrate with transparent thin-film electrodes coated with an insulator and is described in detail in [13]. The optical transparency of the gripper makes possible the use of real-time visual servoing to align the part with respect to the gripper using the CCD camera. To achieve this, the pattern of the gripper itself is used. Figure 2 shows the gripper with a clamped part.

Fig. 2. Gripper with ITO electrodes holding a Ni part.

Figure 3 shows the laser, the collimator, the PSD, and the gripper integrated into the gripper platform as described earlier. Fine adjustment of the collimator's optical axis was achieved by two compressed O-rings threaded over the collimator housing.

Fig. 3. Stage with gripper platform mounted.

The assembly task consists of picking up the Ni part, which is then inserted sequentially into an array of slots as shown in Figure 4. The shank of the part had a cross section of 500 µm × 20 µm, and the automatic alignment was performed along the longer dimension (500 µm). The vertical position and tilt of the slotted wafer were aligned manually prior to the insertion tests. Because of the limitations of this alignment, relatively large clearances were provided: 50 µm and 30 µm for the 500 µm and 20 µm dimensions, respectively.

Fig. 4. Schematic of the insertion experiment: the part is inserted sequentially into an array of receptacle slots.

III. SYSTEM IDENTIFICATION

A simplified block diagram of the main components is shown in Figure 5. The z-transforms were obtained using first-order hold equivalents of the continuous signals with a sampling time Ts = 2.5 ms.

Fig. 5. Block diagram of the SIMULINK model.

A. Linear Stage and Encoder

The linear stage/motor is modeled as a second-order system containing a single time constant due to the inertial load of the system:

$$\frac{X(s)}{V(s)} = \frac{a_0}{s(1 + \tau s)}, \qquad (1)$$

where $V(s)$ and $X(s)$ are the Laplace transforms of the motor input voltage and the linear position of the stage, respectively. The parameters $a_0 = 0.138$ µm/(V·ms) and $\tau = 13.6$ ms were determined experimentally by applying a step voltage to the motor and recording the velocity output $\dot{x}(t)$. The servo amplifier gain is G = 1.25, and the motor is equipped with a 64 count/revolution encoder. The positioning experiments were conducted using a stage with a resolution of 0.075 µm/count, resulting from the 66:1 reduction gear connected to an 80 threads-per-inch (TPI) lead screw. The total travel of the stage was 48 mm. Lag in the feedback path of the system is caused by the processing time required for analog-to-digital and digital-to-analog signal conversion, floating point calculations, and feature tracking. Among these, the latter is the most significant and ranges between 36 ms and 111 ms for a 64 × 64 pixel feature in a 128 × 128 pixel search window. The time delay during optical feedback (PSD) with the five-point sliding differentiation rule (see Section IV-D) was significantly shorter, T = 2.5 ms.
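As a concrete illustration of the identified stage dynamics, the sketch below propagates model (1) exactly over one sampling period, assuming the motor voltage is held constant during each period (the same held-input assumption used in the stability analysis of Section IV-C). The numerical constants are the identified values quoted above; the 1 V step input is only an example.

```python
import math

a0 = 0.138   # identified stage gain, µm/(V*ms) (steady-state velocity per volt)
tau = 13.6   # identified time constant, ms
Ts = 2.5     # sampling period, ms
eta = Ts / tau   # dimensionless sampling time used in Section IV-C

def step_stage(x, v, V):
    """Exact update of X(s)/V(s) = a0/(s(1 + tau*s)) over one period Ts
    with the input voltage V held constant (x in µm, v in µm/ms, V in volts)."""
    e = math.exp(-eta)
    x_next = x + v * tau * (1.0 - e) + a0 * V * (Ts - tau * (1.0 - e))
    v_next = v * e + a0 * V * (1.0 - e)
    return x_next, v_next

# open-loop response to an assumed 1 V step: the velocity settles
# toward a0*V = 0.138 µm/ms with the 13.6 ms time constant
x, v = 0.0, 0.0
for _ in range(40):              # 40 samples = 100 ms
    x, v = step_stage(x, v, 1.0)
print(f"x(100 ms) = {x:.1f} µm, v(100 ms) = {v:.3f} µm/ms")
```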


B. Reflected Intensity Distribution

A mathematical model has been developed to describe the intensity of the reflected laser beam as a function of the position of the slot in the receptacle wafer. The intensity of the laser is modeled as a symmetric Gaussian distribution:

$$I(x, y) = I_{\max}\, e^{-2\,\frac{(x - x_0)^2 + (y - y_0)^2}{w_0^2}}, \qquad (2)$$

where $w_0$ is the beam waist and $x_0$, $y_0$ are the coordinates of the center of the beam. To model the reflected intensity $I_{ref}(x, y)$, the distribution $I(x, y)$ was convolved with a characteristic function of the slot,

$$H(x, y) = \begin{cases} 0, & x \in [x_H - \tfrac{w}{2},\, x_H + \tfrac{w}{2}] \text{ and } y \in [y_H - \tfrac{h}{2},\, y_H + \tfrac{h}{2}], \\ 1, & \text{otherwise}, \end{cases} \qquad (3)$$

where $x_H$ and $y_H$ are the coordinates of the centroid and $w$, $h$ are the width and the height of the slot. The power of the signal is acquired by integrating the intensity over an infinite area:

$$P(x_0, y_0) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} I_{\max}\, e^{-2\,\frac{(x - x_0)^2 + (y - y_0)^2}{w_0^2}}\, H\, dx\, dy. \qquad (4)$$

The beam radius is chosen such that $w_0 = w/2$. The results predicted by equation (4) were compared with experimental data acquired by passing the laser beam over an anisotropically etched rectangular slot in a silicon wafer.
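Because the characteristic function H is a rectangle, the double integral (4) separates into two one-dimensional Gaussian integrals and can be evaluated in closed form with error functions. The sketch below does this; the slot dimensions match the receptacles described in Section II, while the slot position and the unit peak intensity are assumed example values.

```python
import math

def gauss_segment(center, lo, hi, w0):
    """Integral of exp(-2*(u - center)**2 / w0**2) over [lo, hi]."""
    k = math.sqrt(2.0) / w0
    return 0.5 * w0 * math.sqrt(math.pi / 2.0) * (math.erf(k * (hi - center)) - math.erf(k * (lo - center)))

def reflected_power(x0, y0, xH, yH, w, h, w0, I_max=1.0):
    """Equation (4): total beam power minus the fraction falling into the slot,
    since H = 0 over the slot and 1 elsewhere."""
    total = I_max * (w0 * math.sqrt(math.pi / 2.0)) ** 2
    lost = I_max * gauss_segment(x0, xH - w / 2, xH + w / 2, w0) * \
                   gauss_segment(y0, yH - h / 2, yH + h / 2, w0)
    return total - lost

# 550 µm x 50 µm slot centered at the origin, beam waist w0 = w/2 (as chosen above)
w, h = 550.0, 50.0
for x0 in (-600.0, -300.0, 0.0):                      # scan positions, µm
    print(x0, round(reflected_power(x0, 0.0, 0.0, 0.0, w, h, w0=w / 2), 1))
```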

IV. CONTROL ALGORITHM

Two feedback loops have been implemented: one based on the signal from the CCD camera, and another based on sensor data from the PSD detector. Image-based visual servoing was implemented by employing the sum of squared differences (SSD) search algorithm [16]. Two SSD search modes were used: a spiral mode and a sequential mode. The spiral search starts from the last known feature position and probes for a match in a spiral manner. The sequential search performs consecutive line scanning. The spiral mode was used for detection of parts with a known initial location; the sequential mode was used when no estimate of the initial location of the tracked feature was available. Figure 6 shows a screen capture of the software interface developed for visual servoing. In this particular example three features, marked with rectangles, are selected for tracking. Two features belong to the gripper and one feature (the edge of the part) is selected for centering between the other two. Alignment of a part with respect to the gripper is achieved through the use of this image-based visual servoing algorithm. After pick-up of a part, the surface of the receptacle wafer is scanned for a slot. Upon alignment, part insertion is attempted via the manual x-y-z stage.

Fig. 6. Screen capture of the visual servoing software interface, with the tracked features marked by rectangles.
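For reference, the following sketch illustrates the two SSD search modes described above (sequential line scanning and spiral probing from the last known position). It is a minimal NumPy illustration rather than the authors' implementation; the search-window handling and the match-acceptance threshold are assumptions.

```python
import numpy as np

def ssd(patch, template):
    """Sum of squared differences between an image patch and the template."""
    d = patch.astype(np.float64) - template.astype(np.float64)
    return float(np.sum(d * d))

def sequential_search(image, template):
    """Scan the search window line by line (no initial estimate available)."""
    th, tw = template.shape
    best, best_rc = np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ssd(image[r:r + th, c:c + tw], template)
            if s < best:
                best, best_rc = s, (r, c)
    return best_rc

def spiral_search(image, template, last_rc, max_radius=32, accept=1e3):
    """Probe outward from the last known feature position; stop at the first
    radius containing a match below the (assumed) acceptance threshold."""
    th, tw = template.shape
    best, best_rc = np.inf, last_rc
    for radius in range(max_radius + 1):
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                if max(abs(dr), abs(dc)) != radius:
                    continue            # visit only the ring at this radius
                r, c = last_rc[0] + dr, last_rc[1] + dc
                if 0 <= r <= image.shape[0] - th and 0 <= c <= image.shape[1] - tw:
                    s = ssd(image[r:r + th, c:c + tw], template)
                    if s < best:
                        best, best_rc = s, (r, c)
        if best < accept:
            return best_rc
    return best_rc
```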

Optical servoing aligns the part with respect to a slot on the wafer. Successful alignment is indicated by a minimum in the total reflected light intensity measured by the PSD detector. Real-time alignment is achieved through an intensity-minimizing procedure, which utilizes measurements from the PSD sensor and from the magnetic encoder of the linear stage servomotor. The encoder information is also used for feed-forward position control of the linear stage. Two control algorithms for intensity minimization were tested: a spatial derivative method and a hill-climbing algorithm. The first is based on a real-time estimate of the spatial derivative of the PSD signal. The second is based on an intensity minimum search with a predefined step size.

A. CCD Camera Feedback

A simple proportional control law was used for the image-based visual servoing:

$$\ddot{x} + \frac{1}{\tau}\dot{x} = a_0 V(t), \qquad (5)$$

$$V = -K_p e_i, \qquad (6)$$

where $e_i$ is the positioning error established by the SSD search algorithm in the image space [6]. Due to the linear projection of the CCD camera [16], this error is proportional to the actual error in the physical space, resulting in a conventional proportional control law with $e_i = (x - x_0)/S$, where $S = 10.778$ µm/pixel is the camera magnification factor and $x - x_0$ is the position error in the physical space. The position error $x - x_0$ is determined with an accuracy of 1 pixel using a simple minimization algorithm on the CCD image space. While sub-pixel accuracy can be achieved with this method, for example by using the "center of mass" of the feature [17], it has not been implemented here, since the objective of this study is to demonstrate for the first time the use of an optically transparent gripper, and ample clearance (50 µm) was provided between the size of the part (peg) and the size of the slot (hole).

B. Optical Detector Feedback

The PSD-based optical positioning control was achieved using derivative control with respect to the space variable x. The control voltage was selected proportional to the derivative of the reflected intensity with respect to the spatial coordinate x:

$$V = -K_d \frac{dI_{ref}}{dx}, \qquad (7)$$

where $K_d$ is the derivative feedback gain. This results in zero control action when the spatial derivative of the reflected intensity reaches zero. The real-time implementation of this control poses the problem of estimating the spatial derivative of the reflected intensity. The local stability requirement in both cases (visual and optical servoing) reduces to

$$K_p > 0 \quad \text{and} \quad K_d > 0. \qquad (8)$$
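A minimal sketch of how the two feedback laws (6) and (7) might be evaluated once per control cycle is shown below. The camera scale S and the ±10 V actuation limit are taken from the text; the function names and the way the measurements reach the controller are assumptions.

```python
S = 10.778          # camera magnification, µm/pixel
V_MAX = 10.0        # usable actuation voltage bound, ±10 V

def clamp(v, limit=V_MAX):
    return max(-limit, min(limit, v))

def visual_control(e_pixels, Kp):
    """Proportional law (6): e_pixels is the SSD feature error in image space.
    The corresponding physical error is e_pixels * S micrometres."""
    return clamp(-Kp * e_pixels)

def optical_control(dIref_dx, Kd):
    """Derivative law (7): dIref_dx is the estimated spatial derivative of the
    reflected intensity (the regression estimator of Section IV-D)."""
    return clamp(-Kd * dIref_dx)

# example: a 5-pixel feature error (about 54 µm) with Kp = 600
print(visual_control(5, Kp=600), "V commanded (saturated at the ±10 V bound)")
```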

C. Time Delay and Stability Analysis

Lag in the feedback path of the system is caused by the processing time required for analog-to-digital and digital-to-analog signal conversion. In the case of digital control, the time delay imposes an upper limit on the values of the proportionality constants $K_p$ and $K_d$. Non-linear effects, such as the dead-zone due to dry friction inside the stage and the bounds on the usable actuation voltage, $V(t) \in [-10\ \text{V}, +10\ \text{V}]$, cause the system to deviate significantly from the linear regime. During each cycle the control signal is computed from the error measurement of the previous cycle, while the control input is held constant over the cycle. As a result, during each cycle equation (5) takes the form

$$\ddot{x}(t + jT) + \frac{1}{\tau}\dot{x}(t + jT) = a_0 V\big(T(j-1)\big), \quad t \in [0, T],\ j = 1, 2, \ldots \qquad (9)$$

Integrating (9) over one time period $T$ results in a discrete mapping describing the response of the digitally controlled system at times $t = 0, T, 2T, \ldots$:

$$z_{j+1} = \mathbf{L}\, z_j, \qquad (10)$$

$$\mathbf{L} = \begin{bmatrix} 1 & 1 - e^{-\eta} & \eta - 1 + e^{-\eta} \\ 0 & e^{-\eta} & 1 - e^{-\eta} \\ -K & 0 & 0 \end{bmatrix}; \qquad z_{j+1} = \begin{bmatrix} x_{j+1} \\ \dot{x}_{j+1}\tau \\ a_0 \tau^2 V_j \end{bmatrix}; \qquad \eta = \frac{T}{\tau};$$

$$K = \begin{cases} 2 b_1 K_d, & \text{for the optical case}, \\ S K_p, & \text{for the visual case}. \end{cases} \qquad (11)$$

The stability of (10) is investigated by applying Jury's criterion as described in [18] and [19]. The resulting stability conditions are:

$$A = 2 + 2e^{-\eta} + K(\eta - 2 + 2e^{-\eta} + \eta e^{-\eta}) > 0, \qquad (12)$$

$$B = 4 - K(\eta - 4 + 4e^{-\eta} + 3\eta e^{-\eta}) > 0, \qquad (13)$$

$$BC - AD > 0, \qquad (14)$$

where

$$C = 2 - 2e^{-\eta} - K(\eta + 2 - 2e^{-\eta} - 3\eta e^{-\eta}), \qquad (15)$$

$$D = K(\eta - \eta e^{-\eta}). \qquad (16)$$

The stability region for the gain $K_{crit}$ as a function of the dimensionless sampling time $\eta$ is shown in Figure 7. As expected, for small time delays we recover the continuous-time condition (8) derived for the system without time delay.

Fig. 7. Stability conditions satisfying Jury’s criteria.
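The stability boundary of Figure 7 can be reproduced numerically from conditions (12)-(16). The sketch below scans the gain K for a few values of the dimensionless sampling time η and reports the largest stable gain found; the scan range and step are arbitrary choices, and η ≈ 0.18 corresponds to Ts = 2.5 ms and τ = 13.6 ms.

```python
import math

def is_stable(K, eta):
    """Jury-criterion conditions (12)-(16) for the discrete map (10)."""
    e = math.exp(-eta)
    A = 2 + 2 * e + K * (eta - 2 + 2 * e + eta * e)
    B = 4 - K * (eta - 4 + 4 * e + 3 * eta * e)
    C = 2 - 2 * e - K * (eta + 2 - 2 * e - 3 * eta * e)
    D = K * (eta - eta * e)
    return A > 0 and B > 0 and (B * C - A * D) > 0

def K_crit(eta, K_max=1000.0, dK=0.01):
    """Largest gain K (within the scanned range) satisfying all conditions."""
    K = 0.0
    while K + dK <= K_max and is_stable(K + dK, eta):
        K += dK
    return K

for eta in (0.05, 0.18, 0.5, 1.0):
    print(f"eta = {eta:.2f}  ->  K_crit ~ {K_crit(eta):.2f}")
```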

D. Real-Time Implementation

Optical alignment using the PSD was implemented using the proportional control law (7). Initialization was conducted in a region of the wafer devoid of slots, resulting in an estimate of the maximum reflected intensity $I_{ref}^{\max}$. The stage was commanded to move at full speed toward the location of the slot until the reflected intensity dropped below a given fraction $C_{off}$ of its initial value. At this point the proportional law was activated. A non-linear filter was used to reduce the effect of the noise: a p-point linear regression was used to fit the line $I = ax + b$ to the measured intensity,

$$\min_{a,\, b} \sum_{i = n-p+1}^{n-1} (I_i - a x_i - b)^2, \qquad (17)$$

where the coefficient $a$ was used to approximate the spatial derivative, $dI_{ref}/dx \approx a$. The estimate requires the solution of the following system of equations:

$$\begin{bmatrix} \sum_{i=n-p+1}^{n-1} x_i^2 & \sum_{i=n-p+1}^{n-1} x_i \\ \sum_{i=n-p+1}^{n-1} x_i & p \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} \sum_{i=n-p+1}^{n-1} x_i I_i \\ \sum_{i=n-p+1}^{n-1} I_i \end{bmatrix}, \qquad (18)$$

resulting in the following values for the coefficients $a$ and $b$:

$$a = \frac{p \sum_{i=n-p+1}^{n-1} x_i I_i - \left(\sum_{i=n-p+1}^{n-1} I_i\right)\left(\sum_{i=n-p+1}^{n-1} x_i\right)}{p \sum_{i=n-p+1}^{n-1} x_i^2 - \left(\sum_{i=n-p+1}^{n-1} x_i\right)^2}, \qquad (19)$$

and

$$b = \frac{\left(\sum_{i=n-p+1}^{n-1} x_i\right)\left(\sum_{i=n-p+1}^{n-1} x_i I_i\right) - \left(\sum_{i=n-p+1}^{n-1} x_i^2\right)\left(\sum_{i=n-p+1}^{n-1} I_i\right)}{\left(\sum_{i=n-p+1}^{n-1} x_i\right)^2 - p \sum_{i=n-p+1}^{n-1} x_i^2}, \qquad (20)$$

where $x_i$ is the position measurement at time step $i$, $I_i$ is the corresponding intensity measurement, and $p$ is the number of points used for the linear regression. Positioning experiments using the reflected intensity were first conducted on a Si wafer containing 1.4 mm wide etched cavities, spaced 8.2 mm from one another. Figure 8 shows the reflected intensity (top) and its unfiltered spatial derivative (bottom) as the laser is scanned across these cavities. A simple two-point differentiation scheme and equation (7) were used to obtain the derivative of the signal, resulting in substantial noise. A significant noise reduction and accuracy improvement can be achieved if equation (19) is used instead. Figure 9 illustrates the noise rejection for a large number of points (p = 50). The measured signal-to-noise ratio was 60 dB. Using p points, the resulting time lag can be estimated as T_delay = (p/2)T, with T being the duration of one sampling cycle. At low velocities even a small amount of noise in the signal is greatly amplified, causing unstable behavior in the vicinity of the desired position. To avoid this, the controller is turned off when the motor control signal enters the friction dead-zone (±0.3 V).

Fig. 9. Comparison between filtered and unfiltered intensity derivative p=50 points. The filtered derivative is smooth and can be used for data processing.
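The p-point regression filter of equations (17)-(19) is a sliding least-squares fit whose slope serves as the derivative estimate in control law (7). A minimal sketch is given below; the buffer of the last p samples and the example data are assumptions.

```python
def regression_slope(xs, Is):
    """Slope a of the least-squares line I = a*x + b fitted to the last p
    samples (equation (19)); used as the estimate of dI_ref/dx."""
    p = len(xs)
    Sx = sum(xs)
    SI = sum(Is)
    SxI = sum(x * i for x, i in zip(xs, Is))
    Sxx = sum(x * x for x in xs)
    denom = p * Sxx - Sx * Sx
    if denom == 0.0:       # stage has not moved; the derivative is undefined
        return 0.0
    return (p * SxI - SI * Sx) / denom

# sliding window of the last p = 5 (encoder position, intensity) samples - assumed data
positions   = [10.0, 10.2, 10.3, 10.5, 10.6]   # µm
intensities = [0.95, 0.93, 0.90, 0.86, 0.83]   # normalized PSD signal
print("dI/dx ~", round(regression_slope(positions, intensities), 3))
```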

V. INSERTION EXPERIMENTS

A. Error Estimates

Due to the time delay, dead-zone effects, and the non-linear characteristics of the PSD detector, an accurate prediction of the steady-state error is cumbersome [18]. While accuracy can be improved by increasing the proportional gain, the stability limits cannot be exceeded. Position error estimates can be obtained from the control voltage dead-zone of the motor:

$$|x - x_H| = \frac{V_t}{K}, \qquad (21)$$

where $K$ is given by (11) and $V_t = 0.3$ V is the threshold voltage due to the dead-zone of the actuator. Table I shows the steady-state positioning error using the maximum magnification of the camera. The reported errors indicate the maximum error out of five and one hundred experiments for the visual and optical positioning tests, respectively. Both errors were determined with respect to the encoder readings (±0.075 µm/count accuracy). In the case of visual servoing, the gripper was used to place the part at a reference location and was subsequently commanded to return to this position using visual servoing. The error between the reference and the final encoder position was then recorded as the positioning error of the visual servoing (see Table I). Error estimates predicted by equation (21) are also provided. The established accuracy corresponds to the resolution of the visual search algorithm, i.e., one pixel. As pointed out earlier, this can be improved through software algorithm improvements; the objective of this study, however, was only to demonstrate the utility of an optically transparent gripper in visual servoing, and not to establish the ultimate limits of the visual servoing approach.

Fig. 8. Intensity and unfiltered intensity derivative over 3 holes. The noise of the intensity spatial derivative in the vicinity of the slot is significant.

TABLE I
VISUAL SERVOING POSITION ERROR

  K_p                        100     600    2000
  ε, µm (experimental)        97      11      64
  ε, µm (equation (21))       86      14       4

The positioning errors of the optical alignment were determined from a separate experiment. The laser was scanned across a slot and the reflected intensity versus encoder reading was recorded. The location of the minimum of this curve was assumed to be the center of the slot. An automatic centering experiment was then performed using one of the two algorithms, and the positioning error with respect to the minimum point was recorded, as illustrated in Figure 10. A summary of the resulting errors and the corresponding MATLAB simulations is shown in Table II and Table III, respectively. Table III also provides the error predicted by equation (21). The presence of noise in the model increases the steady-state error, in agreement with the experimental observations. The MATLAB model indicates that the time delay also contributes to the increase of the steady-state error.

Fig. 10. Error measurement results for the optical alignment system. The bold portion of the curve visualizes the path of the stage.

TABLE II
OPTICAL SERVOING POSITION ERROR

  K_d        ε, µm (5 points)   ε, µm (10 points)   ε, µm (15 points)
  10000            320                 300                 300
  95000             50                  15                   2
  200000            70                  30                  20
  (p = 5, 10, 15 points and C_off = 85%)

TABLE III
OPTICAL SERVOING POSITION ERROR SIMULATIONS

  K_d                             10000    95000    200000
  ε, µm, equation (21)               68        7         4
  ε, µm, MATLAB (no noise)           87       22        13
  ε, µm, MATLAB (with noise)        280       90        50
  (p = 5 and C_off = 85%)

E. Peg-in-a-hole Experiment

C-shaped Ni micro-electrodes were inserted sequentially into an array of rectangular slots. A series of insertion experiments was conducted according to the schematic illustrated earlier (see Fig. 4). The micro-machined nickel part was first aligned with respect to the gripper and clamped. The stage was then commanded to move to the beginning of the row of 550 µm × 50 µm rectangular slots (receptacles), and the optical alignment procedure was run consecutively for each slot. After each optical alignment, a manual insertion was performed to verify the alignment. No part sliding along the surface of the gripper was observed. A total of eight slots were present in each row. Both the intensity-gradient and hill-climbing algorithms were tested. The first resulted in an accuracy of ±7 µm and a relatively poor success rate of 37.5%; however, the speed of this alignment was on the order of tens of milliseconds. Using the hill-climbing algorithm [20], the positioning error was reduced to ±4 µm, with a positioning time of 10 to 15 seconds. Five series of eight consecutive insertions were successfully performed. Subsequent analysis of the experimental conditions showed that the difference in the success rate was not solely due to the poorer positioning accuracy of the intensity-gradient method, but rather due to misalignment of the roll and yaw angles of the slotted wafer (see the rotations Ry and Rx in Fig. 1b). Since this alignment was performed manually prior to each insertion experiment series, a small variation in the degree of alignment, combined with the relatively lower positioning accuracy of the gradient-based method, resulted in only partially successful insertions. When proper alignment was made between the slotted wafer and the visually servoed stage (a tedious task), the part was inserted successfully into all eight slots. Figure 11 illustrates the insertion sequence and release of the part as seen through the gripper.

Fig. 11. Assembly sequence.

VI. SUMMARY AND CONCLUSIONS

Successful "peg-in-a-hole" assembly experiments have been demonstrated using an optically transparent electrostatic gripper. Two algorithms were used for alignment of the part with respect to the slot. The hill-climbing algorithm proved to be more accurate than the spatial gradient method; however, the observed failures of the gradient-based method cannot be explained by its relatively poorer accuracy alone, but are rather the result of inconsistent manual wafer/stage alignment. Complete automation of all degrees of freedom is therefore needed in order to demonstrate a repeatable assembly operation using the developed gripper. Such development is outside the scope of this report. The spatial derivative method, however, had the advantage of greater speed at modestly reduced accuracy. The gripping force has been shown to be sufficient for the insertion of parts with 50 µm clearance. The uniformity of the electrostatic attachment proved useful in handling misaligned parts, which were able to slide along the gripper surface without damage.

The transparency of the gripper also allows for reliable part tracking; however, the ITO layer was not always sufficiently opaque for reliable tracking. Illumination of the stage was one of the major factors determining the robustness of the tracking algorithm. Future gripper generations should have separate opaque alignment marks to improve the tracking accuracy.

ACKNOWLEDGMENTS

The authors acknowledge the support of this work in part by grant No. DMI-0134585 from the National Science Foundation and by Contract No. 256458 with Sandia National Laboratories. The authors also acknowledge the useful discussions and support of James (Red) Jones of Sandia National Laboratories during the course of this project.

REFERENCES

[1] D. Kugelmann, "Autonomous robotic handling applying sensor systems and 3D simulation," in Proc. 1994 IEEE Int. Conf. on Robotics and Automation, San Diego, CA, 1994, pp. 196-201.
[2] G. D. Hager, W.-C. Chang, and A. S. Moore, "Robot feedback control based on stereo vision: towards calibration-free hand-eye coordination," in Proc. 1994 IEEE Int. Conf. on Robotics and Automation, San Diego, CA, 1994, pp. 2850-2856.
[3] S. Wang, R. Cromwell, A. Kak, I. Kimura, and M. Osada, "Model-based vision for robotic manipulation of twisted tubular parts: using affine transforms and heuristic search," in Proc. 1994 IEEE Int. Conf. on Robotics and Automation, San Diego, CA, 1994, pp. 208-215.
[4] B. J. Nelson and P. K. Khosla, "Visually servoed manipulation using an active camera," in Proc. Thirty-Third Annual Allerton Conf. on Communication, Control, and Computing, University of Illinois at Urbana-Champaign, Oct. 4-6, 1995.
[5] N. Papanikolopoulos, "Selection of features and evaluation of visual measurements during robotic visual servoing tasks," J. Intelligent and Robotic Systems, vol. 13, no. 3, pp. 279-304, 1995.
[6] B. Vikramaditya and B. J. Nelson, "Visually guided microassembly using optical microscopes and active vision techniques," in Proc. 1997 IEEE Int. Conf. on Robotics and Automation, Albuquerque, NM, 1997, pp. 3172-3177.
[7] B. Nelson, N. Papanikolopoulos, and P. K. Khosla, "Visual servoing for robotic assembly," in Visual Servoing, K. Hashimoto, Ed., Series in Robotics and Automated Systems, vol. 7, World Scientific Publishing, 1993, pp. 139-164.
[8] B. Vikramaditya, B. J. Nelson, G. Yang, and E. T. Enikov, "Microassembly of hybrid magnetic MEMS," Journal of Micromechatronics, vol. 1, no. 2, pp. 99-116, 2000.
[9] C.-J. Kim, R. S. Muller, A. P. Pisano, and M. G. Lim, "Polysilicon microtweezers," Sensors and Actuators A (Physical), vol. 33, pp. 221-227, 1992.
[10] K. Tsui, A. A. Geisberger, M. Ellis, and G. D. Skidmore, "Micromachined end-effector and techniques for directed MEMS assembly," J. Micromech. Microeng., vol. 14, pp. 542-549, 2004.
[11] U. Srinivasan, D. Liepmann, and R. T. Howe, "Microstructure to substrate self-assembly using capillary forces," J. Microelectromech. Syst., vol. 10, pp. 17-24, 2001.
[12] C. Keller, "Microfabricated high aspect ratio silicon flexures," Ph.D. thesis, 1998.
[13] E. T. Enikov and K. V. Lazarov, "Optically transparent gripper for microassembly," in Microrobotics and Microassembly III, Proc. SPIE, vol. 4568, pp. 40-49, 2001.
[14] B. Corbett, K. Rodgers, F. A. Stam, D. O'Connell, P. V. Kelly, and G. M. Crean, "Low-stress hybridisation of emitters, detectors and driver circuitry on a silicon motherboard for optoelectronic interconnect architecture," Materials Science in Semiconductor Processing, vol. 3, pp. 449-453, 2000.
[15] C. M. Egert and K. W. Hylton, "Automated array assembly: a high throughput, low cost assembly process for LIGA-fabricated micro-components," Microsystem Technologies, vol. 4, pp. 25-27, 1997.
[16] J. Alex, B. Vikramaditya, and B. J. Nelson, "A virtual reality teleoperated interface for assembly of hybrid MEMS prototypes," in Proc. 25th Biennial ASME Mechanisms Conf. (DETC), Atlanta, GA, Sept. 13-16, 1998.
[17] N. M. Yu and T. Shibata, "A real-time center-of-mass tracker circuit implemented by neuron MOS technology," IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, vol. 45, no. 4, 1998.
[18] E. Enikov and G. Stepan, "Microchaotic motion of digitally controlled machines," Journal of Vibration and Control, vol. 4, no. 4, pp. 427-443, 1998.
[19] B. C. Kuo, Digital Control Systems, SRL Publishing, Champaign, IL, 1977.
[20] K. S. Mobarhan, M. Hagenbuechle, and R. Heyler, "Fiber to waveguide alignment algorithm," Newport Corporation, Irvine, CA.

Eniko T. Enikov received his M.S. degree from the Technical University of Budapest in 1993 and his Ph.D. degree from the University of Illinois at Chicago in 1998, followed by a two-year post-doctoral fellowship at the Advanced Microsystems Laboratory of the University of Minnesota, 1998-2000. He is currently an Assistant Professor in the Department of Aerospace and Mechanical Engineering at the University of Arizona. His current research is focused on the design and fabrication of micro-electromechanical systems (MEMS), the development of theoretical models of active actuator materials used in MEMS, and the development of relevant applications. Dr. Enikov's group at the University of Arizona has an ongoing research and development program on tactile displays, electrostatic micro-grippers for assembly of MEMS, and nano-assembly of macro-molecules using electrostatic fields. Dr. Enikov is a member of the following professional societies: ASME, SPIE, ASEE, and SEM.

Lyubomir L. Minkov received his M.S. degree in Radio-Communication Systems from the Technical University of Sofia, Bulgaria, in 2001. From the beginning of 2003 until June 2004 he was a Research Assistant at the Aerospace and Mechanical Engineering Department, University of Arizona, Tucson, under Dr. Enikov's supervision. During that period he worked on the assembly of micro-parts. Currently Mr. Minkov is a Ph.D. student at the Electrical and Computer Engineering Department, University of Arizona.

Scott Clark was born in Omaha, NE. He is an undergraduate pursuing a B.S. in mechanical and aerospace engineering at the University of Arizona, graduating in 2005. He is currently a Research Assistant in the Advanced Microsystems Laboratory at the University of Arizona. In 2004, he was also an intern at Sandia National Laboratories, working on the development of micro-assembly systems for MEMS devices.