Design of an Advanced Microassembly System for the Automated Assembly of Bio-Microrobots Martin Probst, Karl Vollmers, Bradley E. Kratochvil, Bradley J. Nelson Institute of Robotics and Intelligent Systems ETH Zurich 8092 Zurich, Switzerland {mprobst, kvollmers, bkratochvil, bnelson}@ethz.ch

Abstract— Three-dimensional hybrid MEMS devices are gaining importance as new sensor and actuator designs are developed. It is widely believed that such systems will eventually be used for biomedical applications, such as non-invasive surgery or high-precision drug delivery. However, the key criterion for the success of hybrid MEMS is industry acceptance. Fabrication of the 2.5D building blocks uses standard clean room processes, whereas the combination of those units into large and complex devices requires precise and robust microassembly stations. This paper presents such a system in its second generation. The mechanical design as well as the vision setup and software components are presented in detail.

I. INTRODUCTION

Bio-microrobotics is a combination of robotics and micro-electro-mechanical systems (MEMS) technologies for biomedical applications. It aims at the construction of intelligent sub-millimeter sized devices using standard microfabrication processes and at equipping them with advanced robotic handling and/or sensing tools. An example of such a first generation microrobot is shown in Figure 1. It would be externally steered and powered by magnetic fields in order to fulfil tasks inside certain parts of the human body, e.g. the eye or the bloodstream. The complexity of such a device demands a high degree of integration of its individual components. A promising approach is the concept of hybrid MEMS, where incompatible materials and manufacturing processes as well as 2.5D shape limitations are overcome [1]. The aggregation of complex hybrid MEMS calls for advanced microassembly systems that are capable of ultra-precisely assembling microparts within a reasonable amount of time. Ideally, the operation of such a system would be as simple and intuitive as assembling macroparts by hand. This would make microassembly systems indispensable members of any lab working in the field of (hybrid) MEMS. This work demonstrates an advanced microassembly station in its second generation. It starts off with a brief literature overview and continues with a detailed description of the mechanical, vision and software components. Some ideas and future concepts are provided at the end of this paper.

II. PREVIOUS RESEARCH

A large number of microassembly systems for various applications have been developed over the past few years. They


Fig. 1. Microrobot components (fabricated from Ni) and the assembled device. This example is an enlarged version for testing purposes. The little inset shows the device on a fingertip. These components are manufactured and assembled at IRIS.

can be classified as parallel microassembly, self-assembly and serial microassembly systems [2], [3]. Whereas the first two methods aim at the mass production of components, serial microassembly offers more flexibility, and parts can be of higher complexity, since the individual components can mostly be translated and oriented in full 6 DOF. Computer vision has proven to be a robust aid for coping with the high precision requirements and the vastly different physics governing part interactions at the microscale; some interesting work can be found in [1], [4], [5], [6]. Visually guiding components to their final position, either in a semi- or fully automatic mode, can be supported by manufacturing devices with snap-lock features [7], [8], [9]. Those LEGO-like building blocks are more complex but make an additional bonding unit (e.g., a glue dispenser) obsolete. Interesting work on microassembly concepts has been done by [10] and [11]. A good overview of environmental influences on microassembly processes, as well as the construction of a "controlled climate system", can be found in [12]. We envision a versatile full 6 DOF microassembly station that can handle a large variety of parts using a simple and cost-efficient hardware setup under non-cleanroom conditions. It should also provide a simple interface and assist users with a semi- or fully-automatic mode.

Fig. 2. CAD model of the new microassembly system, consisting of a 4 DOF base unit, a 2 DOF top unit, an illumination dome in the center and 3 microscopes with CCD cameras attached to a ring above the whole structure.

III. MICROASSEMBLY SYSTEM SETUP

Fig. 3. Kinematic setup of the microassembly system with all rotational axes meeting in one point (concept of remote center of motion, RCM). The little inset shows the configuration of one of the three camera units.

Fig. 4. Microfabricated workbench square with holes of various sizes into which microparts can be inserted. Constant airflow, generated by a vacuum on the lower side of the workbench, sucks parts down and helps release them from the gripper.

A. Mechanical Setup

The microassembly system presented here is based on a previous system built at IRIS three years ago (details can be found in [13]). A large number of experiments and input from different users led to the final design shown in Figure 2. The current system consists of a base unit, a top unit, three camera units and a cover that holds the illumination dome. The kinematic setup can be seen in Figure 3. The base unit consists of a robust and precise rotation table (Θ-axis, Newport RV160CC-F) with an integrated slip ring that transmits 36 electrical wires and 2 pneumatic/vacuum lines and therefore allows full 360◦ rotation in both directions. On top of the rotation table there is an xyz-stage (xyz-axes, Sutter Instrument Company MP-285) with the working table attached to its end. The working platform has a diameter of 25 mm and a square microfabricated insert with a side length of 15 mm, with fixtures in the shape of holes into which microparts of different sizes can be docked with high precision. A vacuum applied on the lower side of the insert ensures constant airflow through

the holes and helps release parts from the gripper (see Figure 4). The top unit defines a kinematic chain of two rotational axes η and ζ at a 90◦ angle, both driven by a combination of DC motors and Harmonic Drives® gears in order to minimize backlash. The end effector at the end of the upper arm is of a modular design and allows changing tools with minimal effort. Good results have been achieved with a microgripper fabricated by micro-wire EDM and driven by a DC motor (Faulhaber 0816 008S), with a thickness of 100 µm, a maximum tip opening of 800 µm and a maximum gripping force of 0.072 N. For the handling of smaller objects this gripper can be exchanged for a miniature MEMS microgripper mounted on a PCB that was designed and fabricated at this institute. This electrostatically driven device handles objects in the range of 5 – 200 µm and has integrated force measuring capabilities with a resolution of 70 nN [14] (see Figure 5). The base as well as the top unit are each mounted on additional translation stages which allow movements in the

Fig. 5. DC-motor driven microgripper for 200 – 800 µm sized objects (right) and MEMS gripper and holder for 5 – 200 µm objects. The little inset shows a model of the MEMS gripper with integrated force sensor.

x-direction (cx axis, Giroud Type-100) and the y-direction (cy axis, Schneeberger Minimodul/Minirail), respectively. Both axes are used for calibration, i.e., to create a remote center of motion (RCM) at the tool center point (TCP), in this case the tip of the microgripper. For that purpose, two lasers LΘ and Lη are mounted on the Θ and η axes, creating focused laser beams that intersect at the desired RCM point. Strong reflections on the gripper tip are recorded by all cameras and used to align the axes. Work is in progress to fully automate this calibration step. The alignment of the third rotational axis (ζ) does not incorporate a laser due to space limitations. However, the tip of various grippers can easily be aligned by rotating the ζ axis ±90◦ and observing lateral displacements of the gripper tip with the cameras. Calibration of the axes usually has to be redone whenever a new gripper is installed, since its tip might not coincide with the previous RCM anymore. Once a gripper is calibrated, moving in and out along the cy axis for loading parts onto the working table (see Figure 6) does not require recalibration, since the calibration position can be stored. Table I shows a complete list of the performance of all axes. The xyz-stage yields a high resolution, which accounts for the fact that translational precision scales with size [15]. The present microassembly system occupies a volume of 1220 × 620 × 670 mm and is designed to be mounted on the grid of a Newport vibration table. The absolute workspace has the shape of a cube with a side length of 25 mm.

B. Vision and Illumination System

High precision microassembly is strongly dependent on a high performance vision system that provides a clear image for the user as well as for the auxiliary computer vision modules. Three IEEE 1394 cameras (Basler A602fc) with variable zoom microscope lenses (Edmund Scientific VZM-300i) are equally spaced around the center at a 45◦ angle to the horizontal plane.
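The ζ-axis alignment just described (rotate the axis, observe lateral tip displacement) amounts to locating a center of rotation from observed tip positions. A minimal sketch of such a circle fit (the Kåsa least-squares method; the helper below is our illustration, not the system's actual calibration routine):

```python
import numpy as np

def fit_rotation_center(tips):
    """Kasa least-squares circle fit. `tips` is an (N, 2) array of tool-tip
    positions observed in the image plane while rotating the zeta axis.
    Returns the estimated rotation center (xc, yc) and the tip's radial
    offset r; r close to zero means the tip coincides with the RCM."""
    tips = np.asarray(tips, dtype=float)
    x, y = tips[:, 0], tips[:, 1]
    # circle equation: x^2 + y^2 = 2*xc*x + 2*yc*y + c, c = r^2 - xc^2 - yc^2
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc**2 + yc**2)
    return (xc, yc), r
```

Three or more tip observations suffice; the residual radius r directly gives the tip-to-axis offset to be zeroed by the cy/cx calibration stages.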
This configuration maximizes the visual resolvability [16] and guarantees that for any given position of the gripper there are always two non-occluded views. Each of the cameras is mounted on a motorized linear stage (fz axis) and a pan/tilt unit (α and β axes) and can be rotated as a whole around a ring

Fig. 6. Loading position (top) and working position (bottom) of the microassembly system. Effective driving time is in the order of a few seconds.

TABLE I
SYSTEM PERFORMANCE

Axis    | Range        | Type    | Resolution | Speed
xyz     | ± 12.5 mm    | Stepper | 0.04 µm    | 2.9 mm/s
Θ       | full 360◦    | DC      | 0.010◦     | 80◦/s
η       | −45◦ – +60◦  | DC      | 7.8E-5◦    | 47◦/s
ζ       | ± 90◦        | DC      | 0.002◦     | 540◦/s
cx      | 25 mm        | manual  | 10 µm      | –
cy      | 656 mm       | DC      | 1.66 µm    | 231 mm/s
dx, dy  | 6.3 mm       | manual  | 0.8 µm     | –
ex, ey  | 6.3 mm       | manual  | 0.8 µm     | –
fz      | 30 mm        | DC      | 0.1 µm     | 0.2 mm/s
ω       | full 360◦    | manual  | –          | –
α       | full 360◦    | manual  | 0.02◦      | –
β       | −45◦ – +10◦  | manual  | 0.02◦      | –

(ω axis). The latter is centered on the illumination dome, and therefore the camera units can be easily aligned and adjusted. Depending on the magnification requirements, the microscopes can easily be replaced by a different model. The little inset in Figure 3 shows the kinematic setup of the camera units. The illumination system (Figure 7) has been completely redesigned and consists of three modules. The central part is the aluminium dome with a coated inner side that provides diffuse ambient light on the working platform. The section plane of the dome contains a circuit board with 12 high-power LEDs (Luxeon Emitter III, 3 W) facing upwards into the dome and thus creating a strong diffuse

TABLE II
EDMUND VZM-300i MICROSCOPE CHARACTERISTICS

M     | FOV    | NA    | DOF     | d          | V
0.75X | 8.0 mm | 0.016 | 3.82 mm | 3.1/1.3 mm | 209.0 mm³
1.0X  | 7.3 mm | 0.030 | 1.28 mm | 2.1 mm     | 18.3 mm³
2.0X  | 4.7 mm | 0.040 | 0.59 mm | 0.9 mm     | 1.7 mm³
3.0X  | 2.0 mm | 0.045 | 0.42 mm | 0.6 mm     | 0.6 mm³

Fig. 7. Illumination system consisting of an indirect-, a ring- and a spotlight which can all be controlled individually according to the requirements defined by the assembly process.
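The round-robin spotlight cycling used for shadow capture (Section III-B) can be sketched as a tiny scheduler; the helper below is a hypothetical illustration that activates one of the four dome spotlights per camera frame, not the system's actual LED driver:

```python
from itertools import cycle

def spotlight_cycle(n_lights=4):
    """Round-robin spotlight scheduler: for each camera frame, yields
    (active_light, lights_state) where exactly one spotlight is on.
    One full cycle of n_lights frames gives one differently-shadowed
    image per light source."""
    for active in cycle(range(n_lights)):
        state = [i == active for i in range(n_lights)]
        yield active, state
```

Taking four consecutive frames from this generator yields the four single-light images combined into the depth image described in the text.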

Fig. 8. Viewing volumes (top) and their maximum cross-sectional areas (bottom, with indicated side lengths in mm) for different levels of magnification of an Edmund VZM-300i microscope.

Fig. 9. The hardware is controlled by the robot control unit MARVIN (center), where an instance of the Player server is running. The controller can be accessed over Ethernet from any workstation. Video signals are transferred directly to the workstation over Firewire.
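The viewing-volume intersection underlying Table II was computed in CAD, but it can also be approximated numerically. A rough Monte Carlo sketch, under our own simplifying assumptions that each camera's volume is a disk-shaped slab (radius FOV/2, thickness DOF) centered on the shared focal point, with the three optical axes at 45° elevation and 120° apart:

```python
import numpy as np

def view_dirs():
    """Optical axes of the three cameras: 120 deg apart in azimuth,
    looking down at 45 deg elevation toward a common focal point."""
    out = []
    for az in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):
        d = np.array([np.cos(az) * np.cos(np.pi / 4),
                      np.sin(az) * np.cos(np.pi / 4),
                      -np.sin(np.pi / 4)])
        out.append(d / np.linalg.norm(d))
    return out

def intersection_volume(fov, dof, n=200_000, seed=0):
    """Monte Carlo estimate of the common viewing volume (same units as
    fov/dof, cubed). A point is inside one camera's volume if it lies
    within dof/2 along and fov/2 across that camera's optical axis."""
    rng = np.random.default_rng(seed)
    half = max(fov, dof) / 2              # bounding cube around focal point
    pts = rng.uniform(-half, half, size=(n, 3))
    inside = np.ones(n, dtype=bool)
    for d in view_dirs():
        axial = pts @ d                   # signed distance along optical axis
        lateral = np.linalg.norm(pts - np.outer(axial, d), axis=1)
        inside &= (np.abs(axial) <= dof / 2) & (lateral <= fov / 2)
    return inside.mean() * (2 * half) ** 3
```

The slab model is far cruder than the real microscope frusta, so the numbers will not reproduce Table II exactly; it only illustrates how the common volume shrinks as DOF decreases with magnification.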

C. Software Components

illumination. The dome itself is equipped with a spotlight (Luxeon Star, 1 W) facing perpendicularly down to the center of the hemisphere, and 4 spotlights equally spaced around the center axis at an angle of 14◦ to the section plane. All LEDs are triggered in sync with the camera shutter (∼30 Hz), which allows operating them at higher currents. They can also be controlled and dimmed individually, a convenient feature when working with microparts of varying reflectivity. The four spotlights arranged around the center can be turned on and off one after the other in a circular manner such that only one light source is on at any instant. Capturing one image per light cycle yields four images, each showing different shadows of the parts. The combination of those images results in a depth image [17] that provides a much more intuitive view of the scene and is also used as a basis for image processing tasks. The illumination dome also contains holes for 6 UV LEDs that are used for curing UV-activated glue. This setup provides extremely bright illumination and allows closing the apertures of the lenses to a minimum, which dramatically increases the depth of field. Table II shows the field of view (FOV), numerical aperture (NA), depth of focus (DOF), side length (d) of the viewing volume and the viewing volume (V) itself for different magnifications (M) of one and a combination of three VZM-300i microscopes, respectively. Side length and volume have been calculated in CAD by intersecting the three viewing volumes of the cameras. These values are important for the specification of the workspace and an automatic adaptation of the focal plane. Please also refer to Figure 8 for a visualization of the viewing volumes.
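The shadow-based depth image cited above ([17]) can be sketched as follows. This is a simplified gradient-based approximation (the original method traverses ratio images toward each light's epipole), assuming grayscale images as NumPy arrays:

```python
import numpy as np

def depth_edges(images, eps=1e-6, grad_thresh=0.1):
    """Approximate a depth-edge map from four images of a static scene,
    each lit by a different spotlight (in the spirit of [17]).
    `images`: list of 2D grayscale arrays of equal shape."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    imax = stack.max(axis=0)           # per-pixel max: shadow-free composite
    edges = np.zeros(imax.shape, dtype=bool)
    for img in stack:
        ratio = img / (imax + eps)     # shadowed pixels have ratio << 1
        gy, gx = np.gradient(ratio)    # sharp drops mark shadow boundaries
        edges |= np.hypot(gx, gy) > grad_thresh
    return edges
```

Because each spotlight casts shadows on a different side of a part, the union of the per-light shadow boundaries traces the parts' depth discontinuities.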

The present system consists of 9 DC motors, 3 stepper motors, 3 pneumatic valves, an illumination system with 17 standard and 6 UV LEDs, and three IEEE 1394 cameras. All of these components are controlled by the robot control unit MARVIN, developed at this institute. MARVIN consists of a Pentium-M class computer with a CompactPCI bus built into a standard 19" rack. The modular setup offers great flexibility for future upgrades and/or replacements of electronic components. Debian Linux is used as the operating system and Player [18] as the robot controller. This allows the unit to be controlled remotely over the network by a standard workstation that also runs Debian Linux and Player. Video data from the three IEEE 1394 cameras is transferred directly to the workstation, so that only little information for controlling the system has to be sent over the network (see Figure 9).

Since CAD models are readily available from the MEMS design process, we use CAD-model based localization (initial pose estimation) and tracking in order to retrieve the 3D position and orientation of the microparts. This two-step process can be made more robust by introducing an edge quality metric that assists in identifying false poses during the tracking process. Initial pose estimation is based on an algorithm developed by [19], where 2D image and 3D model features are mathematically connected by constraint equations. The tracking part is based on work done by [1] and operates at a frame rate of 30 Hz. We have also tried to combine our model-based localizer with a spring-mass-damper based snake approach [20]. However, the additional computational cost outweighs the small gain in precision.

Since a complex MEMS device, such as the microrobot introduced at the beginning of this paper, can contain a large number of individual components, each having its own distinct properties, a common model database is essential. Localization and tracking both rely on an accurate representation of all objects involved in the assembly process. This also applies to additional tasks, such as collision avoidance, advanced physical modeling and 3D visualization. Modern scene graph packages, such as OpenSG and OpenSceneGraph, cover all of these aspects and are constantly being extended. We use a simple scene graph model that, on one hand, includes the base and top unit of the microassembly system and, on the other, the components to be assembled. Each component is embedded in a tree-like structure with (1 − n) parents and (0 − m) children. Dependencies constantly change as parts are moved from one place to another, thus keeping track of the assembly process as a whole.

Fig. 10. Point-sphere model of the micromechanics and the forces involved:

Fe = q²/(4πεd²),   Ft = 4πdγ,   Fv = hd/(8πz²)

(Fe: electrostatic, Ft: surface tension, Fv: van der Waals; q: charge, ε: permittivity, d: diameter, γ: surface tension, h: Lifshitz-van der Waals constant, z: atomic separation between plane and sphere).

IV. MICROASSEMBLY CONCEPTS

The successful handling of parts at the microscale depends on a large number of factors. The five most important are task modeling, microgripping, force sensing, vision sensing and micropositioning. Some of them are explained in more detail in the following sections.

A. Task modeling

Task modeling deals with the mechanics and physics of micromanipulation, and a sphere-plane model is often used as an approximation of the forces involved (see Figure 10(a)). Electrostatic, surface tension and van der Waals forces are the dominant effects at this scale, whereas gravity can be neglected as part dimensions fall below ∼100 µm.

B. Microgripping

Understanding the details of micromechanics is very important for the gripping and handling processes. Microgrippers can be divided into the following four groups (some examples are depicted in Figure 11):
1) Mechanical microgrippers with internal actuation: This category contains grippers whose actuation is integrated into the actual mechanical structure. They are generally referred to as MEMS microgrippers and use a wide range of actuation principles based on thermal expansion, electrostatic forces, magnetostriction, bimorph materials or shape memory alloys. They have actuation displacements on the order of 10 – 100 µm and gripping forces up to 50 µN.

Fig. 11. Strategies for gripping microparts based on the following principles (from top left to bottom right): (a) roughness change/Van der Waals, (b) surface tension, (c) vacuum, (d) electrostatic, (e) gravity and (f) impulsive manipulation.

2) Mechanical microgrippers with external actuation: Externally actuated grippers, or macroscale microgrippers, consist of a gripping mechanism and a separate actuator connected by some type of transmission mechanics. Structures with flexure hinges, manufactured with conventional techniques and/or EDM, are commonly used. Actuators range from miniature DC/stepper motors to solenoid, piezoelectric or SMA actuators. These grippers have large tip displacements of up to 2 mm and a wide range of gripping forces.
3) Non-mechanical microgrippers: This class contains grippers which completely lack moving parts or mechanisms. The most common type is the vacuum gripper. The use of surface tension with a low viscosity fluid can pick up parts, which are then released by evaporating the fluid. Yet another model uses a thin film of water that is frozen between the gripper and the object. Non-mechanical microgrippers are simple, and the actuation device can usually be placed far away from the tip. However, the forces are usually not high enough to constrain the part in all directions, so a combination with a mechanical type of gripper is often used.
4) Passive microgrippers: A passive design refers to the fact that there is no active control of the gripper tips for opening or closing. Instead, gripper and parts are specially designed counterparts with deformable structures such as snap-lock mechanisms. The advantage of those designs is the lack of actuators, which are in most cases complex and fragile components. However, the disadvantage is that they require every single object to have one or multiple features matching the counterpart on the gripper. This drastically reduces the flexibility of the whole system.

C. Force and Vision Sensing

The handling of fragile sub-miniature parts often demands precise handling capabilities.
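The sphere-plane scaling argument from Section IV-A (adhesion dominates gravity below roughly 100 µm) can be checked numerically. The sketch below evaluates the Figure 10 force model; all material constants and the assumed acquired charge are our illustrative values, not the paper's:

```python
import numpy as np

# Illustrative constants (assumed values for this sketch)
EPS0 = 8.854e-12    # permittivity of free space, F/m
GAMMA = 72e-3       # surface tension of water, N/m
H_LVW = 1e-19       # Lifshitz-van der Waals constant, J (typical order)
Z0 = 4e-10          # atomic separation between plane and sphere, m
RHO_NI = 8900.0     # density of nickel, kg/m^3

def forces(d):
    """Adhesion forces of the Figure 10 point-sphere model vs. gravity
    for a part of diameter d [m]. The acquired charge q is an assumed
    illustrative value proportional to surface area."""
    q = 1e-5 * np.pi * d**2                       # assumed charge, C
    f_el = q**2 / (4 * np.pi * EPS0 * d**2)       # electrostatic
    f_st = 4 * np.pi * d * GAMMA                  # surface tension
    f_vdw = H_LVW * d / (8 * np.pi * Z0**2)       # van der Waals
    f_grav = RHO_NI * (np.pi / 6) * d**3 * 9.81   # weight of a Ni sphere
    return f_el, f_st, f_vdw, f_grav
```

Since the adhesion terms scale with d or d², while weight scales with d³, gravity loses relevance as parts shrink, which is why release strategies such as the vacuum workbench of Figure 4 are needed.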
For example, when working with biological cells, the system has to ensure that the gripping force is kept above the minimum required for holding and below the maximum tolerated by the cell itself. Additionally, force sensing can become crucial for automated assembly if the friction forces required for combining parts have to be estimated. While macrorobotic assembly is dominated by playing back recorded motions using precise robots, the vastly different physics in the microworld demand closed-loop control through computer vision feedback. The limited focal depth of microscopes can be overcome by incorporating a multi-microscope configuration and CAD model based tracking techniques. Those models are readily available from the MEMS fabrication process. Force and vision are the main sensing modalities for microassembly and the basis for a fully automated system. However, they must offer a certain robustness to external influences. This applies particularly to the vision part, where the loss of track can be fatal if it is not detected.

Fig. 12. Sequence of a manual assembly process where component B of the microrobot is inserted into component A (see Figure 1) using a mechanical microgripper (Figure 5, right). Component A is grasped (a) and inserted into a hole of the workbench (b), and UV activated glue is applied (c). Part B is picked up (d), rotated (e) and inserted into part A (f). The injection needle for the glue can be seen in (g) and the final assembled microrobot in (h).

V. CONCLUSION

We presented the conceptual design of a 6 DOF microassembly system in its second generation. The focus of the development was on the flexibility of the whole system: it is desirable that grippers, lenses, etc. can be exchanged and adapted to different situations. The unit must also provide an intuitive interface so that no special training is required for operation. Current work is focused on the final assembly of the system, followed by an intensive test phase. Once that has been accomplished, the tools are ready for a further step towards the automated assembly of bio-microrobots.

REFERENCES

[1] K. B. Yeşin, "CAD model based tracking for visually guided microassembly," Ph.D. thesis, University of Minnesota, Minneapolis, Minnesota, USA, June 2003.
[2] K. F. Böhringer, R. S. Fearing, and K. Y. Goldberg, The Handbook of Industrial Robotics (chapter Microassembly), S. Y. Nof, Ed. Wiley & Sons, 1999.
[3] M. B. Cohn, K. F. Böhringer, J. M. Noworolski, A. Singh, C. G. Keller, K. A. Goldberg, and R. T. Howe, "Microassembly technologies for MEMS," in Proceedings of the SPIE Conference on Micromachined Devices and Components IV, A. B. Frazier and C. H. Ahn, Eds., vol. 3515, no. 1. SPIE, 1998, pp. 2–16.
[4] B. Nelson, Y. Zhou, and B. Vikramaditya, "Sensor-based microassembly of hybrid MEMS devices," IEEE Control Systems Magazine, vol. 18, no. 6, pp. 35–45, 1998.
[5] J. T. Feddema and R. W. Simon, "Visual servoing and CAD-driven microassembly," in IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 1998, pp. 1212–1219.
[6] A. Ferreira, C. Cassier, and S. Hirai, "Automatic microassembly system assisted by vision servoing and virtual reality," IEEE/ASME Transactions on Mechatronics, vol. 9, no. 2, pp. 321–333, 2004.
[7] N. Dechev, W. L. Cleghorn, and J. K. Mills, "Construction of 3D MEMS microstructures using robotic microassembly," in Sensing and Manipulation of Micro and Nano Entities: Science, Engineering, and Applications, Workshop, International Conference on Robots and Intelligent Systems (IEEE/RSJ IROS 2003), 2003.
[8] N. Dechev, J. K. Mills, and W. L. Cleghorn, "Mechanical fastener designs for use in the microassembly of 3D microstructures," in Proceedings of the ASME International Mechanical Engineering Congress (IMECE), 2004.
[9] K. Tsui, A. A. Geisberger, M. Ellis, and G. D. Skidmore, "Micromachined end-effector and techniques for directed MEMS assembly," Journal of Micromechanics and Microengineering, vol. 14, pp. 542–549, Apr. 2004.
[10] G. Yang, J. Gaines, and B. Nelson, "A supervisory wafer-level 3D microassembly system for hybrid MEMS fabrication," Journal of Intelligent and Robotic Systems, vol. 37, pp. 43–68, 2003.
[11] B. Vikramaditya, B. Nelson, G. Yang, and E. Enikov, "Microassembly of hybrid magnetic MEMS," Journal of Micromechatronics, vol. 1, no. 2, pp. 99–116, 2001.
[12] Q. Zhou, A. Aurelian, B. Chang, C. del Corral, and H. N. Koivo, "Microassembly system with controlled environment," Journal of Micromechatronics, vol. 2, pp. 227–248, 2002.
[13] B. E. Kratochvil, K. B. Yeşin, V. Hess, and B. J. Nelson, "Design of a visually guided 6 DOF micromanipulator system for 3D assembly of hybrid MEMS," in Proceedings of the 4th International Workshop on Microfactories, October 2004.
[14] F. Beyeler, D. J. Bell, B. J. Nelson, et al., "Design of a microgripper and an ultrasonic manipulator for handling micron sized objects," in International Conference on Intelligent Robots and Systems (IROS), 2006.
[15] K. Koyano and T. Sato, "Micro object handling system with concentrated visual fields and new handling skills," in IEEE International Conference on Robotics and Automation (ICRA), April 1996.
[16] B. J. Nelson and P. K. Khosla, "Vision resolvability for visually servoed manipulation," Journal of Robotic Systems, vol. 13, no. 2, pp. 75–93, 1996.
[17] R. Raskar, K.-H. Tan, R. Feris, et al., "Non-photorealistic camera: Depth edge detection and stylized rendering using multi-flash imaging," ACM Transactions on Graphics, vol. 23, no. 3, pp. 679–688, 2004.
[18] R. T. Vaughan, R. Gerkey, and A. Howard, "On device abstractions for portable, reusable robot code," in International Conference on Intelligent Robots and Systems (IROS), Las Vegas, October 2003, pp. 2121–2427.
[19] B. Rosenhahn, "Pose estimation revisited," Ph.D. thesis, Christian-Albrechts-Universität, Kiel, Germany, Sept. 2003.
[20] S. A. Stoeter, M. Probst, and M. A. Iranzo, "Improving tracking precision for microassembly," in IEEE International Conference on Robotics and Automation (ICRA), May 2006.
