Flexible Microrobots for Micro Assembly Tasks

H. Woern, J. Seyfried, St. Fahlbusch, A. Buerkle and F. Schmoeckel
Institute for Process Control and Robotics, Universität Karlsruhe (TH)
Kaiserstr. 12 (40.28), D-76128 Karlsruhe, Germany
e-mail: [email protected]; phone: +49 721 608 4006, fax: +49 721 608 7141

ABSTRACT
A wide range of microcomponents can today be produced using various microfabrication techniques. The assembly of complex microsystems consisting of several single components (i.e. hybrid microsystems) is, however, a difficult task that is seen as a real challenge for the robotics research community. It is necessary to conceive flexible, highly precise and fast microassembly methods. In this paper, the development of a microrobot-based microassembly station is presented. Mobile piezoelectric microrobots with dimensions of a few cm³ and with at least 5 DOF can perform various manipulations either under a light microscope or inside the vacuum chamber of a scanning electron microscope. The components of the station developed and its control system are described. The latter comprises a vision-based sensor system for automatic robot control and a user interface for semi-automated control and teleoperation. First results of SEM-based micro assembly, the handling of biological cells and the integration of force micro-sensors into our microrobots are presented as well.
Keywords: Microrobots, Micro Manipulation, Micro Assembly, Vision Sensors, Force Sensors, Object Recognition, SEM

Figure 1: MINIMAN III with exchangeable manipulating module

INTRODUCTION
The assembly of microsystems, the handling of biological cells, and the handling of specimens for scanning electron microscopy (SEM) all require micro-manipulation facilities. The existing micro-manipulation and micro-assembly systems are rather large and expensive, are usually tailored to a specific task and, if teleoperated, depend on the manual skill of the operator. Microrobots like the ones presented here are likely to help solve this problem [1-3]. In order to perform manipulations with objects in the milli- and micrometer range, highly precise robots are necessary which can position objects with an accuracy ranging from a few µm down to some nm. Depending on the application, the robots should be able to operate either automatically or in teleoperation. At the same time, a reasonably quick macroscopic movement is required in all applications, both for automatic and teleoperated operation. For these applications, we have developed a flexible micro-manipulation station (FMMS) [4]. Within this station, it is possible to carry out an assembly process under a light microscope or inside the vacuum chamber of a scanning electron microscope (SEM). For this station, various piezoelectric microrobots have been developed, which are able to perform transport tasks as well as highly precise manipulation and positioning of micro-objects. Automatically controlled with the help of visual and force sensors, these robots may free humans from the tedious task of having to manipulate very small objects directly. An intelligent vision system including a microscope adapter, a CCD-camera and a PC is used for object recognition, image processing and feedback position control of the robot in real time. Other control loops, using the information from tactile and force microsensors on the robot effectors, are currently being integrated into the system.
DESIGN OF THE MICROROBOTS
Each of the robots consists of a mobile platform driven by three tube-shaped piezoceramic legs. These legs can be bent in any direction by applying different voltages to their electrodes. The walking process is based on inertial forces; a stick-slip actuation principle [1] has been implemented [4-6]. A small movement is achieved by bending all three legs slowly in the same desired direction. Then, the polarity of the voltage is abruptly changed, making the legs bend in the opposite direction; because of inertia, the legs slip on the ground without moving the platform. Finally, the voltage is slowly decreased to relax the legs, causing another small movement. By repeating these three steps, the robot can move over long distances. The maximum stroke of a leg at an applied voltage of ±150 V is about ±3 µm. At a step frequency of up to 5 kHz, a maximum speed of 30 mm/s can be obtained. The motion resolution of the platform is 20 nm.
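The drive signal behind this gait is essentially a slow-fast voltage cycle on the leg electrodes. The following Python sketch only illustrates the three steps described above; the ±150 V amplitude and the step frequency are taken from the text, while the waveform shape, sampling and function names are our own assumptions rather than the actual controller implementation.

import numpy as np

def stick_slip_cycle(v_max=150.0, step_freq=5000.0, samples=90):
    """One drive cycle for a piezo leg, following the three steps in the text.

    1. Slow bend:   voltage ramps 0 -> +v_max, feet stick, platform moves.
    2. Abrupt flip: voltage jumps to -v_max, legs snap over, feet slip.
    3. Slow relax:  voltage ramps -v_max -> 0, feet stick, platform moves again.
    """
    third = samples // 3
    bend = np.linspace(0.0, v_max, third)        # step 1 (slow)
    flip = np.full(1, -v_max)                    # step 2 (abrupt polarity change)
    relax = np.linspace(-v_max, 0.0, third)      # step 3 (slow)
    voltage = np.concatenate([bend, flip, relax])
    time = np.linspace(0.0, 1.0 / step_freq, voltage.size)
    return time, voltage

# Repeating this cycle at up to 5 kHz yields steps of a few micrometers
# and the reported platform speeds of up to about 30 mm/s.
t, v = stick_slip_cycle()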
Each microrobot is equipped with a flexible manipulator unit to allow the manipulation of differently shaped micro-objects with several degrees of freedom. The prototype MINIMAN III (Fig. 1) is equipped with a piezoelectrically driven gripper with three rotational DOF. A steel ball serving as interface permits easy tool exchange.
As a further integration measure, long flexible printed circuit boards provide the up to 50 connections to the control system. However, even the smallest available plugs still consume a lot of space on the robot platforms.

Figure 2: RobotMan with integrated CCD-camera (labels: printed circuit board, manipulating unit (simplified), notch for connector)
Prototype RobotMan (Figure 2) uses a two-fingered gripper with two translatory degrees of freedom. This gripper is driven by three DC micro-motors. Two Faulhaber micromotors with integrated planetary gears actuate both exchangeable end-effectors of the gripper (Figure 12, left). The motors’ rotation is transformed into a linear movement by two spindle drives.
Figure 3: Horizontal positioning unit with micro-gripper
By using an RMB brushless DC motor with a planetary gearhead, the gripper can be horizontally positioned (Figure 3). The upward movement of the positioning unit is enabled by a kind of cable winch, its downward movement by a spring construction. A high-precision parallel guidance with ball bearings provides the required positioning accuracy. With the latest prototype, MINIMAN IV, the microrobot size will be further reduced. It consists mainly of a small printed circuit board (Ø50 mm) integrating all piezo legs, LEDs and the connectors to the exchangeable manipulating unit and the control system. Figure 4 shows the design of the robot with one of the planned manipulating units.
Figure 4: Design of the latest prototype MINIMAN IV (piezo legs labeled)
DESIGN OF THE STATION
Figure 5 shows an overview of the microrobot-based microassembly station that has been implemented. The robots work on either an XY-stage of a light-optical microscope or in an SEM. The microscope and the microrobots are controlled by a lower-level control computer which is equipped with several interface cards, power electronics and AD converters. A higher-level computer hands down commands to the control computer. The spectrum of tasks in microassembly ranges from simple preparatory operations like applying adhesives, drawing adjustment marks, cleaning objects, etc. to the performance of the final assembly of the microsystem, including grasping, transportation, positioning and fixing of parts. A well-conceived FMMS must be able to accomplish these steps automatically, which requires an integrated planning and control system. To perform an automated assembly, the given assembly problem has to be specified to the planning system using a CAD model. Based on the geometric assembly model and the information on the necessary connections between parts, the microassembly station has to perform task planning, generate a motion sequence and control the execution of this sequence. An intelligent assembly planning system is located at the uppermost control level. Assembly planning for micro assembly has to take into account problems which are specific to the micro world. This includes the fact that vision is mandatory for all assembly steps, but difficult due to the small dimensions (see next section). The optimization of the assembly sequence has to take disturbing adhesive effects into account by avoiding very small subassemblies.
Figure 5: Components of the micro-manipulation desktop station
The microassembly planning system of the FMMS, which is currently being developed [7-9], consists of three main modules: system interface, assembly task planner and assembly execution planner. The modules are supported by a knowledge base which includes knowledge on the task specification, an assembly model of the microsystem, the specification of existing microrobots and their tools, a world model (micro and macro) and the current station state obtained by the sensor system.
Figure 6: Planning and control system architecture (components: user interface, world model, product design, assembly planning, robot control language interpreter, supervisor with global execution planning, knowledge base, CORBA ORB, robot and camera objects each with kernel and real-time kernel layers, and the microassembly process comprising robots, assembly aids, sensors and micro objects)
The user interface module allows the user to define a multi-robot environment and helps in gathering the domain knowledge and specifying the initial and desired final assembly states in terms of the objects, microrobots, tools and their relationships. The planning process is based on the assembly model of the microsystem to be built and is performed in three steps. First, an assembly graph is generated, containing all feasible assembly sequences. After that, the optimal sequence is selected, i.e. the one that fulfills the given optimization criteria; the geometry of the working area and the available resources such as microrobots and their tools are taken into account during this step. Finally, the action sequence is decomposed into sub-plans for the station's microrobots according to their operational capabilities. The execution planning done by the microrobots' sub-planners is supervised to ensure the consistency of the plans. The sub-plans are decomposed into single operations by the execution planner, providing each microrobot with a series of instructions in a special object-oriented robot control language (RCL) that is executed by the interpreter. To account for the parallelism inherent in the multi-robot assembly system, the RCL interpreter performs a look-ahead execution of the robot programs: it analyzes interdependencies between variables and objects and executes as many statements out of order as possible while some or all of the robot objects are busy.
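The RCL itself is not reproduced here, but the dependency test behind such a look-ahead execution can be sketched. In the following Python illustration the statement representation, the names and the simple read/write conflict rule are our own assumptions, not the actual interpreter:

from dataclasses import dataclass, field

@dataclass
class Statement:
    robot: str                                 # robot object executing the statement
    reads: set = field(default_factory=set)    # variables/objects it reads
    writes: set = field(default_factory=set)   # variables/objects it writes

def conflicts(earlier, later):
    """True if 'later' depends on a still-pending earlier statement."""
    return bool(earlier.writes & (later.reads | later.writes)
                or later.writes & earlier.reads)

def lookahead_dispatch(pending, busy_robots):
    """Issue every pending statement whose robot is idle and that does not
    conflict with any earlier, not-yet-finished statement."""
    issued = []
    for i, stmt in enumerate(pending):
        if stmt.robot in busy_robots:
            continue
        if any(conflicts(earlier, stmt) for earlier in pending[:i]):
            continue
        issued.append(stmt)
        busy_robots.add(stmt.robot)
    return issued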
At all stages of the manipulation process, the user is able to intervene through a convenient user interface and perform manual teleoperations, if necessary. The underlying object-oriented control system, which employs CORBA for inter-object and remote communication, was presented in [10]. To make the control system architecture as scalable and flexible as possible, the physical objects in the station (cf. Fig. 1 and 6) were mapped one-to-one onto software objects. These software objects can either run on a single server or be distributed over several computer systems, depending on the number of objects and the resulting computational load. An analysis of the necessary communication events has shown that two classes of communication can be defined, depending on the current state of the control system and the objects: high-level communication between objects and low-level communication for closed-loop control algorithms. Since no suitable implementation of real-time CORBA is available, the communication protocol bypasses CORBA when a close link between two or more control objects is necessary (e.g. a robot object positions itself using vision data from a camera object).
VISION-BASED SENSOR SYSTEM
The microrobots' motion control approach was introduced in [4-5]. The aim is to control the robot movements in such a way that, first, the tip of the manipulator is moved from the initial (actual) point to the desired end point and, second, the defined orientation of the robot in the final state is achieved. One can distinguish between coarse motion (i.e. navigation of the robot over a long distance) and fine motion (i.e. manipulation of parts under a microscope). Reflecting this fact, the visual sensor system of the FMMS using a light microscope consists of two parts, a global sensor and a local sensor, as in related work [12-13]. The global one supervises the microrobots' work space and detects the position and orientation of the robot with a deviation of less than 0.5 mm at the tool center point. This is sufficient to navigate the robot into the field of view of the microscope. The actual manipulations are monitored by the local sensor system (i.e. through the microscope), the accuracy of which lies in the range of a few µm.

Global Sensor System
Controlling the robot requires knowledge of its current position and orientation, which is obtained in the FMMS from a CCD-camera. To easily locate the robot in the camera image, three LEDs are mounted on top of the robot platform, forming an isosceles triangle (Figure 7). Knowing the 2D pixel coordinates of all LEDs, the position and orientation of the microrobot can be calculated. To locate the three LEDs in the image provided by the global camera, a region-based segmentation algorithm known as pyramid linking and described in [14-15] has been implemented [16]. One advantage of this method is that it provides good results even with noisy images. Furthermore, we do not need to know the exact number of segments in the image in advance. This fact is of great importance in the case of additional spots caused by light reflection. Still, this method is not quite suitable for real-time motion control of the robot, as turning the LEDs on and off and segmenting the difference image takes too long to allow online tracking. Therefore, it is only used to detect the initial position. While the robot is moving, the LEDs stay on, and their position in the captured image is searched for in the neighborhood of their previous location.

Figure 7: The robot MINIMAN-II with three LEDs
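Recovering the platform pose from the three detected LED centres is elementary geometry. The sketch below is a hedged illustration only: the convention that the apex LED is the one farthest from the midpoint of the other two (reasonable for an elongated isosceles arrangement) and the coordinate conventions are our own assumptions, not the procedure of [16].

import math

def platform_pose(leds):
    """Estimate 2D position and heading from three LED pixel coordinates.

    leds: list of three (x, y) tuples. The apex of the isosceles triangle is
    taken as the LED farthest from the midpoint of the other two; the heading
    points from that midpoint towards the apex.
    """
    cx = sum(x for x, _ in leds) / 3.0
    cy = sum(y for _, y in leds) / 3.0

    def apex_score(i):
        x0, y0 = leds[i]
        others = [leds[j] for j in range(3) if j != i]
        mx = (others[0][0] + others[1][0]) / 2.0
        my = (others[0][1] + others[1][1]) / 2.0
        return (x0 - mx) ** 2 + (y0 - my) ** 2

    apex = max(range(3), key=apex_score)
    others = [leds[j] for j in range(3) if j != apex]
    mx = (others[0][0] + others[1][0]) / 2.0
    my = (others[0][1] + others[1][1]) / 2.0
    heading = math.atan2(leds[apex][1] - my, leds[apex][0] - mx)
    return (cx, cy), heading

# Example (hypothetical pixel coordinates):
# pose, angle = platform_pose([(120, 80), (140, 80), (130, 60)])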
Local sensor system
Once the robot has reached the microscope's objective, the local sensor system takes over control. This sensor is formed by the microscope and a top-mounted CCD camera. The combination of microscope and camera leads to the problems a microrobotics researcher typically has to cope with: a small field of view and a small depth of focus. A possible approach to increase the depth of focus is multi-focusing: by capturing several images at consecutive focus levels and combining the sharp areas of each image, a focused image can be generated [17]. The focused image can then be used as input for an object recognition algorithm. To permit automation of microassembly operations, a fast and reliable vision system is necessary. Furthermore, the wide range of applications places great demands on such a system. In order to meet these demands, the vision system has a modular structure. It consists of several dedicated modules which communicate via shared memory. They comprise methods for calibration, object recognition, position detection, object tracking, measuring and depth estimation. Since SEM images differ from light microscope images in several respects, such as shading and depth of field, they cannot be treated by the same object recognition algorithms. Furthermore, specific applications like the handling of cells require special algorithms for non-rigid objects. The central part of the vision system is an object database containing feature representations of all known objects as well as information about their current status, i.e.
visibility and position. Messages to activate particular vision modules are sent to the vision system. Results such as the location of an object or processed images are stored in shared memory and can be accessed by either the control system or any of the other vision modules. The database serves as the interface between the control system and the recognition system. The modular concept of the vision system allows the modules to be spread over several computers; the database does not necessarily have to be located on the same computer as the recognition system or the control system.

Object recognition module
Currently, the object recognition module only considers planar, two-dimensional objects. Three-dimensional object recognition is too intensive a task to perform in real time. For most applications this is not a real constraint, since most objects can be identified by their two-dimensional silhouette. The implemented algorithm recognises rigid objects like a gripper and micro parts by the curvature of their contour. Based on the generation of curvature zero crossing points (CZC) [18] introduced by Mokhtarian, a fast and reliable recognition and position detection method has been developed. Figure 8 outlines the procedure. First, edges are extracted and traced to form continuous curves. These curves are smoothed and parametrised, and their curvature is calculated using a derivative filter. CZC segments consisting of two adjacent CZC points are generated and used as features to represent the object(s). Comparing the generated features with those in the object database results in a large number of hypotheses. A penalty function evaluates every hypothesis, resulting in a list of concrete objects. Unlike other feature representations, such as Fourier descriptors, which describe the global shape of an object, CZC points describe local features. This is an essential requirement for the handling of occlusion and partial visibility. Nevertheless, recognition on the basis of CZC points has two drawbacks. First, it is not scale invariant, i.e. different magnifications require different object representations in the database. This can be avoided by calibrating the microscope imaging system and scaling the object representation appropriately. A more severe problem is the inability to recognise convex objects, since they do not have any CZC points. A special representation is needed for this kind of object. The computational costs depend highly on the number of objects in the database. In the example shown in Figure 8, gripper and gear wheel could be recognised and located in 3.4 s on a Linux PC (Pentium II at 400 MHz). Once the objects under the microscope are identified, the vision system switches into tracking mode. Tracking is a much faster way to determine the position of an object, as it applies a priori information such as the kind of object and its previous position.
When tracking fails, e.g. the gripper moves out of the field of view, the system falls back into recognition mode.

Figure 8: Object recognition with curvature zero crossing points (CZC): a) actual scene, b) boundary segmentation, c) CZC segments, d) generated hypothesis superimposed with the CZC segments actually used for identifying the corresponding object.
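To illustrate the CZC idea, the curvature of a smoothed, parametrised contour can be computed with derivative filters and its sign changes collected as feature points; adjacent pairs of such points then form the CZC segments. The sketch below is an illustration under our own assumptions (contour representation, smoothing scale), not the implemented recogniser.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def czc_points(contour, sigma=3.0):
    """Return indices of curvature zero crossings along a closed contour.

    contour: (N, 2) numpy array of boundary points, ordered along the curve.
    """
    x = gaussian_filter1d(contour[:, 0].astype(float), sigma, mode="wrap")
    y = gaussian_filter1d(contour[:, 1].astype(float), sigma, mode="wrap")
    # first and second derivatives along the curve parameter
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    # a zero crossing lies between samples where the curvature changes sign
    return np.where(np.diff(np.sign(curvature)) != 0)[0]

# Segments between two adjacent returned indices would serve as the local
# features that are matched against the object database.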
Depth recovery module
The light microscope only provides a two-dimensional projection of the scene. Though there are possibilities to directly extract depth information from standard microscope images, these methods appear to be unsuitable for monitoring assembly operations. They either require a series of images captured at different focus levels (auto-focusing), which makes them inapplicable in real time, or they measure the fuzziness of blurred structures, typically edges [19], which requires objects with distinct edge contours.
Figure 9: A laser module for depth measuring has been integrated into the FMMS
Within the FMMS, a laser triangulation method is employed to acquire depth information. A focusable class 3B diode laser (14 mW, λ = 635 nm) was chosen as projection device. A cylinder lens spreads the laser beam to a sheet of light. The laser module is mounted on a micro positioning table driven by a piezo-based linear motor, which is affixed to the microscope’s specimen stage, Fig. 9. The measuring principle is based on a method called sheet of light triangulation, a variation of standard triangulation. The current height of an object, e.g. the gripper, is calculated from the position where the laser sheet of light intersects with the object, Fig. 10 [20].
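The basic relation behind sheet-of-light triangulation is that an object of height h shifts the observed laser line sideways in proportion to h. A minimal sketch of this relation follows; the parameter names, the simple planar geometry and the incidence angle are assumptions, while the pixel size comes from the calibration described below.

import math

def height_from_laser_line(line_px, reference_px, pixel_size_um, incidence_deg):
    """Object height above the reference plane from the lateral shift of the
    laser line observed through the microscope (viewing along the optical axis).

    line_px       : detected line position on top of the object (pixels)
    reference_px  : line position on the empty reference plane (pixels)
    pixel_size_um : size of one pixel in micrometers (stage-micrometer calibration)
    incidence_deg : assumed angle between the laser sheet and the specimen plane
    """
    shift_um = abs(line_px - reference_px) * pixel_size_um
    return shift_um * math.tan(math.radians(incidence_deg))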
Figure 10: Applying laser triangulation for measuring the vertical alignment of the micro-gripper

The calibration procedure of the depth measuring system is also described in [20]. Besides the position and orientation of the laser sheet of light, the image acquisition system consisting of microscope, CCD camera and frame grabber has to be calibrated, resulting in a mapping function between pixel coordinates and real-world measurements.

Figure 11: The camera system is calibrated using a stage micrometer (100 lines per mm)

The latter is obtained by using a stage micrometer as shown in Figure 11. An automated calibration procedure has been implemented that extracts the micrometer lines from the image and counts the number of pixels between them. The result is the pixel dimension of the optical system, i.e. a value that specifies the size of a pixel in µm. Since pixels do not have to be square, the calibration has to be performed in both the x- and y-direction.

The height resolution that can be reached with the described method depends largely on two factors: the lateral resolution and the accuracy of the line segmentation. The first factor is limited by the microscope magnification and the resolution of camera and frame grabber. The second factor describes how accurately the rather thick laser line can be located by image processing. Assuming a line segmentation accuracy of one pixel and using a standard CCD camera and frame grabber, the height resolution in our setup lies in the range of 0.67 µm to 5.4 µm, depending on the selected microscope objective.

FORCE SENSING SYSTEM
The function of a gripper is to guarantee a fixed position and orientation of the gripped part with respect to the last link of the robot. The inertia forces in micro-assembly are not as severe as in conventional assembly, because the weight of the parts is much smaller in relation to the surface available for gripping [21]. The gripper must grasp the part firmly enough to prevent it from shifting, while at the same time it must not deform or damage it. Micro-grippers with integrated force sensors (Figure 12, left) have the advantage that they can detect the gripping forces and the presence of a part without the need to add an external sensor or an extra vision algorithm. By eliminating a control step, the assembly cycle can be made shorter.

Figure 12: Micro-gripper with mounted end-effectors (left), end-effector and mounted strain gauge (right)
For first measurements, semiconductor strain gauges have been chosen. To a certain extent, the experience already gained in this field in the macro-world can be transferred to the micro-world. Although alternative methods are being investigated successfully, the use of strain gauges is the most widely used approach because of its good performance in terms of cost, speed and accuracy of measurement. Two
pairs of semiconductor strain gauges have been glued to the base of the end-effectors (Figure 12, right). One strain gauge of each pair is stressed in tension and the other in compression, in order to obtain a doubled output signal compared to the use of a single gauge. A further doubling of the signal amplitude is obtained by connecting the two pairs in a full Wheatstone bridge configuration. Micro-grippers with integrated piezoresistive force sensors and with attached strain gauges are limited in their ability to resolve the gripping force. A scanning probe microscope (SPM) allows very precise displacement and force measurements in the sub-angstrom and sub-nanonewton range. Hence, self-sensing SPM cantilevers are currently being integrated into the gripper of one of the microrobots. These cantilevers operate by measuring stress-induced electrical resistance changes in an implanted conductive channel in the flexure legs of the cantilever. The real-time force feedback provided by these sensors offers information to better understand the prevailing nano-scale forces and dynamics, which is indispensable for reliable micro-manipulation strategies. When the gripper approaches a micro-part, a downward peak in the force plot reveals an adhesive force that begins pulling the cantilever before impact actually occurs (Fig. 13). This phenomenon has already been observed and reported during micro-manipulations [22].
Figure 13: Interatomic forces

Piezoresistive cantilevers have the considerable advantage that the deflection-sensing element is integrated into the cantilever, so unlike optical levers they do not require an external laser and detector. The resistance of the piezoresistive cantilever is measured with a Wheatstone bridge. This arrangement produces an output voltage Uw given by

Uw = (F / k) · (ΔR / R) · Ub

where F is the force exerted on the cantilever, Ub is the Wheatstone bridge bias voltage, k is the cantilever spring constant and ΔR/R is the resistance change of the cantilever per unit deflection divided by the resistance of the cantilever.
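Inverting this relation yields the force from the measured bridge voltage. The following sketch merely restates the formula in code; the example parameter values are placeholders, not data of the cantilevers used here.

def force_from_bridge(u_w, u_b, k, dr_per_deflection_over_r):
    """Force on the cantilever from the Wheatstone bridge output.

    u_w : bridge output voltage (V)
    u_b : bridge bias voltage (V)
    k   : cantilever spring constant (N/m)
    dr_per_deflection_over_r : (dR per unit deflection)/R, in 1/m
    """
    deflection = u_w / (u_b * dr_per_deflection_over_r)   # metres
    return k * deflection                                  # newtons

# Placeholder example values only:
# force_from_bridge(u_w=2e-3, u_b=5.0, k=1.0, dr_per_deflection_over_r=4e3)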
User interface
A robot system as complex as the presented FMMS has to offer an intuitive way to operate the robots and enter commands. Otherwise, the number of parameters the user can adjust would make the system too complex for a human to control. Therefore, a graphical user interface (GUI) has been developed [23].
Figure 14: Dialog windows of the GUI
It offers an intuitive point-and-click interface to control the robot semi-automatically; the user can specify the desired robot position and orientation on screen and can choose between several implemented closed-loop control methods (e.g. PI, a fuzzy controller or a neural-network-based model reference adaptive controller). Figure 14 shows an overview of the dialog windows for the whole system, including the RCL interpreter, execution windows and a time diagram which shows the out-of-order execution of an RCL program. The development is based on the Qt library, which is freely available for Linux.
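As an illustration of the simplest of these selectable methods, a discrete PI position loop fed by the vision system could look as follows; gains, sample time and the command interface are assumptions, not the implemented controller.

class PIController:
    """Discrete PI controller for one axis of the robot position."""

    def __init__(self, kp, ki, dt, output_limit):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.limit = output_limit
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        command = self.kp * error + self.ki * self.integral
        # clamp to the actuator range (e.g. maximum step frequency)
        return max(-self.limit, min(self.limit, command))

# Per control cycle: position from the vision system in, step command out
# (hypothetical gains):
# ctrl = PIController(kp=0.8, ki=0.2, dt=0.04, output_limit=1.0)
# speed_cmd = ctrl.update(setpoint=target_x, measurement=vision_x)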
Figure 15: 6D-mouse to control the robot [24]
Another possibility is the use of a 6D mouse to control the motion of the gripper, Fig. 15. With this input device, the user’s motion commands are split up into commands for the mobile platform and for the manipulation unit. Depending on the current situation, the maximum speed of the robot can be capped for fine manipulation. This interface offers a very intuitive way to perform telemanipulation.
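One plausible way to perform this split is sketched below; the axis assignment, thresholds and command format are our own assumptions, not the implemented driver.

def split_6d_command(twist, fine_mode=False, max_speed=1.0, fine_speed=0.1):
    """Split a 6D-mouse twist (x, y, z, rx, ry, rz) into platform and
    manipulator commands, with an optional speed cap for fine manipulation.
    """
    cap = fine_speed if fine_mode else max_speed
    clamp = lambda v: max(-cap, min(cap, v))
    x, y, z, rx, ry, rz = twist
    platform = {"vx": clamp(x), "vy": clamp(y), "omega": clamp(rz)}
    manipulator = {"vz": clamp(z), "tilt_x": clamp(rx), "tilt_y": clamp(ry)}
    return platform, manipulator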
MICROROBOTS WITHIN AN SEM
Considering the demands of micromanipulation and the capabilities of the mobile microrobots, namely a large workspace (also in height) combined with an attainable precision of 20 nm, the limits of light microscopy become obvious. The scanning electron microscope (SEM) is superior to the light microscope in resolution and, often more importantly, in depth of focus. The large working distance of an SEM, i.e. the distance between the final lens and the samples, offers much more space for robot systems. At present, good results have been achieved with the prototype MINIMAN III in teleoperation, monitored by the SEM image and an additional lateral miniature camera mounted inside the vacuum chamber of the SEM. As an example, Figure 16 shows the teleoperated microassembly of a micro gear with the help of the 6D-mouse [25].

Figure 16: Mounting a Ø500 µm wheel of a planetary micro gear by teleoperation with a 6D-mouse: lateral camera images and an SEM image

CELL HANDLING
First cell handling experiments were performed using OLN-93 oligodendroglia cells from the rat brain. These cells have dimensions of about 20 µm and can easily be cultivated. The experiments were carried out with commercial glass pipettes of various inner diameters (10 µm, 3.6 µm, 21 µm) and a manual suction pump. The best handling results were achieved with pipettes of about the same dimension as the cell (21 µm), especially when they were coated with Repel Silan (Fig. 17). Currently, we are working together with a research project partner to integrate a suction gripper into the robots in order to perform manipulations of these cells with the already proven teleoperation interface.

Figure 17: Cell handling with a commercial glass pipette with an inner diameter of 21 µm

CONCLUSIONS
In this paper, our current efforts on the development of a microrobot-based micro-manipulation station were presented. Mobile piezoelectric microrobots are employed to perform various manipulations either under a light microscope or within the vacuum chamber of a scanning electron microscope. The components of the station developed and its control system were described. We discussed the basic FMMS components: an automated planning and control system, a vision-based sensor system for automatic robot control and a user interface. Finally, some newly emerging topics of our research were introduced: first results of SEM-based microrobotics, the handling of biological cells and our activities on the development of force micro-sensors for integration into our microrobots.

ACKNOWLEDGEMENTS
This research work has been performed at the Institute for Process Control and Robotics (Head: Prof. H. Wörn), Computer Science Department, University of Karlsruhe. The work is supported by the European Union (ESPRIT Project “MINIMAN”, Grant No. 33915).

REFERENCES
[1] J.-M. Breguet, E. Pernette, R. Clavel: “Stick and slip actuators and parallel architectures dedicated to microrobotics”, Proc. SPIE 2906, Boston, 1996.
[2] J. Hesselbach, N. Plitea, R. Thoben: “Advanced technologies for microassembly”, Proc. SPIE 3202, Pittsburgh, 1997.
[3] T. Kasaya et al.: “Micro Object Handling under SEM by Vision-based Automatic Control”, Proc. SPIE 3519, Boston, 1998.
[4] S. Fatikow: “An Automated Micromanipulation Desktop-Station Based on Mobile Piezoelectric Microrobots”, Proc. SPIE 2906, Boston, 1996.
[5] B. Magnussen et al.: “Actuation in Microsystems: Problem Field Overview and Practical Example of the Piezoelectric Robot for Handling of Microobjects”, Proc. ETFA, Paris, 1995.
[6] U. Rembold and S. Fatikow: “Autonomous Microrobots”, Journal of Intelligent and Robotic Systems 19: 375-391, 1997.
[7] S. Fatikow: “Planning and Error-Free Plan Execution in a Flexible Microrobot-Based Microassembly Station”, Proc. ISIR, Stockholm, 1997.
[8] J. Seyfried, S. Fatikow and A. Mardanov: “An Automated Microassembly Environment”, Proc. IROS, Grenoble, 1997.
[9] J. Seyfried and S. Fatikow: “Microrobot-based Microassembly Station and its Control using a Graphical User Interface”, Proc. SYROCO, Nantes, 1997.
[10] J. Seyfried: “Control and Planning System of a Micro Robot-based Micro-assembly Station”, Proc. of the 30th ISR, Tokyo, Japan, 1999.
[11] R. Munassypov et al.: “Development and Control of Piezoelectric Actuators for the Mobile Micromanipulation System”, Proc. ACTUATOR, Bremen, 1996.
[12] S. Allegro, J. Jacot: “Automated Microassembly by Means of a Micromanipulator and External Sensors”, Proc. of the Int. Conf. Microrobotics and Micromanipulation, SPIE ’97, Vol. 3202, Pittsburgh, USA, 1997.
[13] S. Allegro: “Use of a Leica DM RXA Microscope as Optical Sensor for Automated Microassembly”, Scientific and Technical Information, Vol. XI, No. 5, October 1997.
[14] P.J. Burt: “The pyramid as a structure for efficient computation”, in: A. Rosenfeld (Ed.): Multiresolution Image Processing and Analysis, Springer, 1984.
[15] B. Jähne: “Digital Image Processing”, Springer, 1997.
[16] A. Bürkle and S. Fatikow: “Computer Vision Based Control System of a Piezoelectric Microrobot”, Proc. CIMCA, Vienna, 1999.
[17] S. Fatikow, J. Seyfried, St. Fahlbusch, A. Buerkle, F. Schmoeckel and H. Woern: “Intelligent Microrobotic System for Microassembly Tasks”, 1st Int. Conference on Mechatronics and Robotics, St. Petersburg, Russia, May 29-June 2, 2000.
[18] F. Mokhtarian: “Silhouette-Based Object Recognition with Occlusion through Curvature Scale Space”, Lecture Notes in Computer Science, Vol. 1064, 1996.
[19] A. Sulzmann, P. Boillat, and J. Jacot: “New developments in 3D Computer Vision for microassembly”, Proc. of SPIE Int. Symposium on Intelligent Systems & Advanced Manufacturing, Vol. 3519, 1998.
[20] A. Buerkle and S. Fatikow: “Laser measuring system for a flexible microrobot-based micromanipulation station”, IEEE/RSJ Int. Conference on Intelligent Robots and Systems (IROS), Takamatsu, Japan, 2000, in print.
[21] R.S. Fearing: “Survey of Sticking Effects for Micro Parts Handling”, Proc. Int. Conf. on Intelligent Robots and Systems, 2, Pittsburgh, 1995.
[22] B. Nelson, Y. Zhou, B. Vikramaditya: “Integration of force and vision feedback for microassembly”, Proc. SPIE 3202, Boston, MA, 1997.
[23] H. Woern, J. Seyfried, S. Fatikow, K. Santa: “Information Processing in a Flexible Robot-Based Microassembly Station”, Proc. INCOM, Nancy, 1998.
[24] LogiCad3D GmbH, Gilching, Germany, Driver CD, 1999.
[25] F. Schmoeckel, S. Fatikow: “Smart flexible microrobots for SEM applications”, Journal of Intelligent Material Systems and Structures, accepted.