Recognizing and Tracking of 3D-Shaped Micro Parts Using Multiple Visions for Micromanipulation

Seok Joo Lee*, Kyunghwan Kim**, Deok-Ho Kim**, Jong-Oh Park**, and Gwi Tae Park*
* Department of Electrical Engineering, Korea University, 1, 5-Ka, Anam, Sungbuk, Seoul, 136-701, Korea
** Microsystem Research Center, Korea Institute of Science and Technology, P.O. Box 131, Cheongryang, Seoul, 130-650, Korea

Abstract: This paper presents a visual feedback system that controls a micromanipulator using multiple microscopic vision information. Micromanipulation stations are basically equipped with an optical microscope; however, the single field-of-view of an optical microscope essentially limits the workspace of the micromanipulator, and its low depth-of-field makes it difficult to handle 3D-shaped micro objects. Our system consists of a stereoscopic microscope, three CCD cameras, a micromanipulator, and personal computers. The use of a stereoscopic microscope with a long working distance, high depth-of-field, and selective field-of-view improves the recognizability of 3D-shaped micro objects and provides a way to overcome several essential limitations in micromanipulation. Visual feedback information is thus very important for handling micro objects and provides a means for closed-loop operation. We propose a method for recognizing and tracking 3D-shaped micro parts and for generating motion commands for the micromanipulator.

1. Introduction

Research has been actively carried out on micromanipulation, which is very useful in areas where human access is limited, such as gene manipulation, integrated circuit inspection, and micro assembly [1][3][4][8]. In the micro domain, the low predictability of manipulation, limited knowledge of microphysics, and adhesive forces between parts and the micro gripper make it difficult to realize micro-assembly tasks. In addition, precise modeling and calibration are very difficult and complex. Moreover, the restrictions of the micro space influence the whole system structure, so a micromanipulation system needs miniature actuators and high-resolution sensors; many kinds of sensors used in the macro world have insufficient resolution and size. A vision sensor can obtain high-magnification images of micro objects and the microenvironment with the aid of an optical microscope. Vision data also carry much useful information for micromanipulation, and many applicable image-processing algorithms have been developed and their performance verified in the past. However, the high magnification of an optical microscope causes noise in the micro vision image, a narrow field-of-view, and a low depth-of-field. Therefore, many micromanipulation systems utilize an additional macro vision sensor to observe the whole working space and deliver it to the operator [5][7][9]. Such systems usually require many manual operations, however, so it is difficult to build an automatic manipulation system or to obtain accurate information. In this paper, we propose a new architecture for a multiple vision-based micromanipulation system composed of three vision sensors with the same viewpoint but different magnifications. We extract useful information for micromanipulation from the multiple micro vision data and use it to recognize and track 3D-shaped micro parts. We have also carried out experiments to verify the effectiveness of our multiple vision-based system in micromanipulation. Section 2 explains our micromanipulation system, and section 3 describes the method for recognizing and tracking micro parts. The merits of our system and methods for applying the multiple vision data are described in section 4. Experimental results are shown in section 5. Finally, section 6 presents a brief summary and plans for future research.

2. Micromanipulation System
In general, a micromanipulation system is composed of a micromanipulator, a micro vision system, a master haptic device for micro teleoperation, and so on. A micromanipulation system must be suited to the goal of its micromanipulation task, so many micromanipulation systems have their own distinct features. In this section we describe the structure and features of our micromanipulation system.

2.1 System Structure
The goal of our research is the construction of a micro teleoperation system for the assembly of 3D-shaped micro parts. Figure 1 illustrates the structure of our teleoperated micromanipulation system. The system is controlled by three personal computers. One is a Microsoft Windows 98 based PC that controls the master haptic device; a Windows NT based PC is used for micromanipulator control; and the last, also Windows NT based, takes charge of the image processing of the multiple vision data. The information of the whole system is shared over the TCP/IP protocol. This distributed
system architecture is useful for reducing the system load and for real-time control.

An effective vision system is important for recognizing 3D-shaped micro parts, generating paths for the micromanipulator, providing feedback control for the micro actuators, and scheduling operations.
Figure 1. Structure of micromanipulation system (stereo microscope Leica MZ 12.5, multiple CCD cameras Sony XC-55, frame grabber Matrox GENESIS, Pentium III 800 MHz PC with 256 MB RAM, vibration isolation table)

Figure 3. Structure of micro vision system
Figure 3 shows the hardware of the micro vision system. The system has a stereo optical microscope (Leica MZ 12.5) with superior recognition ability for 3D-shaped micro parts, multiple CCD cameras (Sony XC-55), and a DSP frame grabber (Matrox GENESIS). Vision data are obtained from the CCD cameras mounted on the optical microscope and transferred to the frame grabber. The stereo microscope has a wide magnification range, high resolution, and a long working distance. Its field-of-view is 8 times larger at low magnification and 10 times larger at high magnification than that of a general optical microscope (the Mitutoyo FS60 was used as a reference), and it can resolve features from a few micrometers to several millimeters. Moreover, a general optical microscope has a short working distance of about 10 mm, which restricts the workspace of the micromanipulator, whereas the working distance of the MZ 12.5 is 97 mm. A narrow field-of-view restricts the workspace of the micromanipulator; conversely, a wide field-of-view offers data on a wide workspace, but at a resolution too low for visual feedback control. We therefore adopt a multiple vision system in which all vision channels share the same viewpoint but each has a different magnification, to overcome the limitations of a single field-of-view. Because each channel has its own magnification and field-of-view, a selective field-of-view is possible, the whole workspace can be observed, and high-resolution vision data are still available. Thus the limitations of the optical microscope can be overcome.
In the system we use a piezoelectric micro actuator (Klocke Nanotechnik) for the micromanipulator, a PHANToM haptic device for the human operator, and a GENESIS DSP frame grabber (Matrox) for high-performance image processing. The software architecture is depicted in figure 2.
Figure 2. Software architecture of micromanipulation system
2.2 Micro Gripper The micro gripper is operated under a stereo optical microscope with a long working distance. It is driven by two Nanomotors developed by Klocke Nanotechnik and is controlled by a single PC card, which generates the appropriate voltages (within +/- 15 V) for coarse and fine positioning. The PC card has four channels, so the two Nanomotors can be driven in parallel. Each Nanomotor can reach any point along a 5 mm travel range at speeds up to 1 mm/s.
2.4 Features of the Micromanipulation System The constructed micromanipulation system is aimed at micro assembly. Hence, for dexterous real-time micromanipulation, it includes a multiple-DOF master haptic device and a control architecture that guarantees real-time
2.3 Micro Vision System The performance of the visual interface is a very important factor that determines the speed and accuracy of teleoperated micromanipulation.
micromanipulation. The micro actuator has high resolution and good control response characteristics. Finally, the proposed vision system has fine recognition ability for 3D-shaped micro parts and biological cells. The optical microscope has a depth-of-field 10-20 times larger than that of comparable microscopes. The multiple vision based system shares a single viewpoint while each vision channel has its own field-of-view and magnification. This feature enables effective micromanipulation and makes it possible to overcome the defects of a micro vision system.

Figure 5. After removing illumination noise
3. Recognizing and Tracking of Micro Parts After minimizing the illumination noise in the multiple vision data, various image processing algorithms are used to remove the useless image data produced by the high magnification of the optical microscope. Because the noise characteristics differ from image to image, a suitable algorithm must be selected for each. For example, periodic noise is removed using the FFT. Equation (3.1) is the Fourier transform of an N by N image, and figure 6 illustrates the FFT process.

F(u, v) = (1/N) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x, y) \exp[-j 2\pi (ux + vy) / N],   u, v = 0, 1, ..., N-1   (3.1)
This section describes the algorithms for recognizing and tracking micro parts and for noise exclusion. Pattern matching is an essential process for vision-based micromanipulation, and image noise and illumination noise must be removed before micromanipulation begins. 3.1 Noise Exclusion In micromanipulation, noise falls into two groups: the illumination noise of the optical microscope, and the image noise caused by the microscope's high magnification. In this paper we use a separate algorithm to remove each. First, we analyze the histogram distribution to remove the illumination noise. Figure 4 illustrates the original multiple vision data and their histograms. The multiple vision images have different brightness because they have different magnifications; thus, a brightness suitable for the low-magnification image makes the high-magnification image too bright, and vice versa. A trade-off is needed to find a brightness suitable for effective image processing.
Figure 6. The spatial filtering process using the FFT
First, the original spatial image is transformed into a frequency image using the FFT. The frequency image is multiplied by a frequency mask image using a dual-image point process; the mask is 0 wherever we want to eliminate a frequency and 1 otherwise. The masked frequency image is then inverse Fourier-transformed back to the spatial domain. A suitable image processing algorithm must be selected for each image; figure 7 shows the results. In particular, the opening operation is simply an erosion followed by a dilation; its effect is to remove single-pixel anomalies such as small spurs and single-pixel noise spikes.
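The mask-multiply-invert sequence above can be sketched with NumPy's FFT routines. This is an illustrative sketch, not the authors' implementation; the image size, the noise frequency, and the `remove_periodic_noise` helper are our own assumptions:

```python
import numpy as np

def remove_periodic_noise(image, mask):
    # Forward 2-D FFT, shifted so the DC component sits at the center
    freq = np.fft.fftshift(np.fft.fft2(image))
    # Dual-image point process: multiply the frequency image by the mask
    # (mask is 0 where a frequency should be eliminated, 1 otherwise)
    filtered = freq * mask
    # Inverse-transform back to the spatial domain
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

# Toy example: a pure sinusoidal interference pattern at horizontal frequency 8
n = 64
x = np.arange(n)
noisy = np.tile(0.5 * np.sin(2 * np.pi * 8 * x / n), (n, 1))

mask = np.ones((n, n))
mask[:, n // 2 + 8] = 0  # suppress the +8 frequency column...
mask[:, n // 2 - 8] = 0  # ...and its conjugate
restored = remove_periodic_noise(noisy, mask)  # ~zero everywhere
```

In practice the mask would be drawn around the bright off-center peaks visible in the magnitude spectrum rather than known in advance.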
Figure 4. Before removing illumination noise
Histogram information is effective for observing brightness. Therefore, we first adjust the brightness physically by tuning the gain of the multiple CCD cameras, and then adjust the black and white reference signals of the frame grabber in software. Figure 4 shows examples of multiple vision data corrupted by illumination noise, together with their histograms; figure 5 shows the images after the illumination noise has been removed. The black references of all images were adjusted to 30, and the white references to 0, 150, and 110, in order.
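The black/white reference adjustment amounts to a linear remapping of gray levels, which can be sketched as follows. The `apply_references` helper and its sample values are our own illustrative assumptions, not the frame grabber's actual firmware behavior:

```python
import numpy as np

def apply_references(image, black_ref, white_ref):
    # Linearly stretch gray levels so that black_ref maps to 0 and
    # white_ref maps to 255, clipping everything outside the range
    img = image.astype(np.float64)
    out = (img - black_ref) * 255.0 / (white_ref - black_ref)
    return np.clip(out, 0, 255).astype(np.uint8)

def gray_histogram(image):
    # 256-bin gray-level histogram used to judge each channel's brightness
    counts, _ = np.histogram(image, bins=256, range=(0, 256))
    return counts

frame = np.array([[10, 30, 90, 150, 200]], dtype=np.uint8)
stretched = apply_references(frame, black_ref=30, white_ref=150)
```

A separate reference pair per camera channel mirrors the per-magnification trade-off described above.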
matching is simply a comparison of the target image with a model image, so in recent years pattern matching has become well suited to object recognition with the assistance of hardware developments such as high-performance DSPs. Generally, pattern matching is performed using a correlation of convolution type:

r = \sum_{i=1}^{N} I_i M_i   (3.3)

where r, I_i, and M_i are the correlation value, the pixel values of the target image, and the pixel values of the model image, respectively. The correlation operation can be seen as a form of convolution in which the pattern-matching model is analogous to the convolution kernel; in fact, ordinary (un-normalized) correlation is exactly the same as convolution. The area with the maximum correlation value is the area most similar to the model image, so we can find the pattern by computing its coordinates. Unfortunately, with ordinary correlation, the correlation value r increases as the image gets brighter; the function reaches a maximum when the image is uniformly white, even though at that point it no longer looks like the model. The solution is to use a more complex, normalized version of the correlation function [2]:

r = \frac{N \sum IM - (\sum I)(\sum M)}{\sqrt{[N \sum I^2 - (\sum I)^2][N \sum M^2 - (\sum M)^2]}}   (3.4)
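Equations (3.3)-(3.5) can be written directly in NumPy. The sketch below evaluates the score at a single candidate position (the search over positions is omitted), and the function name is our own:

```python
import numpy as np

def match_score(target_patch, model):
    # Normalized correlation, eq. (3.4): invariant to constant
    # gain and offset changes in either image
    I = target_patch.astype(np.float64).ravel()
    M = model.astype(np.float64).ravel()
    N = I.size
    num = N * np.sum(I * M) - I.sum() * M.sum()
    den = (N * np.sum(I * I) - I.sum() ** 2) * (N * np.sum(M * M) - M.sum() ** 2)
    r = num / np.sqrt(den)
    # Eq. (3.5): clip negative correlations, square, convert to percent
    return max(r, 0.0) ** 2 * 100.0

model = np.array([[10, 40], [70, 20]])
bright = model * 2 + 30            # same pattern, different gain and offset
print(match_score(bright, model))  # 100.0 (perfect match despite brightness change)
```

Note how a uniform gain/offset change leaves the score at 100%, which is exactly the property that motivates eq. (3.4) over plain correlation.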
Figure 7. Image processing for noise exclusion: original image, binarization, closing, convolution, opening, erosion, dilation, edge detection, thickening, thinning, rank filter, histogram equalization
The opening of an input image f by a mask B is defined as:

g_o(m, n) = (f \circ B)(m, n) = \max_{(i,j) \in B} \{ g_e(m - i, n - j) \} = \max_{(i,j) \in B} \{ \min_{(i',j') \in B} f(m - i + i', n - j + j') \}   (3.2)

where g_e denotes the erosion of f by B.
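The opening of eq. (3.2) can be sketched in plain NumPy as a windowed minimum (erosion) followed by a windowed maximum (dilation). The helper names and the square 3x3 structuring element are our own assumptions:

```python
import numpy as np

def _window_op(img, op, size=3):
    # Apply min (erosion) or max (dilation) over each size x size window
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    shifts = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(size) for dx in range(size)]
    return op(np.stack(shifts), axis=0)

def opening(img, size=3):
    # Erosion followed by dilation: removes single-pixel spikes and
    # small spurs while larger structures survive
    return _window_op(_window_op(img, np.min, size), np.max, size)

img = np.zeros((7, 7), dtype=int)
img[1, 1] = 1           # isolated single-pixel noise spike
img[3:6, 3:6] = 1       # a solid 3x3 object
cleaned = opening(img)  # spike removed, 3x3 object preserved
```

The spike vanishes because its erosion is empty, while the 3x3 block survives: its eroded center pixel is dilated back to the full block.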
With this expression, the result is unaffected by linear changes (constant gain and offset) in the image or model pixel values. The result reaches its maximum value of 1 when the image matches the model exactly, gives 0 where the model and image are uncorrelated, and is negative where the similarity is less than might be expected by chance. Normally we are not interested in negative values, so results are clipped to 0. In addition, we use r^2 instead of r to avoid the slow square-root operation. Finally, the result is converted to a percentage, where 100% represents a perfect match. The resulting match score is:

Score = \max(r, 0)^2 \times 100 [%]   (3.5)

A typical application might need to find a 128x128-pixel model in a 512x512-pixel image. In that case, the total number of arithmetic operations needed for an exhaustive search is 5 x 512^2 x 128^2, or over 20 billion. Even with a TMS320C80 DSP processor this would take several minutes, far more than an acceptable search time of about 10 milliseconds. A reliable method of reducing the number of computations is to perform a so-called hierarchical search. A series of smaller, lower-resolution versions of both the target and the model image is produced, and the search begins at a much-reduced scale. If the resolution of the target or model image is 512x512 at level 0, then at level 1 it is 256x256, at level 2 it is 128x128, and so on: the higher the level, the lower the resolution. The search starts at low resolution to find likely match
Figure 8. Noise Exclusion using opening
3.2 Recognizing and Tracking of Micro Objects Pattern matching is absolutely necessary for recognizing and tracking the micro parts. The method for recognizing micro parts is described in this section. 3.2.1 Recognizing of Micro Parts In micro assembly, the micro objects are divided into two groups: the micro parts to be manipulated and the gripper of the manipulator. Information about the positions of, and the distance between, the parts and the manipulator is fundamental both for the operator and for the controller of the micromanipulator. We analyze the problems of a multiple vision-based system, find solutions in the constructed micromanipulation system, and develop a recognition algorithm for micromanipulation using the multiple micro vision information. 3.2.2 Recognizing Algorithm for Micro Parts Characteristic-point matching and pattern matching are widely used object recognition methods. However, characteristic-point matching needs a preprocessing step of characteristic-point detection, which may cause computational delay, so it is improper for real-time control. On the other hand, pattern
candidates quickly. The search then iterates at successively higher resolutions to refine the positional accuracy and to make sure that the matches found at low resolution are actual occurrences of the model. Because the position is already approximately known from the previous level, the correlation function is evaluated at only a very small number of locations. However, if the level is too high, the pattern information in the image is lost, so the search algorithm must trade off the reduction in search time against the increased chance of not finding the pattern at very low resolution.

Table 1. Example of hierarchical search

Level   Target image   Model image
0       512x512        128x128
1       256x256        64x64
2       128x128        32x32
3       64x64          16x16
4       32x32          8x8
5       16x16          4x4

In the application described earlier (128x128 model image and 512x512 target image), the search might start at level 4, using an 8x8 version of the model image and a 32x32 version of the target image.
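A minimal pyramid search in this spirit might look as follows. To keep the sketch short it scores candidates with a sum of absolute differences rather than eq. (3.4), and all names and sizes are illustrative assumptions:

```python
import numpy as np

def downsample(img):
    # Halve the resolution by averaging 2x2 pixel blocks
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def sad(patch, model):
    # Sum of absolute differences: smaller means a better match
    return np.abs(patch - model).sum()

def hierarchical_search(target, model, levels=2):
    t_pyr, m_pyr = [target.astype(float)], [model.astype(float)]
    for _ in range(levels):
        t_pyr.append(downsample(t_pyr[-1]))
        m_pyr.append(downsample(m_pyr[-1]))
    # Exhaustive search only at the coarsest level
    t, m = t_pyr[-1], m_pyr[-1]
    cands = [(y, x)
             for y in range(t.shape[0] - m.shape[0] + 1)
             for x in range(t.shape[1] - m.shape[1] + 1)]
    best = min(cands, key=lambda p: sad(
        t[p[0]:p[0] + m.shape[0], p[1]:p[1] + m.shape[1]], m))
    # Refine around the doubled coordinates at each finer level
    for lvl in range(levels - 1, -1, -1):
        t, m = t_pyr[lvl], m_pyr[lvl]
        cy, cx = best[0] * 2, best[1] * 2
        cands = [(y, x)
                 for y in range(max(cy - 1, 0),
                                min(cy + 2, t.shape[0] - m.shape[0] + 1))
                 for x in range(max(cx - 1, 0),
                                min(cx + 2, t.shape[1] - m.shape[1] + 1))]
        best = min(cands, key=lambda p: sad(
            t[p[0]:p[0] + m.shape[0], p[1]:p[1] + m.shape[1]], m))
    return best

rng = np.random.default_rng(0)
target = rng.random((32, 32))
model = target[8:16, 4:12]   # the true position is (8, 4)
found = hierarchical_search(target, model, levels=2)
```

Note how each refinement stage evaluates only a 3x3 neighborhood of candidates, which is the source of the speed-up described above.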
Figure 9. GUI for multiple micro vision system
A block diagram of the data flow and of the entire micromanipulation workflow is depicted in figure 10. There is a close relationship between the multiple vision information and the movement of the micro stage; for micro assembly, control of the micro stage and processing of the multiple vision data must work together. The micro vision system obtains three micro vision data streams with different magnifications, so a selective field-of-view and tracking across multiple fields-of-view are possible in this structure.
3.3 Tracking of Micro Gripper and Micro Parts For micro assembly tasks using the proposed micro vision system, reliable real-time tracking of the micro parts must be guaranteed; this is verified in section 5. Using the pattern-matching algorithm described in section 3.2, moving micro parts are tracked in real time in the multiple micro vision data. A 1-DOF micro stage is used for accurate movement, and its trajectory is generated by a sine function. The effectiveness of tracking micro parts with multiple visions is verified by comparing the trajectory of the center coordinates of the micro parts in the multiple micro vision data with the trajectory of the micro stage.
4. Usefulness of Multiple Vision Data
Multiple micro vision data can be utilized for various purposes. This section describes methods for applying multiple micro vision data that have different magnifications and the same viewpoint.
Figure 10. Block diagram of data flow
4.1 Multiple Vision Data The main frame of the GUI for the proposed multiple micro vision system is shown in figure 9, in its initial state for micromanipulation. The GUI has four windows; three of them display the vision information. The remaining one displays the histograms used to control brightness: the camera gains and the luminous intensity of the optical microscope are adjusted in hardware, and the black/white references of the frame grabber in software.
4.2 Application of Multiple Vision Data Minute movements of the micro object or micro gripper cause large changes in the micro vision data. Moreover, in the high-magnification vision data the micro object frequently disappears from the field of view. Hence it is effective to keep the target object always at the center of the high-magnification vision data.
VC++ 6.0 compiler. The white rectangle in figure 12 indicates the pattern-matching result. The pattern-matching scores of the micro parts were calculated by equation (3.5), and they also serve as measurements of the pattern-matching accuracy.
Figure 11. Example of multiple vision data usage

Figure 12. Sample data of pattern matching result
An example of the strategy for applying multiple vision data to a micro assembly task is illustrated in figure 11. Using the selective field-of-view, the vision system always observes the center of the working environment by tracking the micro object in real time with pattern matching in the low-magnification vision data. Using this information, the micro stage is then positioned so that the object stays at the center of the vision data. To apply this algorithm to micromanipulation, a reliable pattern-matching algorithm and a micro-positioning algorithm for fine control of the micro stage must be developed. In the micro assembly task, path estimation for the micro gripper can be applied effectively to the micro-positioning strategy, for example for focus adjustment.
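The re-centering step of this strategy reduces to a simple proportional correction. The function below is a hypothetical sketch; in a real system the scale factor `um_per_px` would come from calibration of the low-magnification channel:

```python
def recenter_command(obj_px, image_size, um_per_px):
    # Stage move (micrometers) that brings the tracked object back to
    # the image center; obj_px is the pattern-matching result in pixels
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    dx = (obj_px[0] - cx) * um_per_px
    dy = (obj_px[1] - cy) * um_per_px
    # Move the stage opposite to the object's offset from the optical axis
    return (-dx, -dy)

# Object found 100 px right of center in a 640x480 view at 2 um/px:
move = recenter_command((420, 240), (640, 480), um_per_px=2.0)
```

Running this command in a loop against the low-magnification tracker keeps the object inside the narrow high-magnification field of view.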
Recognition rates of the micro parts for 20 arbitrary angles and positions are graphed in figure 13. The larger the rotation angle, the worse the recognition rate; this is because the pixel values and the shadows of the micro parts change with the illumination. Indeed, the histogram of the image differed considerably for each rotation angle. The graph shows that the high-magnification image performed best for most angles, presumably because the pattern information is obtained more clearly at high magnification. The experimental results show that pattern matching has more than 80% reliability for the most part, and the recognition results are sufficiently reliable.
5. Experiment
The construction of the micromanipulation system is still in progress. We have performed experiments to establish confidence in the whole system: to verify the confidence grade of the multiple vision data, we conducted recognition and tracking experiments on micro parts using the pattern-matching algorithm described in section 3.
5.1 Result of Recognizing Micro Objects The micro parts used in the experiment were programmed oscillator ICs of size 600 µm × 1200 µm, and the resolution of each of the multiple vision data streams is 640 × 480. The relative position between micro parts, the rotation angle, and the match score were extracted to analyze the reliability of the multiple vision system. Figure 12 illustrates the experimental method, in which the micro parts were rotated by arbitrary angles. The program was constructed using the MS
Figure 13. Result graph of pattern matching score versus object angle (zoom factor ratio = 1.0 : 1.5 : 3.0)
5.2 Analysis of Multiple Vision Data Reliability In the pattern-matching algorithm used in this paper, a model image must be selected separately for each magnification of vision data. This may cause errors in the relative
distance between vision data having different magnifications. If the error were zero, the relative distance ratio would equal the zoom-factor ratio. To observe the distance error between images of different magnifications, we calculated the pixel distance between the two recognized micro parts as the relative distance.
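This consistency check can be expressed in a few lines. The helper names are our own, and the distances are the measured values reported in Table 2 for the 1.0x, 1.5x, and 3.0x views:

```python
import math

def pixel_distance(p1, p2):
    # Euclidean distance (in pixels) between two recognized parts
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def ratio_errors(distances, zoom_factors):
    # Relative error between the measured distance ratios and the
    # zoom-factor ratios, with the lowest magnification as reference
    d0, z0 = distances[0], zoom_factors[0]
    return [abs((d / d0) / (z / z0) - 1.0)
            for d, z in zip(distances, zoom_factors)]

# Measured relative distances from Table 2 (1.0x, 1.5x, 3.0x views)
errors = ratio_errors([127.764387, 191.962111, 383.237371], [1.0, 1.5, 3.0])
```

With these values the ratio errors come out well under one percent, consistent with the small model-image selection error discussed in the text.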
Table 2. Relative distance between images of different magnification

          Zoom factor   Distance (pixel)   Distance ratio
Image 1   1.0           127.764387         1.0
Image 2   1.5           191.962111         1.5024696
Image 3   3.0           383.237371         2.999563

Table 2 lists the experimental results; the error of the relative distance is below 0.1%. This error is caused by the selection of the model images, so when using this pattern-matching algorithm the operator must select the model images carefully.

Figure 14. Result of relative distance analysis

5.3 Result of Tracking Gripper and Object Real-time tracking of micro parts is a fundamental requirement for micro assembly. This section describes the real-time tracking results obtained with the multiple vision system.

Figure 15. Tracking result of micro parts: (a) input signal for micro stage; (b) tracking result at low magnification; (c) tracking result at middle magnification; (d) tracking result at high magnification

The sine function of figure 15(a) is the input reference signal for the 1-DOF micro stage, which moves 1000 µm back and forth 10 times. Figures 15(b), (c), and (d) show the tracking results for the two micro parts; the slanted lines are the trajectories of the center coordinates of the micro parts. Table 3 shows the analysis of the experiment. The relative distance errors of the tracking results were below 0.1%, caused by the slight trembling of the stage and by errors in the model images used for pattern matching. The experimental results show that the tracking performance of the proposed multiple vision system is very reliable.

Table 3. Tracking result of micro parts

Zoom ratio   Moving distance (pixel)   Pixel ratio
1.0          69.5                      1
1.5          104.4                     1.57
3.0          208.6                     3.14

6. Conclusion

The narrow workspace of a micromanipulation system imposes many restrictions on the operations necessary for a micro assembly task. In particular, it is very difficult for the system to sense micro objects because of the inaccuracy of the obtained micro sensor data. In this paper we have proposed a new architecture for a multiple micro vision system whose channels have different magnifications at the same viewpoint. Multiple vision architectures have been used before, but their differing viewpoints cause many problems and restrict the field-of-view. Using the proposed multiple micro vision system, we solved the recognition and tracking problems caused by the narrow field-of-view of a micro vision system; recognizing and tracking micro parts are essential prerequisites for micro assembly. We also verified the confidence degree of the proposed multiple vision system and presented effective methods for applying the multiple vision data. As a result, we obtained satisfactory recognition and tracking performance with the proposed system. The construction of the micromanipulation system and the development of effective strategies for accurate micro positioning of the micro stage and for effective micro assembly are in progress.

References

[1] T. Tanikawa, T. Arai, and Y. Hashimoto, "Development of Vision System for Two-Fingered Micro Manipulation", Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1051-1056, Grenoble, September 1997.
[2] Matrox Image Library User's Guide, Matrox Electronic Systems Ltd., 1997.
[3] M. C. Carrozza, P. Dario, et al., "Manipulating Biological and Mechanical Micro Objects Using LIGA-Microfabricated End Effectors", Proceedings of the IEEE ICRA, pp. 1811-1816, May 1998.
[4] S. Fahlbusch, S. Fatikow, J. Seyfried, and A. Buerkle, "Flexible Microrobotic System MINIMAN: Design, Actuation Principle and Control", IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 156-161, 1999.
[5] T. Sano and H. Yamamoto, "Study of Micromanipulation Using Stereoscopic Microscope", Proceedings of the 17th IEEE Instrumentation and Measurement Technology Conference (IMTC 2000), Vol. 3, pp. 1227-1231, 2000.
[6] Deok-Ho Kim, Keun-Young Kim, and Kyunghwan Kim, "Micro Manipulation System Based on Teleoperation Techniques", The 32nd International Symposium on Robotics, pp. 686-691, April 2001.
[7] Xudong Li, Guanghua Zong, and Shusheng Bi, "Development of Global Vision System for Biological Automatic Micro Manipulation System", Proceedings of the 2001 IEEE International Conference on Robotics and Automation, May 2001.
[8] N. Tsukada, K. Kudoh, A. Yamamoto, T. Higuchi, K. Sato, M. Kobayashi, K. Oishi, and K. Iida, "Development of Oocyte Rotation System for Biological Cell Manipulation", The 32nd International Symposium on Robotics, pp. 682-685, March 2001.
[9] Seiji Hata, Koichi Sugimoto, and Ichiro Ishimaru, "3-D Vision Systems and Its Computer Model for Micro-Operation", The 32nd International Symposium on Robotics, pp. 698-703, April 2001.