A Novel Omnidirectional Image Sensing System for Assembling Parts with Arbitrary Cross-Sectional Shapes

Wan Soo Kim and Hyung Suck Cho, Member, IEEE

Abstract—In contrast with parts having relatively simple cross-sectional shapes, it is difficult to discern misalignments in relative position and angular orientation between mating parts with complicated cross-sectional shapes. This is because geometrical uncertainties depend on the complexity of the shapes, and their assembly requires complete information about possible misalignments along their mating boundary interfaces. This motivates the development of a new sensing method for detecting misalignments. In the area of assembly, however, such a method has not been fully developed; for the most part, only local sensing techniques using proximity, tactile, force, and vision sensors have been available. In this paper, a novel omnidirectional image sensing system for assembling parts with arbitrary cross-sectional shapes is proposed, and its features are investigated theoretically. Its feasibility for assembly is also shown by experiments. Utilizing a pair of conic mirrors and a camera, this system can immediately acquire a 2π coaxial image of the misalignment along the mating boundary interface between mating parts without experiencing self-occlusion.

Index Terms—Assembly, complicated shape, image sensing, 2π misalignment.

I. INTRODUCTION

ASSEMBLY, an important step in many manufacturing processes, is a positioning and aligning task. Automated assembly has been in place for decades. Because manufacturing is being transformed into small-batch-size production, many researchers are beginning to take an interest in flexible assembly techniques. Flexible assembly employs four principal stages: part acquisition, part handling, part positioning, and part mating. According to Whitney's research [1], the majority of time in an assembly cycle, about 65%, is devoted to part handling and part mating. This is primarily due to geometrical uncertainties incurred between mating parts [2], [3]. These uncertainties generate misalignment between mating parts and have many and various causes, such as manufacturing tolerances of the parts or system control and modeling errors. A small misalignment in relative position and angular orientation will always occur at the interface between mating parts. This misalignment can then produce large contact forces, preventing successful

assembly completion. Damage to parts or the robot can also result. To this end, the misalignment must be detected and compensated for during the mating period. In the interim, analysis of the misalignment between mating parts requires information about their relative geometry and, thus, depends closely on the complexity of the cross-sectional shape [4]. Therefore, assembly requires a sensing system that is capable of obtaining the 2π relative geometrical information between mating parts along their mating boundary interfaces, regardless of the complexity of the shape. This 2π information, which we call omnidirectional information, is more effective in a field such as an insertion task of parts with asymmetrical and complicated shapes, because their misalignments cannot be easily understood from their locally detected relative geometry.

To date, many techniques using force/torque (F/T) sensors, optical fiber sensors, pneumatic pressure sensors, and visual sensors have been developed for misalignment detection and compensation between mating parts [5], [6]. However, these techniques are suited to detecting the relative local geometry between mating parts along their mating boundary. This local sensing capability makes it difficult to acquire the relative geometrical information between mating parts with complicated shapes. The force sensor [7]–[9] can obtain point contact information between mating parts, and it can drastically reduce contact forces owing to its adaptability to changes in mating conditions. This local information, such as the point contact, makes it difficult to discern misalignment between mating parts with simple shapes, as well as complicated shapes. Hence, in order to figure out the misalignment between mating parts, a contact state model is required that represents all the relations between the sensor's output signals and the contact states, based on the geometries of the mating parts. Its sensing task, therefore, is restricted to assembling parts with a simple shape, such as a circle, since it is comparatively easy to construct the model. In contrast to the simple shape, complicated shapes produce many contact states and, thus, it is difficult to build the model, even though the shape information is known a priori [10], [11]. The proximity sensing methods utilize eight optical fiber sensors [12] or four pressure sensors [13] arranged around a peg. Since these sensors are capable of detecting the relative local geometry between mating parts at their positions, they estimate the misalignment between mating parts through composite output features of the detectors. The finite number of these sensors makes it difficult for them to obtain enough geometrical


Fig. 2. Schematic diagram of the proposed sensing system: (a) the configuration, (b) the equivalent configuration of the inside-conic mirror, and (c) an expected image for a pair of a peg and a hole.

Fig. 1. Sensing methods for relative geometric errors using a camera: (a) a fixed method and (b) a relocation method.

information to discern misalignment states between mating parts with complicated shapes. On the other hand, the visual sensor is practically incorporated into a variety of assembly tasks, such as mechanical assembly [14], [15] and printed circuit board (PCB) assembly [16], because it is capable of detecting relatively large misalignments between mating parts without contact. The visual sensor, for example, a camera, is generally mounted according to a fixed method [14]–[16] or a relocation method [17], [18], as shown in Fig. 1. However, these arrangements are also not suitable for detecting misalignment between mating parts with complicated shapes. This is attributable to the fact that they can detect only a local scene in the viewing direction, which gives rise to problems such as self-occlusion by the mating part or the part-handling gripper itself, time-consuming camera motion, and lengthy processing time for image composition.

In addition to these techniques, a sensing system consisting of multiple plane mirrors, for example, a pyramidic type [34], can be utilized. The pyramidic type utilizes a pair of pyramidic mirrors, each consisting of four orthogonally arranged plane mirrors. By using this configuration, one image, consisting of four subimages with mutually orthogonal viewing angles, is obtained. It is suitable for measuring a shape deformed in space with respect to a flexible part with a symmetrical, simple cross-sectional shape. The deformed shape is measured by using a stereo technique; that is, it is calculated through composition of the four subimages in one captured image. In addition, although it uses only one image for reconstructing the shape of an

object, two subimages among the four segmented subimages in the one image must be selected in order to obtain a stereo pair for reconstructing the object shape. Therefore, its resolution is reduced to one-fourth. Moreover, the orthogonal mirror arrangement becomes an obstacle to acquiring a view without self-occlusion, because a nonreflective zone appears between the plane mirrors. This orthogonal configuration occasionally generates no common viewing region and, so, makes it difficult to find the corresponding points with respect to an object with an asymmetrical, complicated cross-sectional shape.

Based upon the above discussion, the methods outlined here are incomplete for obtaining the 2π relative geometrical information required to assemble parts with complicated shapes. In the long run, it is necessary to develop a novel sensing method capable of obtaining the 2π geometrical information during the mating of parts, regardless of the complexity of the cross-sectional shape. In this paper, we propose such a novel omnidirectional image sensing system, capable of obtaining the 2π geometrical misalignment information along the mating boundary interfaces between mating parts with complicated shapes. The system encompasses a camera with an optical unit attached to its front part. The optical unit consists of a pair of plane mirrors, which yields a coaxial view, and an inside-conic mirror and an outside-conic mirror, which obtain a 2π image of misalignment without self-occlusion.

This paper is organized as follows. Section II describes the configuration and sensing principles of the sensing system and presents the analytical design procedure. Section III presents an image formation model for the multiple mirrors, and the nature of the sensing system is investigated by simulation. Section IV describes the implementation of the sensing system and evaluates its feasibility as an assembly sensing system through several experiments.

Fig. 3. Configuration of the field of view: (a) the two-dimensional shape and (b) the three-dimensional shape.

II. AN OMNIDIRECTIONAL IMAGE SENSING SYSTEM

A. The Sensing System Principle

Fig. 2(a) illustrates the basic configuration of the sensing system. The system consists of the following components: an inside- and an outside-conic mirror, a pair of plane mirrors, a camera, and a gripper. The goal of the sensing system is to obtain a 2π view, as described above. Let us assume that there are plane mirror patches placed at regular intervals on the circumference of a circle of a given radius, with given oblique angles and widths. Then, the plane mirrors encompass a peg and a hole, and their surface normal vector is calculated as follows:

(1)

Meanwhile, increasing the number of mirror patches to infinity, the arrangement becomes identical to a conic mirror with the following normal vector:

(2)

where the remaining parameter is the vertex angle of the conic mirror. We call this conic mirror an inside-conic mirror, since its inside is a mirror surface. This is a generalization of the finite mirror arrangement, such as a configuration consisting of two or more mirrors. The inside-conic mirror is thus capable of reflecting the figure of an object it encompasses in its mirror surface without self-occlusion. Therefore, the inside-conic mirror is used to obtain the 2π relative geometry between mating parts at once, without self-occlusion. However, additional optical components are required to detect it by using a camera placed off axis, as shown in Fig. 2(a). At first, an outside-conic mirror, with a mirror surface on its outside, placed coaxially on the central part of the inside-conic mirror, is used to collect the 2π image in the inside-conic mirror surface. In order to project the collected 2π image onto the image plane of the camera, two plane mirrors are then used; one is placed above the outside-conic mirror and the other is placed below the camera, as shown in Fig. 2(a). According to this principle, this system is eventually capable of obtaining not only a 2π coaxial image of the relative geometry between the peg and the hole, but also omnidirectional side views, such as the figures denoted a, b, c, and d, without self-occlusion, as shown in Fig. 2(c).

B. Field of View (FOV)

Fig. 3 illustrates the sensing system FOV. Its vertical cross section looks like a simple triangle, as shown in Fig. 3(a). In addition to its vertical cross section, its three-dimensional shape consists of the conical and pyramidic volumes. These


volumes are determined by the shape of a sensing element, such as the charge-coupled device (CCD) cell, and by the optical path changed by the mirrors. They are investigated by inversely tracing the rays of light projected onto the boundary of the sensing element and onto the image center in the image plane, respectively. First, when the rays projected onto the rectangular boundary of the CCD cell are inversely mapped onto the object space, it is natural that a rectangular shape should come out on the outside-conic mirror surface. The rays with the rectangular shape are reflected again on the inside-conic mirror surface, and they pass through a point on the vertical axis, since the vertex angle of the inside-conic mirror is less than 180°. Hence, the whole optical paths of these rays look like a pyramidic volume, as shown in Fig. 3(b). In addition to the case of the boundary of the CCD cell, when the rays of light projected onto the image center in the image plane are inversely mapped onto the vertex point of the outside-conic mirror, a horizontal circular shape comes out on a circumference of the vertex point. The rays with the circular shape are reflected again on the inside-conic mirror surface, and they pass through a point similarly, as in the case of the pyramidic volume. These optical paths then look like a conical volume, as shown in Fig. 3(b). The horizontal rays generating these volumes are reflected on lines through different vertical positions on the inside-conic mirror surface, respectively, and they pass through different positions. Therefore, they intersect each other, and the lower structure of the FOV is symmetrically given as the shape obtained by rotating the conical and pyramidic shapes by 180° with respect to the vertical axis.

The region of the FOV is classified into three zones according to mapping features: a unique zone, an overlapped zone, and an occluded zone. The unique zone, shaded white in Fig. 3(a), has the feature of mapping a point object into a point shape. However, the overlapped zone, shaded light gray, is the common region of the pyramidic volume before the 180° rotation and the conical volume after it. Since this zone is visible from all positions on the inside-conic mirror surface, a point object placed in the region looks like a donut on the image plane. The occluded zone is out of the FOV, and it is natural that an object in this region should be invisible to the camera. As a result, the available zone is actually the unique zone; however, the overlapped zone can be used within limits.

C. Invariability of Azimuth Angle

Fig. 4(a) is a simply reconstructed configuration of the sensing system, as if the camera were placed coaxially above the outside-conic mirror. This equivalent configuration is obtained, without loss of generality, by stretching the optical path of rays from the optical center to the pair of plane mirrors with respect to the vertex axis through the vertex points of the conic mirrors. Suppose that a point object, defined with respect to the object coordinate frame, is projected onto the image plane.

Fig. 4. Invariability of azimuth angle: (a) a simplified model and (b) a mapping feature.

Then, its virtual point, defined relative to a sensor frame, exists on the same plane defined by the object point and its azimuth angle. When the virtual point is projected onto a point on the image plane, the following relationship is obtained from the law of reflection on the conic mirror surfaces and the perspective projection of the camera:

(3)

This result means that there is no variation in the azimuth angle during an image projection of the sensing system. All points in space with a given azimuth, therefore, are mapped onto points on the radial line, with the same azimuth, that passes through the image center on the image plane, as shown in Fig. 4(b).
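The invariance in (3) can be checked numerically with a minimal ray-reflection sketch in Python. The cone half-angle, incidence azimuth, and ray direction below are illustrative values only, not the designed parameters of Table I; the check is that a ray lying in the vertical plane through the cone axis stays in that plane after reflection, so the azimuth of its image point is preserved.

```python
import numpy as np

def reflect(d, n):
    # Law of reflection: mirror direction d about unit surface normal n.
    return d - 2.0 * np.dot(d, n) * n

beta = np.deg2rad(25.0)    # illustrative cone half-angle, not the designed value
phi = np.deg2rad(73.0)     # azimuth of the incidence point on the conic mirror

# Outward unit normal of a cone (axis along z) at azimuth phi.
n = np.array([np.cos(phi) * np.cos(beta),
              np.sin(phi) * np.cos(beta),
              np.sin(beta)])

# Incident ray lying in the vertical plane that contains the cone axis and phi.
d = np.array([-0.4 * np.cos(phi), -0.4 * np.sin(phi), -0.9])
d /= np.linalg.norm(d)
r = reflect(d, n)

# Normal of that vertical plane: zero components below mean both rays stay
# in-plane, so the azimuth of the projected point is preserved, as in (3).
m = np.array([-np.sin(phi), np.cos(phi), 0.0])
print(abs(np.dot(d, m)), abs(np.dot(r, m)))   # both ~0.0
```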


Fig. 5. Parametric configuration of the sensing system.

D. System Design

1) Design Parameters: In determining the three-dimensional configuration of the sensing system, about 39 design parameters, including the locations and sizes of the mirrors and the camera lens specifications, should be considered. However, it is difficult to consider all the parameters, because the parameter analysis is sometimes nonlinear and complex. The three-dimensional configuration can be simplified as a two-dimensional symmetric model, as shown in Fig. 5. The resultant design parameters of the model are then 15, a reduction to about 38% of the three-dimensional model. Therefore, we chose the two-dimensional model for the system design. We also used a front-image-plane model in order not to yield a reversed image in the design model. In this simplified design model, the purpose of the sensing system design is simply to determine the configuration parameters.

In order to describe the design procedure carefully, it is necessary to define the parameters. Let us assume that a ray, starting with a departure angle at the optical center, flies to the object plane through the optical system, as shown in Fig. 5. If the reflected rays on the mirrors are stretched so as to simply analyze the ray trace, a virtual optical center is then obtained. A measurement distance is then defined as the sum of the optical path lengths of the ray from the virtual optical center to the object plane. It is calculated as

(4)
(5)

where the reference measurement distance is defined as the optical path length of the reference ray emitted along the optical axis at the optical center, the vertex angles of the inside-conic mirror and the outside-conic mirror appear as parameters, and a slope angle is defined between the measurement plane and the reference ray. The reference measurement distance is given by the path lengths shown in Fig. 5. Therefore, it can be written as follows:

(6)
(7)

where the relative distances are, in turn, from the optical center to plane mirror #1, between the center points of the plane mirrors, between the center point of plane mirror #2 and the vertex point of the outside-conic mirror, and between the vertex points of the conic mirrors; the locating distance of the outside-conic mirror is defined as the sum of the relative distances from the optical center to the vertex point of the outside-conic mirror. Similarly, suppose that two rays start at the virtual optical center with different departure angles and strike two positions on the object plane through the multiple mirrors. The reference FOV is defined as the square of twice the length between these two positions. Therefore, the reference FOV is defined as follows:

(8)
(9)
(10)

where the quantities denote the size of the reference FOV, the CCD size of the camera, and the camera's maximum viewing angle. The interference height denotes the average distance from the vertex point of the outside-conic mirror to the bottom position of a grasped part, and it is represented by

(11)


The reference measurement distance, the reference FOV, and the interference height are design input parameters, and they are predetermined in consideration of the size of the parts, the height of the gripper, and the assembly environment. In this paper, we predetermined these parameters taking into account the handling of small mechanical parts and PCB parts for the automated assembly of electronic appliances such as video tape recorders (VTR's), camcorders, or compact disc players. They are given, in millimeters, as

(12)
(13)
(14)

Accordingly, the controllable design variables to satisfy these predetermined constraints are 17 variables: the mounting angles of the plane mirrors, the sizes of the mirrors, the vertex angles of the conic mirrors, the diameters, the relative distances, and the focal length and the focus distance of the camera, as shown in Fig. 5. However, one relative distance between the mirrors is calculated from the others, and another is likewise determined by them. Therefore, the independent variables to be determined are 15. These parameters are designed analytically [20], and their results are shown in Table I.

TABLE I
PARAMETERS OF THE DESIGNED SENSING SYSTEM

2) The Camera Lens: As shown in Fig. 5, let us assume that an object of a given size is placed normally, relative to the vertical axis of the system, at the reference measurement distance from the optical center of the camera. Calculating the necessary focal length and the required focus distance in consideration of projecting the object onto the full image plane of the camera by the perspective projection and the pinhole camera geometry [21], they are given as

(15)
(16)

The size of the object also has a great deal to do with the size of the FOV, as follows:

(17)

Calculating the focal length and the focus distance from (15) and (16) with respect to such constraints as the range of the object size, the reference measurement distance, and a 2/3-in CCD image plane, the focal length is bounded between 48.30 and 51.41 mm and the focus distance between 55.72 and 59.90 mm. Hence, we selected a commercial lens with a focal length suitable for this result, and calculated the focus distance and the size of the object again for the selected commercial focal length (see the sketch following (23)). In addition, the oblique angle satisfying the constraint of (17) is 14.1°. In the interim, substituting (9) and (11) into (6) and rearranging, the locating distance of the outside-conic mirror is calculated by

(18)

3) The Plane Mirrors: For a simple projection, let us assume that the mounting angles of the plane mirrors are 45°. The lengths of the mirror surfaces are then defined as the minimum viewing lengths of the mirrors at the maximum viewing angle of the camera, derived as follows:

(19)
(20)
(21)

where the locating distance of the ith plane mirror appears, and the relative distance is predetermined to satisfy (7) and (18), taking into account the compactness and minimum interference of the sensing system.

4) The Outside-Conic Mirror: Similarly, we assumed that the vertex angle of the outside-conic mirror is 90°, in order to project a 2π image in its mirror surface onto plane mirror #1, mounted at a 45° angle on the top side of the outside-conic mirror, without any image distortion. The minimum length is then derived as

(22)

where the locating distance of this conic mirror is as defined in (18). The diameter of the outside-conic mirror is likewise given by

(23)
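Returning to the camera-lens step (15)–(17): under the pinhole and thin-lens assumptions the paper cites [21], the focal length and focus distance follow from the magnification that fills the CCD at the reference measurement distance. The sketch below shows this computation; the object-size range and measurement distance are placeholder values, since the paper's exact constraint values are only partly stated.

```python
def lens_design(object_width_mm, distance_mm, ccd_width_mm):
    # Magnification that just fills the CCD with the object, then the thin-lens
    # equation 1/f = 1/d_o + 1/d_i for the focal length and focus distance.
    m = ccd_width_mm / object_width_mm
    d_i = m * distance_mm                            # image (focus) distance
    f = (distance_mm * d_i) / (distance_mm + d_i)    # thin-lens focal length
    return f, d_i

# Assumed object-size bounds at an assumed reference measurement distance;
# 8.8 mm is the nominal width of a 2/3-in CCD.
for width_mm in (40.0, 50.0):
    f, d_i = lens_design(width_mm, distance_mm=300.0, ccd_width_mm=8.8)
    print(f"object {width_mm} mm -> f = {f:.2f} mm, focus distance = {d_i:.2f} mm")
```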


5) The Inside-Conic Mirror: The inside-conic mirror, which plays an important part in the system, exerts considerable influence upon the FOV and its configuration. This mirror is represented by its vertex angle, the minimum length of the mirror surface, its diameters, and its relative locating position, as shown in Fig. 5. The vertex angle and the relative distance can be obtained from the relations of (5) and (11), respectively. Accordingly, the minimum length of this mirror can be calculated as follows:

(24)

(25)

where the locating distance of the inside-conic mirror is defined as the sum of the relative distances between the optical center and the inside-conic mirror. In the end, the remaining variables are simply the diameters of the inside-conic mirror. They are calculated in consideration of the mirror length and the vertex angle, and are written as

(26)

6) Resolution: Resolution is one of the critical factors in evaluating the sensing capacity of a sensor. For the sensing system, calculating the resolution in consideration of the defocusing effect caused by depth variation gives, for the two defocusing cases,

(27)

where the blurring circle diameter is given by (see [22])

Fig. 6. Resolution variation in the sensing system: (a) resolution variation depending on the aperture diameter and distance variation and (b) constant feature for small aperture diameter.

Here, the parameters are the well-focused (reference) measurement distance, the measurement distance, the lens focal length, the pixel size in the image plane, the nominal number of image pixels, the FOV, and the aperture diameter. Fig. 6 illustrates the relationship among the resolution, the aperture diameter, and the measurement distance for the sensing system, from (27). It shows that the resolution is mainly governed by the aperture diameter and is constant when the aperture diameter is less than 5 mm, because a CCD cell has a finite pixel size, as shown in Fig. 6(a). Therefore, the resolution of the sensing system at the designed aperture diameter of 1.8 mm is calculated to be about 0.088 mm, as shown in Fig. 6(b).
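A hedged sketch of the trade-off behind (27): the object-side resolution is the larger of the pixel-limited value and the defocus blur mapped back to the object. Equation (27) itself is not reproduced here; this uses the standard thin-lens blur circle with assumed focal length, FOV, pixel count, and defocus, while the 1.8-mm aperture and the ~0.088-mm pixel-limited resolution match the values stated above.

```python
def blur_circle_mm(aperture_mm, f_mm, focused_mm, actual_mm):
    # Standard thin-lens blur-circle diameter on the image plane.
    return aperture_mm * f_mm * abs(actual_mm - focused_mm) / (
        actual_mm * (focused_mm - f_mm))

def object_resolution_mm(fov_mm, n_pixels, aperture_mm, f_mm, focused_mm, actual_mm):
    pixel_limited = fov_mm / n_pixels              # finite CCD pixel size
    magnification = f_mm / (focused_mm - f_mm)     # image size / object size
    blur_limited = blur_circle_mm(aperture_mm, f_mm, focused_mm, actual_mm) / magnification
    return max(pixel_limited, blur_limited)        # the worse of the two limits

# Assumed: 45-mm FOV on 512 pixels, 50-mm lens focused at 300 mm, 5-mm defocus.
for a_mm in (1.8, 5.0, 16.0):
    res = object_resolution_mm(45.0, 512, a_mm, 50.0, 300.0, 305.0)
    print(f"aperture {a_mm:4.1f} mm -> resolution ~{res:.3f} mm")
```

With these assumed values, the resolution stays pixel-limited (~0.088 mm) for small apertures and only becomes defocus-limited at large apertures, reproducing the constant feature of Fig. 6(b).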

III. CONFIRMATION OF THE PRINCIPLE

A. Image Transformation Model

Fig. 7 illustrates the geometry of the optical system consisting of four mirrors and a camera with a collecting lens. Let us set an object frame on the object. Let us also set a

Fig. 7. The coordinate system of the sensing system.


camera frame centered at the optical center. An image coordinate system, parallel to the camera frame, is centered at the intersection point of the image plane and the axis through the optical center. The mirror frames are centered at the intersection points between the mirrors and the optical axis, along the optical path of a ray starting at the optical center. The focus distance is defined as the distance from the optical center to the image plane. Provided that a ray of light, starting with an initial direction cosine from a point on the object, is projected onto a point in the image plane of the camera with an incident direction cosine, through an optical unit consisting of four mirrors, then the image transformation between the object and the image plane can be formulated as follows [20]:

(28)

Similarly, the inverse transformation is given as

(29)

In these relations, the initial and the final direction cosines are also given as

(30)
(31)

where the virtual optical center represents the optical center moved when the optical paths between the mirrors are equivalently stretched relative to the object. In addition to the direction cosines, these relations involve the homogeneous matrix [23] and the reflection matrix [24]. Representing these matrices with their components, they are given as follows:

(32)
(33)

where the components are the mirror's surface normal vector, the distance between frames, the normal component of an incident ray with respect to a mirror, and the normal distance between mirror frames. The normal vector and the distance vector are also described with respect to a sensor frame as follows:

(34)
(35)

where the rotation matrix about the Euler angles [23] and the translation vector of a mirror frame relative to the sensor frame appear; the normal vector of a mirror is defined relative to itself and is also expressed relative to the sensor frame. If the optical system utilizes reconfigurative mirrors, the orientations of which can be controlled by actuators such as a servo motor and a galvanometer, the normal vector is no longer a fixed value; it should then be considered a variable function of the Euler angles. In general, since the collecting lens has radial lens distortion with only one term [25], the undistorted point, obtained under the assumption of the perfect pinhole camera model, is calculated as follows:

(36)
(37)

where the distortion factor relates the distorted point under lens distortion to the undistorted point through the radial distance to the image position. If the actual point is transformed into the digitized image plane, the corresponding pixel point is given as [25]

(38)
(39)

where the parameters are the central image pixel point of the image plane, the mean distances between adjacent sensing elements on a CCD cell in each direction, the number of sensor elements, the number of pixels in a scanned line, and the horizontal image scale factor. In the end, substituting (36)–(38) into (28) and (29), the forward and inverse transformation models of (28) and (29) are rewritten


as follows:

(40)

(41)

where the composite rotational matrix and the translation vector are defined, for each mirror in sequence, accordingly.
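A minimal sketch of the ingredients of the forward model (28)/(40): the Householder form of the reflection matrix in (33), a chain of four mirror reflections, the one-term radial distortion of (36)–(37) in the Tsai convention [25], and the pixel conversion of (38)–(39). The mirror normals, distortion factor, and camera constants below are hypothetical, not the calibrated values.

```python
import numpy as np

def reflection_matrix(n):
    # Householder form of the mirror reflection matrix in (33).
    n = n / np.linalg.norm(n)
    return np.eye(3) - 2.0 * np.outer(n, n)

def trace_direction(d0, normals):
    # Chain the four mirror reflections of the optical unit, as in (28)/(40).
    d = d0 / np.linalg.norm(d0)
    for n in normals:
        d = reflection_matrix(n) @ d
    return d

def undistort(x_d, y_d, kappa):
    # One-term radial model as in (36)-(37): undistorted = distorted*(1 + kappa*r^2).
    r2 = x_d**2 + y_d**2
    return x_d * (1.0 + kappa * r2), y_d * (1.0 + kappa * r2)

def to_pixels(x, y, cx, cy, dx, dy):
    # Pixel conversion in the spirit of (38)-(39): metric image point to pixels.
    return cx + x / dx, cy + y / dy

# Hypothetical normals: two conic-mirror facets and two 45-degree plane mirrors.
normals = [np.array([0.5, 0.0, 0.866]), np.array([-0.5, 0.0, 0.866]),
           np.array([0.0, 0.707, 0.707]), np.array([0.0, -0.707, 0.707])]
print(trace_direction(np.array([0.0, 0.1, -1.0]), normals))
x_u, y_u = undistort(1.2, -0.4, kappa=1e-4)        # hypothetical kappa
print(to_pixels(x_u, y_u, cx=384.0, cy=287.0, dx=0.011, dy=0.013))
```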

B. Misalignment Estimation

The relation between the misalignment of mating parts and their relative geometrical image is derived from the transformation model. Let us assume corresponding points on a peg and a hole, respectively, together with their projected points on the image plane. When the misalignment between the peg and the hole is projected onto the image plane through the image sensing system, the misalignments in position and in angular orientation between them are written as

(42)
(43)

Actually, the misalignment from (42) is computed by a two-stage approach. First, the misalignment between a peg and a hole on the obtained misalignment image is detected by using two-dimensional pattern-matching techniques [26], [27]. Secondly, the detected misalignments in position and in angular orientation are inversely projected onto the object space by using the relations of (42) and (43), respectively, and the desired misalignments are then obtained. In this misalignment estimation, depth information is needed to inversely project a misalignment image onto the object space. In general, the depth information can be obtained by using typical algorithms, such as the triangulation method using a laser or structured light [31], the stereo method [32], and depth estimation from focusing and defocusing effects [33]. In this system, the motion stereo technique [32] among them is suitable for obtaining the depths of a peg and a hole.

Fig. 8. A projection of a cube at Z = Z0: (a) a cube, (b) the projected image of (a), (c) a tilted cube rotated by yawing 2° and pitching 5° and translated by x = 2 mm, y = 1 mm, z = 2 mm relative to a hole, and (d) the projected image of (c).

In other words, the depth with respect to a point on an object is calculated by detecting its corresponding point in two omnidirectional images, captured before and after a forward motion. In addition, the corresponding points exist on the radial line with the same azimuth angle, due to the invariability of the azimuth angle described in Section II-C. Therefore, the corresponding points can be simply detected in this sensing system.
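A minimal sketch of the forward-motion stereo step under a simplified pinhole approximation (the real system maps rays through the conic mirrors, so this only illustrates the principle): because the azimuth is invariant, the two images of a point lie on one radial line, and only the radial distances before and after the forward motion differ.

```python
def depth_from_forward_motion(r1_px, r2_px, advance_mm):
    # Pinhole relation r = f*R/Z: advancing the sensor by `advance_mm` toward
    # the object scales the radial image distance, giving
    #   Z = advance * r2 / (r2 - r1),
    # the depth measured from the first sensing position.
    return advance_mm * r2_px / (r2_px - r1_px)

# A point seen at radius 100 px, then 110 px after a 20-mm forward motion:
print(depth_from_forward_motion(100.0, 110.0, 20.0))   # 220.0 mm
```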



Fig. 10. Cross-sectional diagram of double conic mirrors.

Fig. 9. Projection example of a rectangle: (a) an object placed in the field of view and (b) an image of (a) projected onto an image plane through multiple mirrors.

C. Simulation

In this section, we executed a simulation to validate the principle of the sensing system by using the image transformation model and the designed values in Table I. Fig. 8(a) shows a cube of 20 × 20 × 30 mm and a hole of 20 × 20 mm perpendicularly placed at a height under the inside-conic mirror corresponding to the reference measurement distance defined in Section II-D-1. The hole is identical to the bottom shape of the cube because there is no initial misalignment. Fig. 8(b) shows a projection of the resultant relative shape onto the image plane. This illustrates that the 2π coaxial image is attained as the basic concept of the sensing system, without self-occlusion. The horizontal top plate, closer to the sensing system than the bottom plate, however, shows a pin-cushion distortion. In addition, the result also shows that a vertical line with the same azimuth is not distorted. Fig. 8(c) shows the cube tilted relative to the vertical vertex axis of the inside-conic mirror and placed with an initial misalignment relative to the hole (yawing 2° and pitching 5°, with translations of x = 2 mm, y = 1 mm, and z = 2 mm, as given in Fig. 8). Fig. 8(d) is a projection result of the resultant relative shape between the hole and the tilted cube onto the image plane. As expected, we consequently obtain a 2π coaxial misalignment image between the cube and the hole without self-occlusion.

In the long run, this 2π coaxial image makes it possible to simply estimate the misalignment between mating parts by two-dimensional pattern-matching techniques [26], [27], as described in Section III-B. However, distortion is found in the results. We have carried out another simulation in order to investigate the top-plate distortion of the peg in detail, as shown in Fig. 8. Fig. 9(a) shows a planar rectangle with a height changed at intervals of 2 mm about a nominal height. Fig. 9(b) is the projection result of the rectangle onto the image plane. As shown in Fig. 9(b), when a rectangle is farther from the sensing system than the reference rectangle in Fig. 9(a), it is distorted with a barrel shape. However, as the object approaches the sensing system, it shows no distortion at the height corresponding to the reference measurement height. When the rectangle gets closer still, the distortion is transformed from the barrel distortion to the pin-cushion distortion for each horizontal line.

It seems that the distortion features depend on the measurement height. If so, the distortion may provide a cue as to how to calculate the depth of an object. Therefore, it is first necessary to analyze the distortion features. As an aside, the measurement height varies with the sizes of the handled parts, such as their height and width, thus yielding a defocusing effect. In addition to the distortion, therefore, the depth of field of the sensing system should also be analyzed.

D. Distortion

In order to investigate the distortion features of the sensing system consisting of a pair of an inside-conic and an outside-conic mirror, a projective relation between the pair of conic mirrors should be derived. For simplicity, the sensing system configuration can be reconstructed, without loss of generality, as described in Section II-C. Fig. 10 shows the reconstructed diagram of the sensing system at a given azimuth angle. Suppose that an object is located at the measurement height, and a ray, starting at a point on the object, is projected onto a point in the image plane of the camera through the pair of conic mirrors. Then, the mapping relation between these points is derived as follows (see [28]):

(44)

This relation is called the double conic projection, and all its coefficients are determined from the geometry of the sensing system, as shown in Appendix I. The double conic projection of (44) is subject to two essential projective features [28].

Fig. 11. Double conic projection, r = c1 R + c0. (a) Variation of the scale factor c1. (b) Variation of the distortion factor c0. (c) Variation of the no-distortion height Z0 with image position r. (d) The approximation error of the double conic projection by using the Taylor series.

Feature 1: When a point on an object is projected onto the image plane through the optical system, the projected point is distorted uniquely with the measurement distance variation. In other words, a point at the reference measurement height is not distorted; however, a point closer than the reference height is distorted with a pin-cushion shape, and a point farther than the reference height is distorted with a barrel shape.

Feature 2: The distortion effect gets larger as an object more closely approaches the optical axis.

In order to explain these features, it is necessary to extract the linear term and the distortion term from the projective relation of (44). Hence, using the Taylor series [35] to do so, the double conic projection is rewritten as (see Appendix II)

(45)

where the two coefficients are called the distortion factor and the scale factor, respectively. They are defined in (46), shown at the bottom of the page, and in

(47)

The first distortion feature is generated by this distortion term. As shown in Fig. 11(a), the scale factor is positive and depends only upon the measurement height. In addition to the scale factor, the magnitude and the sign of the distortion factor vary with the projected position in the image plane, as well as with the height. Its variation relative to the height is more dominant than that relative to the projected position, the effect of which is negligible, as shown in Fig. 11(b). According to its sign, which varies dominantly with the height,

(46)


the distortion is classified into three types: no distortion at the reference height, barrel distortion for heights beyond the reference, and pin-cushion distortion for heights within it, where the reference measurement distance is the height showing no distortion. It is calculated from the condition of zero distortion and gives (48), shown at the bottom of the page, where all the coefficients are shown in Appendix I. The reference measurement height changes slightly with the projected position, but this effect is negligible, as shown in Fig. 11(c). In the end, since this feature shows that the image distortion varies dominantly with the height, it gives a cue capable of determining the measurement height from the optical center of the sensing system to the object point.

In addition, the second distortion feature is produced by the relative magnitude between the linear term and the distortion term, which depends on the range of the horizontal distance. According to the range, therefore, the projective relation of (45) is approximated as

(49)
(50)

near to and far from the optical axis, respectively.

In other words, when an object closely approaches the optical axis, the distortion term is dominant, and when an object gets far from the optical axis, the distortion term is negligible. Therefore, the optical system shows that the distortion effect gets larger as an object more closely approaches the optical axis.
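These two features can be condensed into a small classifier. The sketch below applies the sign rule for the distortion factor in (45); the linear dependence of the distortion factor on the height is an assumption standing in for (46), with the zero crossing placed at the reference height Z0 = 238 mm used elsewhere in the paper.

```python
def distortion_type(z_mm, z0_mm=238.0, slope=1.0e-3):
    # Assumed monotone stand-in for the distortion factor c0(Z) of (46):
    # zero at the reference height Z0, negative below it, positive above it.
    c0 = slope * (z_mm - z0_mm)
    if abs(c0) < 1e-12:
        return "no distortion"
    return "barrel" if c0 > 0 else "pin-cushion"

for z in (200.0, 238.0, 280.0):
    print(z, distortion_type(z))
# closer than Z0 -> pin-cushion, at Z0 -> none, farther than Z0 -> barrel
```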

E. Depth of Field

In an image sensing system, the depth is changed by variations in intrinsic and extrinsic factors, such as the incident angle of a ray to the object plane and the sizes of an object, respectively. In other words, when a ray, starting from the optical center with a departure angle, strikes a horizontal object plane through the multiple mirrors, the depth of the object plane is changed due to the incident angle of the ray relative to the object plane, as shown in Fig. 12(a). In the proposed sensing system, a nonzero incident angle with respect to the horizontal plane is generated due to the changed optical path between the conic mirrors. The depth variation in this system is then calculated by

(51)

Fig. 12(b) shows the depth variation in the object plane placed horizontally with respect to the sensing system; the maximum depth variation is about 7 mm over the measured range. This depth variation should not exceed the allowable depth of field of the camera. The theoretical depth of field [22] is defined as

(52)

where the parameters are the reference measurement distance, the focal length of the camera, the aperture diameter, and the confusion or blurring circle diameter. Fig. 12(c) represents the relationship among the aperture diameter, the depth of field, and the measurement distance for the sensing system, from (52). It shows that the depth of field is dominated by the aperture diameter more than by the measurement distance. Computing the depth of field from (52), in order that a point apart from the optical center of the camera is accurately focused on a pixel for a 2/3-in CCD camera at the given aperture diameter and focal length, it is given by about 2.3 mm. Incidentally, when the aperture diameter is reduced to 1.8 mm, it becomes about 35 mm, as shown in Fig. 12(d). Since this computed value is more than the maximum depth variation in the reference FOV, the intrinsic factor is, therefore, not a problem in the system.

In general, the vertical and horizontal sizes of a peg defined with respect to a sensor vary in the assembly process. Since their vertical size variation yields a measurement distance variation between the peg and the sensor, a blurred image is occasionally obtained. Therefore, this vertical size variation, the extrinsic factor, should be overcome for a visual sensing system with a lens of fixed focal length. Typically, the allowable vertical size variation of a peg in a visual sensing system is derived from the depth of field calculated with respect to the direction of the camera's optical axis. If the vertical size of a peg varies along the direction of the optical axis, the allowable vertical size variation of the peg is the same as the magnitude of the depth of field. However, if the optical axis is slanted due to mirrors in a sensing system, then the allowable vertical size variation of the peg differs from the depth of field. Let the angle difference between the vertical direction of the peg and the optical axis be as shown in Fig. 12(a). Then, the allowable vertical size variation of the peg is calculated by

(53)

In the sensing system, the vertical distance variation is about 34 mm with respect to the calculated depth of field of 35 mm and the given slope angle.
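A hedged numerical sketch in the spirit of (52), using the standard thin-lens depth-of-field approximation; the blur-circle and focal-length values are assumptions (roughly one pixel on a 2/3-in CCD and a 50-mm lens), so the absolute numbers differ somewhat from the 2.3-mm and 35-mm values quoted above, but the dominance of the aperture diameter shown in Fig. 12(c) and (d) is reproduced.

```python
def depth_of_field(s_mm, f_mm, aperture_mm, blur_mm):
    # Thin-lens approximation: total depth of field for focus distance s,
    # focal length f, aperture diameter A (f-number N = f/A), blur circle c:
    #   DOF ~ 2*N*c*s^2 / f^2
    n = f_mm / aperture_mm
    return 2.0 * n * blur_mm * s_mm**2 / f_mm**2

for a_mm in (16.0, 5.0, 1.8):   # aperture diameters
    dof = depth_of_field(s_mm=238.0, f_mm=50.0, aperture_mm=a_mm, blur_mm=0.011)
    print(f"aperture {a_mm:4.1f} mm -> depth of field ~{dof:5.1f} mm")
```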



Fig. 12. Variation of depth and working distance in the system's FOV. (a) Distance variation with rays to the object plane. (b) Working distance variation in the FOV. (c) Depth variation depending on the size of the aperture. (d) Rescaled diagram of (c).

Fig. 13. The prototype of the proposed sensing system.


Fig. 14. Comparison between a rectangular shape of 30 × 30 mm² placed at Z0 = 238 mm and the inversely projected object of the captured image by using an inverse transformation model: (a) captured image, (b) comparison between the inversely projected object of (a) and a rectangular object, and (c) error analysis of (b).

IV. FEASIBILITY EXPERIMENTS

A. Implementation

Fig. 13 shows a prototype of the sensing system. The prototype has four essential components: a camera, an illuminator, an object-handling gripper, and an optical system that includes a pair of conic mirrors and a pair of plane mirrors. The sensing system uses a circular acrylic plate for fixing the gripper and the outside-conic mirror. In other words, the gripper is fixed at the center under the acrylic plate, and the small outside-conic mirror is placed at the center on the

acrylic plate; the acrylic plate is then attached coaxially to the bottom of the inside-conic mirror. As a further consideration, the acrylic plate should be transparent, because the use of an opaque plate incurs occlusion. The sensing system also uses a camera with a small aperture size of about 1.8 mm in order to reduce the blurring effect on the input image caused by the spherical aberration of the conic mirrors [21]. However, this yields a lack of brightness in the input image, and a picture of an object cannot be taken without strong illumination. Hence, this system is designed to include an illuminator consisting of a ring-type LED array, as well as four halogen lamps at intervals of 90°. We also used mirrors specially manufactured from aluminum in order to avoid estimation errors of the reflected position on a mirror. Using a cheap glass mirror that reflects on its rear surface yields an estimation error of the reflected position proportional to the mirror's thickness, because of the refraction effect [21], and the error accumulates over multiple mirrors.

An F/T sensing system can be utilized effectively for small misalignments in the case of assembling parts with simple shapes. So, an F/T sensor is attached to the top part of the sensing system for adapting to small misalignments beyond the visual sensing resolution. But the visual sensing system plays a more important role in the hole search stage than the F/T sensor,

since it can detect a large misalignment between mating parts without contact. From this point of view, the proposed sensing system has a smaller assembly cost than the F/T sensing system for detecting and compensating large misalignments between mating parts with complicated shapes. When the sensing system crashes, the gripper is the first part to make contact. In order to avoid a crash of the system, the gripper is equipped with a spring capable of shrinking about 10 mm along the axis and with two limit sensors for detecting shrinkage of the spring.

The constitutive parameters of the sensing system in the model of (41) are carefully estimated through the mapping relation between the cross points of a calibration grid pattern and their corresponding image points. Using a steepest descent method as the searching algorithm [29], the calibration needs about 50 iterations to obtain a converged result within about a 1-pixel standard deviation error with respect to a grid pattern with 40 cross points.
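The calibration loop can be sketched generically as follows. The `project` function is a hypothetical stand-in for the full mirror-camera model of (41) (here, only pinhole intrinsics), and the step size and iteration count are illustrative; the paper reports about 50 iterations for its own model and data.

```python
import numpy as np

def project(params, pts):
    # Hypothetical stand-in for the full model of (41): pinhole intrinsics only.
    fx, fy, cx, cy = params
    return np.stack([fx * pts[:, 0] / pts[:, 2] + cx,
                     fy * pts[:, 1] / pts[:, 2] + cy], axis=1)

def calibrate(params, pts, observed, lr=5e-3, iters=500, eps=1e-4):
    # Steepest descent on the summed squared reprojection error, with a
    # finite-difference gradient; illustrative step size and iteration count.
    params = np.asarray(params, dtype=float)
    for _ in range(iters):
        base = np.sum((project(params, pts) - observed) ** 2)
        grad = np.zeros_like(params)
        for i in range(params.size):
            p = params.copy()
            p[i] += eps
            grad[i] = (np.sum((project(p, pts) - observed) ** 2) - base) / eps
        params -= lr * grad
    return params

# Synthetic grid of cross points and their images under "true" parameters.
pts = np.array([[x, y, 500.0] for x in range(-200, 201, 100)
                for y in range(-200, 201, 100)], dtype=float)
observed = project(np.array([2000.0, 2000.0, 384.0, 287.0]), pts)
print(calibrate([1990.0, 1990.0, 380.0, 284.0], pts, observed))
```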


B. Experiments

1) Misalignment Estimation: In this section, some experiments to confirm the principle and show the feasibility of the proposed system are executed. Fig. 14 shows an experimental result validating the mapping relation between the object space and the image plane of the camera when an object is projected onto the image plane through the proposed sensing system. Fig. 14(a) is the projected shape of a planar rectangle of 30 × 30 mm placed at the measurement height Z0 = 238 mm from the optical center, through the sensing system. Fig. 14(b) is a comparison between the given rectangle and the inversely projected shape obtained by using the inverse projection of (41). Fig. 14(c) is an error analysis with respect to the result of Fig. 14(b). The result shows that the average error is within 2 pixels and the maximum error is about 4 pixels. Therefore, the inverse projection model of (41) can be used for mapping an object image onto the object space.

Fig. 15 shows the 2π coaxial misalignment images between mating parts, produced by geometrical uncertainties after approaching a part toward its counterpart. Fig. 15(a) is the experimental setup. Fig. 15(b) is a misalignment image of a planar object with a complicated shape. Fig. 15(c) is a misalignment image of a three-dimensional parallelepiped peg and a hole. They show that, as mentioned in Section II-A, the 2π coaxial misalignment images are immediately obtained without self-occlusion, regardless of the dimension and shape complexity of an object, and the images of three-dimensional objects are similar to those of planar objects.

Fig. 16 is an estimation result of the aligning errors between the disc and the pattern shown in Fig. 15(b), obtained by using the two-stage approach described in Section III-B. In this experiment, the edges were first extracted by using a Sobel operator [30]; then, the misalignment on this image was detected through a matching technique, such as a hypothesis method [25], applied to the thinned edge image (see the sketch below). The detected misalignment is inversely projected onto the object space, and the misalignments between the aligning objects are then calculated by using (42). Fig. 16(a) is a comparison between a given misalignment and the estimated misalignment. Fig. 16(b) and (c) shows the estimation behavior with respect to changes of the azimuth angle and of the measurement height. The results show that the estimates are hardly changed by the azimuth angle and the measurement height variations, respectively. However, the estimation error is within about 2 pixels (1 pixel = 0.088 mm by design). It is mainly produced by many causes, such as model uncertainty, algorithm accuracy, and environmental noise. In order to obtain more accurate results, a clear and robust image and a fast and accurate algorithm are needed above all.

In the long run, the sensing system has shown that it can immediately obtain a 2π coaxial image without self-occlusion, regardless of object dimensions; thus, the misalignment estimation on the image plane becomes a pattern-matching problem, as for a planar object. Therefore, the sensing system is suitable for estimating the misalignment between mating parts such as three-dimensional objects, as well as planar objects with complicated shapes.
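The first stage of this estimation can be sketched with standard tools; the sketch below uses OpenCV's Sobel operator and normalized cross-correlation template matching as stand-ins for the edge extraction [30] and the hypothesis matching [25] actually used, and the synthetic circle images are placeholders for the captured 2π images. The matched pixel offset would then be inversely projected through (42) and (43).

```python
import cv2
import numpy as np

# Synthetic stand-ins: a "hole" pattern and a scene containing it at an offset.
tpl = np.zeros((60, 60), np.uint8)
cv2.circle(tpl, (30, 30), 20, 255, -1)
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (117, 94), 20, 255, -1)       # true offset of the pattern

def edges(gray):
    # Stage 1a: Sobel gradient magnitude [30], thresholded to an edge map.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return np.uint8(mag > 0.25 * mag.max()) * 255

# Stage 1b: template matching on the edge maps; the peak location gives the
# pixel misalignment, which is then inversely projected onto the object space.
score = cv2.matchTemplate(edges(img), edges(tpl), cv2.TM_CCORR_NORMED)
_, best, _, (dx, dy) = cv2.minMaxLoc(score)
print("pixel offset of pattern:", dx, dy, "match score:", round(float(best), 3))
```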


Fig. 15. The 2π coaxial misalignment images of a two-dimensional object with a complicated shape and of a three-dimensional object. (a) The experimental setup. (b) A misalignment image of a planar object. (c) A misalignment image of a three-dimensional parallelepiped object.

2) Omnidirectional Image: In addition to misalignment detection, the sensing system provides another sensing capability: omnidirectional image acquisition. Typically, the procedure of object recognition consists of the following four stages: 1) obtaining a scene including an object; 2) extracting features and describing the object; 3) making models; and 4) matching the description of the object with that of the model. From the viewpoint of a detector, the most important aspect of the proposed sensing system is its capability of sensing a scene. Fig. 17 shows an experimental sensing example of a cylindrical object, with characters on its side surface, used to obtain its omnidirectional image. Fig. 18 also shows an experimental sensing example of a cube, with three holes on its side surfaces, used to acquire its omnidirectional image. As shown in Figs. 17(b) and 18(b), the sensing system can immediately obtain an omnidirectional image containing not only the object shape, but also the figures, such as the characters and holes, on the side surfaces of an object, without self-occlusion. Moreover, unlike other sensing systems, which detect only local scenes, it needs no image composition in order to obtain an omnidirectional image of the object shape.

In addition to the omnidirectional sensing capability, the capability of obtaining three-dimensional information, particularly depth information, is needed in order to estimate the dimensions of the cube and of the holes on its side surfaces from their omnidirectional image. The depths of the cube and the holes are obtained by the forward motion stereo technique [32], as described in Section III-B, and their estimation accuracy depends on the environmental conditions, such as uniform illumination and keen focusing. In the end, the sensing system can immediately acquire not only a 2π coaxial misalignment image between mating parts, but also an omnidirectional image that needs no image composition to detect the 2π shape of an object.


Fig. 16. The experiment for misalignment estimation. (a) A comparison between the given misalignment and the estimated result. (b) Effect of azimuth angle change. (c) Effect of measurement height change.


Fig. 17. A sensing example of a cylindrical object with characters on its side. (a) A cylindrical-type object with characters on its side surface. (b) The detected omnidirectional image.

V. CONCLUSIONS

We have proposed an omnidirectional image sensing system for assembling parts with arbitrary cross-sectional shapes and described an analytic design method. We also developed the image transformation model for the multiple mirrors and investigated the nature of the system by


Fig. 18. A sensing example of a cube with holes on its side surface. (a) A cube with holes on its side surface. (b) The detected omnidirectional image.

means of simulations. In addition, we showed experimentally its feasibility as a sensing system for assembly.

This system, a conic type, has been developed particularly to detect the 2π misalignment between mating parts with asymmetrically complicated shapes without self-occlusion. This objective is implemented by using a pair of conic mirrors consisting of an inside-conic mirror and an outside-conic mirror. The mirrors are made of a metal such as aluminum, which reflects an incident ray on its front surface, to avoid the estimation


error incurred by the refraction effect [21], as described in Section IV-A. The fabrication cost of the pyramidic type is higher than that of the conic type, because it needs more fabricating time for its complicated configuration and additional indexing devices for the orthogonal arrangement of the plane mirrors.

By using this configuration, the system can immediately acquire not only an omnidirectional image including enough information to quickly recognize an object, but also a 2π coaxial misalignment image along the mating boundary interface between mating parts, without any self-occlusion. Therefore, the misalignment can be determined by utilizing pattern-matching techniques without any image composition, no matter how complicated the shapes may be.

It can also estimate the three-dimensional shapes of assembled parts by using two stereo images obtained through a forward motion stereo technique, as described in Section III-B. Although it requires two images in order to reconstruct the three-dimensional shape of an object with unknown depth a priori, it has high resolution because it utilizes the full image, and it no longer needs time-consuming motion for altering viewing angles, since the omnidirectional shape image of an object is obtained in one shot. Moreover, the corresponding points can be simply detected on a radial line with the same azimuth angle, as described in Sections II-C and III-B. From this viewpoint, this system, a conic type, is different from the pyramidic type [34], orthogonally arranged with four plane mirrors as mentioned in Section I, from the viewpoints of performance, usefulness, and cost. Therefore, the conic type is suitable for detecting misalignment between mating parts with asymmetrical complicated shapes. In addition, this system has a feature that distorts a horizontal line from pin cushion to barrel, nearly linearly with the vertical measurement distance, providing a cue to depth from distortion. Research now needs to be carried out to develop a method for depth estimation from distortion.


APPENDIX I

As shown in [28], all the coefficients of the double conic projection of (44) are denoted by the constitutive parameters of the sensing system shown in Table I; the coefficients are defined in terms of those parameters.

APPENDIX II

In order to describe the distortion principle and its features, it is necessary to separate the double conic projection of (44) into the linear term and the distortion term. This is possible by using the Taylor series. When the second order is taken into consideration, the double conic projection of (44) can be written as

(54)

where the coefficients are also calculated from the constitutive parameters, and the remainder denotes the higher order terms. Calculating the remainder with respect to the sensing system, the constitutive parameters of which are shown in Table I,


it is within a 1-pixel error, as shown in Fig. 11(d). The approximation of the relation (54) is then rewritten as

(55)

where the coefficients are the distortion term and the linear term, respectively. The relation of (55) is inversely given as follows:

(56)

where the inverse coefficients are the distortion factor and the scale factor, respectively; they are represented in terms of the coefficients of (55).

REFERENCES

[1] J. L. Nevins and D. E. Whitney, "Assembly research," Automatica, vol. 16, no. 6, pp. 595–613, 1980.
[2] T. Lozano-Perez, M. T. Mason, and R. H. Taylor, "Automatic synthesis of fine-motion strategies for robots," Int. J. Robot. Res., vol. 3, no. 1, pp. 3–24, 1984.
[3] D. E. Whitney and O. L. Gilbert, "Representation of geometric variations using matrix transformations for statistical tolerance analysis in assemblies," in Proc. IEEE Int. Conf. Robotics and Automation, 1993, pp. 314–321.
[4] G. C. Burdea and H. J. Wolfson, "Solving jigsaw puzzles by a robot," IEEE Trans. Robot. Automat., vol. 5, pp. 752–763, Dec. 1989.
[5] H. S. Cho, H. J. Warnecke, and D. G. Gwon, "Robotic assembly: A synthesizing overview," Robotica, vol. 5, pp. 153–165, 1987.
[6] N. Takanashi, H. Ikeda, T. Horiguchi, and H. Fukuchi, "Hierarchical robot sensors application in assembly tasks," in Proc. 15th ISIR, 1985, pp. 829–836.
[7] H. Asada and S. Hirai, "Toward a symbolic-level feedback: Recognition of assembly process states," in Proc. 5th Int. Symp. Robotics Research, 1989, pp. 341–346.
[8] C. S. G. Lee and R. H. Smith, "Force feedback control in insertion process using pattern analysis techniques," in Proc. Amer. Control Conf., 1984, pp. 39–44.
[9] Y. K. Park and H. S. Cho, "A fuzzy rule-based assembly algorithm for precise parts mating," Mechatronics, vol. 3, no. 4, pp. 433–450, 1993.
[10] T. Arai, "Analysis of part insertion with complicated shapes," Ann. CIRP, vol. 38, no. 1, pp. 17–20, 1989.
[11] R. H. Sturges and S. Laowattana, "Passive assembly of nonaxisymmetric rigid parts," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1994, vol. 2, pp. 1218–1225.
[12] E. S. Kang and H. S. Cho, "Vibratory assembly of prismatic parts using neural network based positioning error estimation," Robotica, vol. 13, pp. 185–193, 1995.
[13] J. Volmer, P. Jocob, A. Schwarz, and H. Zachan, "Positionierung von Montagegreifern und Werkzeugen durch Industrieroboter," Fert.tech. Betr., vol. 32, pp. 742–747, 1982.
[14] Y. Shirai and H. Inoue, "Guiding a robot by visual feedback in assembly tasks," Pattern Recognit., vol. 5, pp. 99–108, 1973.
[15] J. Ishikawa, K. Kosuge, and K. Furuta, "Intelligent control of assembling robot using vision sensor," in Proc. IEEE Int. Conf. Robotics and Automation, 1990, pp. 1904–1909.
[16] J. J. Hill, D. C. Burgess, and A. Pugh, "The vision-guided assembly of high-power semiconductor diodes," in Proc. 14th Int. Symp. Industrial Robots, Gothenburg, Sweden, 1984, pp. 449–460.
[17] J. Miura and K. Ikeuchi, "Generating visual sensing strategies in assembly tasks," in Proc. IEEE Int. Conf. Robotics and Automation, 1995, pp. 1912–1918.
[18] S. A. Hutchinson and A. C. Kak, "Planning sensing strategies in a robot workcell with multisensor capabilities," IEEE Trans. Robot. Automat., vol. 5, pp. 765–783, Dec. 1989.
[19] M. Inaba, T. Hara, and H. Inoue, "A stereo viewer based on a single camera with view-control mechanisms," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1993, vol. 3, pp. 1857–1864.

[20] W. S. Kim, H. S. Cho, and S. Kim, "A new omni-directional image sensing system for assembly," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1996, vol. 2, pp. 611–617.
[21] E. Hecht, Optics. Reading, MA: Addison-Wesley, 1987.
[22] E. Krotkov, "Focusing," Int. J. Comput. Vis., vol. 1, pp. 223–237, 1987.
[23] J. J. Craig, Introduction to Robotics. Reading, MA: Addison-Wesley, 1986, pp. 15–53.
[24] R. Kingslake, Applied Optics and Optical Engineering: Optical Components, vol. 3. New York: Academic, 1965, pp. 269–308.
[25] R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Trans. Robot. Automat., vol. RA-3, pp. 323–344, Aug. 1987.
[26] O. Faugeras, Three-Dimensional Computer Vision. Cambridge, MA: MIT Press, 1993.
[27] T. H. Reiss, Recognizing Planar Objects Using Invariant Image Features. New York: Springer-Verlag, 1993.
[28] W. S. Kim, H. S. Cho, and S. Kim, "Distortion analysis in an omnidirectional image sensing system for assembly," in Proc. IEEE Int. Symp. Assembly and Task Planning, 1997, pp. 257–262.
[29] J. M. Zurada, Introduction to Artificial Neural Systems. New York: West, 1992.
[30] D. H. Ballard and C. M. Brown, Computer Vision. Englewood Cliffs, NJ: Prentice-Hall, 1982.
[31] P. J. Besl and R. C. Jain, "Three-dimensional object recognition," Computing Surveys, vol. 17, no. 1, pp. 75–145, Mar. 1985.
[32] R. Nevatia, "Depth measurement from motion stereo," Comput. Graph. Image Processing, vol. 5, pp. 203–214, 1976.
[33] Y. Xiong and S. A. Shafer, "Depth from focusing and defocusing," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1993, pp. 68–73.
[34] J. Y. Kim, H. S. Cho, and S. Kim, "A visual sensing system with multiple views for flexible parts assembly," in Proc. 8th Int. Conf. Advanced Robotics, 1997, pp. 979–984.
[35] F. B. Hildebrand, Advanced Calculus for Applications. Englewood Cliffs, NJ: Prentice-Hall, 1976.

Wan Soo Kim was born in 1964. He received the B.S. degree in mechanical design engineering from Seoul National University, Seoul, Korea, and the M.S. degree in production engineering and the Ph.D. degree in automation and design engineering from Korea Advanced Institute of Science and Technology, Taejon, Korea. He is currently a Senior Research Engineer with the FA Research Institute, Production Engineering Center, Samsung Electronics Company Ltd., KyungKi-Do, Korea, where he is working on automated assembly, robot system integration, and industrial vision. His research interests include intelligent assembly, robotics, sensors, machine vision, and intelligent control.

Hyung Suck Cho (M’94) was born in 1944. He received the B.S. degree in industrial education from Seoul National University, Seoul, Korea, the M.S. degree in mechanical engineering from Northwestern University, Evanston, IL, and the Ph.D. degree in mechanical engineering from the University of California at Berkeley in 1971, 1973, and 1977, respectively. He has been a Professor in the Department of Production Engineering since 1978 and in the Department of Mechanical Engineering since 1995 at the Korea Advanced Institute of Science and Technology, Taejon, Korea. From 1990 to 1993, he served as Vice Chairman of the Manufacturing Technology Committee, International Federation of Automatic Control (IFAC). Since 1993, he has been a Chaired Professor of Pohang Steel Company (POSCO), Korea. Currently, he is serving as a Member of the Editorial Boards of Robotica, IFAC Control Engineering Practice, and the Journal of Advanced Robotics. His research interests include robotics and automation, intelligent control applications, manufacturing process control, machine vision, and robotic assembly. In these research areas, he has published over 250 papers in international journals and conference proceedings. Dr. Cho is a member of the American Society of Mechanical Engineers, the Society of Manufacturing Engineers, and the Korean Society of Mechanical Engineers.