Indian Journal of Marine Sciences Vol. 38(3), September 2009, pp. 324-331
Vision based distance measurement system using single laser pointer design for underwater vehicle

Muljowidodo K1, Mochammad A Rasyid2, SaptoAdi N2 & Agus Budiyono3*

1 Mechanical Engineering Program, Mechanical Engineering and Aeronautics Faculty, Institut Teknologi Bandung (ITB), Ganesha 10, Bandung, West Java, Indonesia [E-mail: [email protected]]

2 Center for Unmanned System Studies (CentrUMS), Institut Teknologi Bandung (ITB), Ganesha 10, Bandung, West Java, Indonesia [E-mail: [email protected], [email protected]]

3 Corresponding Author, Department of Aerospace Information, Smart Robot Center, Konkuk University, 1 Hwayang-Dong, Seoul 143-701, Korea [E-mail: [email protected]]

Received 26 July 2009, revised 11 September 2009

As part of continuous research and development of underwater robotics technology at ITB, a vision-based distance measurement system for an Unmanned Underwater Vehicle (UUV) has been designed. The proposed system can be used to predict the horizontal distance between the vehicle and a wall in front of it and, at the same time, the vertical distance between the vehicle and the surface below it. A camera and a single laser pointer are used to obtain the data needed by our algorithm. The vision-based navigation consists of two main processes: detection of the laser spot using image processing, and calculation of the distance based on the laser spot position in the image.

[Keywords: Vision Based, Image Processing, Laser Spot]
Introduction

Design and development of several UUV prototypes have been conducted elsewhere1. Vision-based navigation has been investigated, and an approach using a single laser pointer is presented in this paper. A UUV is usually equipped with a camera as the "eye" of the operator; supported by computer vision, the same camera can also provide important quantitative information. This paper proposes the design of a system and an algorithm for calculating the horizontal and vertical distances between an object and the camera. Besides the camera, a laser pointer is used in the setup, and it is assumed that a standard computer is available for image processing and data calculation. A typical underwater vehicle platform fitted with the camera and laser pointer is depicted in Fig. 1.

There are two major tasks in designing this distance measurement system. The first is obtaining a real-time image processing algorithm for laser spot/mark detection. The second is finding a scaling factor or formula that converts the object position on the image (pixels) into a real-world position (meters). The paper covers the relevant aspects of the image processing requirements, the image processing algorithm, the camera and laser pointer mounting, the detailed distance calculation, and the experimental results.
*Author for Correspondence
Fig. 1—Underwater vehicle platform with camera and laser pointer
Materials and Methods

Image processing requirement
Our system relies on the appearance of an object in the image captured by the video camera; from the detected object, the relative distance between the object and the camera can be measured. For a real-time implementation, the images must be processed at a rate of at least 25 frames per second. Many well-known algorithms, such as MSER2 and SIFT3, can be used for object detection and recognition, but their main problem is processing time: for a complex image they take from a few hundred milliseconds up to a few seconds per frame, which makes them difficult to use as part of a real-time control system. Considering this limitation, it is necessary to keep the image processing algorithm as simple as possible. It is assumed that a laser pointer produces a salient spot; this saliency simplifies the detection and recognition task and reduces processing time substantially. In the proposed technique, a single laser pointer is used instead of the two lasers proposed elsewhere4.

Image processing algorithm
A red laser pointer is used in the experiment. The object to detect is the light spot of the laser, which is a relatively simple object, so no complex feature extraction algorithm is needed: the main filtering task in our image processing algorithm is red segmentation (separation), after which a simple object-center-finding step gives the position of the detected object. Algorithms for color segmentation or separation have been introduced in many papers5; the cited study also presents an implementation for traffic sign recognition.

In the past, our red segmentation was based on the HSV (Hue, Saturation, Value) color space, a standard method for color segmentation. In this space a color can be represented by the hue component alone, and hue is largely invariant to variations in lighting, which is its most important property. We used only the hue component, whereas others use hue together with saturation. Red occupies a certain range of values in the hue space, so by applying a simple threshold, red can be separated from the other colors. The range should be defined so that the intended red object is thresholded as distinctly as possible. The main drawback of this algorithm is that a good range of red values must be set up, and this is not easy: a range that works in some situations may fail in others.

The algorithm presented by Flegh et al.5 offers a solution to this dynamic behavior. It is called a dynamic threshold algorithm: it takes into account the variation of the global color of the image, and the range of red used for thresholding is influenced by this global value. In their experiment, red segmentation based on this algorithm yields good results; in our system, however, real-time processing could not be achieved with it.

We therefore propose a simple red segmentation with no value setting. Our algorithm converts the RGB color space into a single-component color space that represents the degree of red: the redder the color of a pixel in RGB space, the higher its degree of red. Let Ri, Gi and Bi be the red, green and blue values of a pixel, and let DRi be the degree of red of that pixel. For every pixel:

Ri = Ri - (Gi + Bi)
Gi = Ri - Gi
Bi = Ri - Bi
If (Ri
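The hue-based segmentation discussed above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hue range, the minimum-saturation guard against near-grey pixels, and the synthetic test frame are all our assumptions.

```python
import numpy as np

def rgb_to_hue(img):
    """Hue in degrees [0, 360) for an RGB image of floats in [0, 1] (H x W x 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    delta = mx - mn
    safe = np.where(delta > 0, delta, 1.0)   # avoid division by zero on grey pixels
    hue = np.zeros_like(mx)
    rmax = (delta > 0) & (mx == r)           # red is the dominant channel
    gmax = (delta > 0) & (mx == g) & ~rmax
    bmax = (delta > 0) & ~rmax & ~gmax
    hue[rmax] = (60.0 * ((g - b) / safe))[rmax] % 360.0
    hue[gmax] = (60.0 * ((b - r) / safe) + 120.0)[gmax]
    hue[bmax] = (60.0 * ((r - g) / safe) + 240.0)[bmax]
    return hue

def red_spot_centroid(img, lo=340.0, hi=20.0, min_sat=0.2):
    """Threshold hue around red (the range wraps through 0 deg) and return the
    centroid (row, col) of the mask, or None if nothing passes the threshold."""
    hue = rgb_to_hue(img)
    mask = (hue >= lo) | (hue <= hi)
    mask &= (img.max(axis=-1) - img.min(axis=-1)) > min_sat  # drop near-grey pixels
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 100x100 frame: dark background with a 5x5 pure-red "laser spot".
frame = np.zeros((100, 100, 3))
frame[38:43, 58:63, 0] = 1.0
print(red_spot_centroid(frame))   # -> (40.0, 60.0)
```

The sketch also shows the drawback noted in the text: the fixed range [lo, hi] must be tuned, and a range that isolates the spot in one scene may fail under different lighting.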
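The degree-of-red conversion above can be sketched as follows. The listing in our copy is truncated, so this is an illustration under our own assumptions: negative degree-of-red values are clamped to zero, and the spot center is taken as the centroid of pixels above a fixed fraction of the peak value; neither detail is stated in the original.

```python
import numpy as np

def degree_of_red(img):
    """Single-channel 'degree of red' per pixel, DR = R - (G + B).
    Clamping negatives to zero is our assumption (the original listing is cut off)."""
    rgb = img.astype(np.int32)                      # avoid uint8 wrap-around
    dr = rgb[..., 0] - (rgb[..., 1] + rgb[..., 2])
    return np.clip(dr, 0, None)

def spot_center(dr, frac=0.5):
    """Centroid (row, col) of pixels whose degree of red exceeds frac * peak;
    None when the frame contains no red at all."""
    peak = dr.max()
    if peak <= 0:
        return None
    ys, xs = np.nonzero(dr >= frac * peak)
    return float(ys.mean()), float(xs.mean())

# Grey background with a bright red 5x5 spot: the conversion suppresses the
# background (DR <= 0) and leaves only the laser mark, with no tuned red range.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[:, :] = (40, 40, 40)
frame[50:55, 80:85] = (250, 60, 60)
print(spot_center(degree_of_red(frame)))   # -> (52.0, 82.0)
```

Unlike the hue-threshold approach, no red range has to be set: any pixel whose red channel does not dominate its green and blue channels combined is driven to zero, which matches the "no value setting" goal stated in the text.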