ENVIRONMENT IDENTIFICATION TECHNIQUE USING HYPER OMNI-VISION AND IMAGE MAP

N. Murakami¹, A. Ito¹, Jeffrey D. Will², Michael Steffen², K. Inoue¹, K. Kita¹, S. Miyaura¹

¹ National Agricultural Research Center for Hokkaido Region (NARCH), Hitsujigaoka 1, Sapporo, Japan

² Electrical & Computer Engineering, Valparaiso University, 206 Gellerson Center, Valparaiso, IN 46383, USA

Abstract: This paper describes a software platform developed for an HST-drive, crawler-type robotic vehicle for agriculture. The platform provides operators with a field navigator based on Google map technology and with images of the area around the vehicle captured by hyper omni-vision. A mapping method has been developed to generate an undistorted flat-field image from the omni-directional image. Experiments showed that the method can generate undistorted images using parameters pre-computed from training images of flat fields. The maximum position error in the generated images is 0.6 m within a radius of 10 m from the camera.

Keywords: Agriculture, Computer vision, Autonomous mobile robot, Communication, Vehicle.

1. INTRODUCTION

The decreasing number of farmers is a significant issue facing Japanese agriculture. Many studies on the autonomy of agricultural machines have been carried out as a preliminary solution to this problem. However, most solutions in the agricultural arena are still far from commercialization. One of the main reasons is concern for safety: it is extremely challenging to prepare for unpredictable conditions during autonomous operation. Keeping a human in the loop appears to be indispensable, as humans are best able to monitor autonomous agricultural tools working in the field.

In previous research, a teleoperation technique augmented by remote machinery autonomy was developed for unmanned farming. It can help reduce the challenges presently faced by teleoperation. However, it is still necessary to provide more information about the current working area so that operators can grasp the situation of the vehicle for safe and efficient operation.

This work is supported by the JSPS Grants-in-Aid for Scientific Research program (C) 16580215.

In this paper, an improved user interface is described. It provides a real field image map navigator based on Google map technology and an omni-directional vision image viewer for observing the area around the vehicle. These functions help the operator recognize the working conditions. This study shows a proof of concept of the improved user interface for vehicle automation in agricultural environments.

2. TELEOPERATION PLATFORM

The system consists of a control unit, a CCD camera, a GPS receiver and actuators. To help reduce the initial cost, the unit, the sensors and the electric cylinders for positioning the levers are detachable. The system can be adapted, without special reconstruction, to commercial clutchless-transmission vehicles such as those with HST drives and two-lever operation. A typical example of an adapted system is the manure spreader shown in Fig. 1.

Many studies on teleoperation have been reported (Simmons, 1998; Oomichi, 2000). Some major challenges have been found to be caused by communication time delay and insufficient field information. Because of such restrictions, teleoperation tends to be conservative compared with in-cab operation. However, since agricultural operations contain many routine tasks, a layered architecture and task-level commands can be used to enhance system performance while minimizing stress and fatigue on

the operators (Murakami et al., 2003; Murakami et al., 2004).
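As a concrete illustration of the task-level command idea, the hypothetical message below shows what an operator station might send to the vehicle controller; the field names and task vocabulary are our own illustrative assumptions, not the authors' actual protocol.

```javascript
// Purely illustrative sketch of a task-level command, assuming a
// JSON-style message; fields and task names are hypothetical, not the
// authors' actual command set.
const taskCommand = {
  task: "spread_material",            // routine task delegated to the vehicle
  path: [                             // travel path set on the field map (WGS84)
    { lat: 43.0081, lng: 141.4102 },  // illustrative coordinates
    { lat: 43.0085, lng: 141.4110 },
  ],
  speed: 0.8,                         // target travel speed (m/s)
};

// The operator issues the task once and then supervises its execution,
// rather than steering continuously over a delayed link.
console.log(JSON.stringify(taskCommand));
```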



Fig. 1 The developed platform, based on a manure spreader, with omni-vision.

In addition to the above-mentioned functions, we improved the control software to display a real field image map and a bird's-eye view of the vehicle, as shown in Fig. 2.

Fig. 3 Linkage between the control software and the Google map web page.


Fig. 2 Software for teleoperation.

2.1 Field display using the Google map API

The previous system had a simple black-and-white raster map window for monitoring the current location of the vehicle and for setting the travel path, along which the robot can travel autonomously. A more precise field map is preferable for conveying the actual fields to operators. However, a high-resolution map is expensive to purchase and, like other GIS data, has to be updated regularly. Google Inc. has recently made available high-resolution satellite image maps, vector maps and an application programming interface (the Google Maps API). The API allows Google Maps to be embedded in one's own web pages with JavaScript and makes it possible to add overlays to the map. This means that a sophisticated robot monitoring and navigation interface can easily be developed with this API, and it remains available as long as the system is connected to the internet.

As shown in Fig. 3, a CGI (common gateway interface) program links the former Windows-based software platform and the web page, which shows the vehicle location from the acquired GPS position data. Using the satellite image map, operators can recognize the surroundings of the vehicle even when it works in unfamiliar fields. The developed interface is shown in Fig. 4.
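The sketch below illustrates the linkage of Fig. 3 in present-day Google Maps API terms; the CGI endpoint name, its response format and the coordinates are assumptions for illustration, and the original platform used the API of its time rather than this exact code.

```javascript
// Minimal sketch of the field navigator: a satellite map with a vehicle
// marker updated about once per second from a CGI program returning the
// latest GPS fix. The endpoint "/cgi-bin/gps.cgi", its "lat,lng"
// response format and the map center are illustrative assumptions.
let map, marker;

function initMap() {
  map = new google.maps.Map(document.getElementById("map"), {
    center: { lat: 43.0, lng: 141.4 },  // placeholder near Sapporo
    zoom: 18,
    mapTypeId: "satellite",             // high-resolution satellite imagery
  });
  marker = new google.maps.Marker({ map: map, title: "Vehicle" });
  setInterval(updateVehiclePosition, 1000);  // ~1 s updates (see Sec. 3.1)
}

async function updateVehiclePosition() {
  const text = await (await fetch("/cgi-bin/gps.cgi")).text();
  const [lat, lng] = text.trim().split(",").map(Number);
  marker.setPosition({ lat, lng });  // overlay the vehicle on the map
  map.setCenter({ lat, lng });       // keep the vehicle in view
}
```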

Fig. 4 The developed field map viewer using the Google map API.

2.2 Hyper omni-vision system

Many studies have been reported on robot navigation and GPS applications using omni-directional vision (Das et al., 2001; Kato et al., 2001). In order to obtain an accurate, real-time omni-directional view around the vehicle, an omni-directional camera (1/2-inch CCD) with a hyperboloidal mirror (diameter: 59 mm) was employed. This system provides a 360° field of view. The hyperboloidal mirror has a single optical center, and using this property, the distorted original image can be transferred to the ground plane or to any other plane. The relationship between the omni-directional camera image and the true image on a given plane is depicted in Fig. 5.

2.3 Computation of vision parameters

The vision parameters used to generate flat-ground images were estimated from actual field images taken at a NARCH field. The vision system was mounted at a height of 1.85 m. Markers were placed on the ground, and several images (480 × 360 pixels) containing them were captured. Assuming the vehicle is on flat ground, the mapping is given by the following equations. Equation (2) is implemented from the original projection equation (Yagi et al., 2001), to which we add an extrinsic parameter C_p intended to take unknown camera lens characteristics into consideration.


Fig. 5 The relationship between the hyper omni-vision image and the true 3-dimensional position.

The hyperboloidal mirror surface is expressed as

$$\frac{X^2 + Y^2}{a^2} - \frac{Z^2}{b^2} = -1, \qquad c = \sqrt{a^2 + b^2} \quad (1)$$

With the camera mounted at height H above the flat ground plane, the true distance R_p on the X–Y plane corresponding to an image point at radius r_p is

$$R_p = \frac{(c^2 - b^2)\, H\, C_p\, r_p}{(b^2 + c^2)\, f - 2bc\sqrt{(C_p r_p)^2 + f^2}} \quad (2)$$

where
a, b, c : mirror parameters
r_p = \sqrt{x^2 + y^2} : distance from the center of the image plane
R_p = \sqrt{X^2 + Y^2} : true distance on the X–Y plane
f : focal distance
C_p : extrinsic parameter for unknown lens characteristics

Since the azimuth is preserved by the radially symmetric mirror, the ground-plane coordinates follow as X = R_p x / r_p and Y = R_p y / r_p, with Z = −H.
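A minimal numerical sketch of equation (2), using the parameter values reported later in Table 1; the function names are ours, and this is an illustration of the mapping rather than the authors' implementation.

```javascript
// Ground-plane mapping of equation (2). Units: b, c, f and the image
// radius rp are in mm; H and the returned distance Rp are in meters.
const b = 15.40, c = 46.45, f = 1.95, Cp = 1.45;  // Table 1 values
const H = 1.85;                                   // camera height (m)

// True ground distance Rp for an image point at radius rp from the
// image center.
function groundDistance(rp) {
  const s = Math.sqrt((Cp * rp) ** 2 + f ** 2);
  return ((c * c - b * b) * H * Cp * rp) /
         ((b * b + c * c) * f - 2 * b * c * s);
}

// Ground-plane coordinates for an image point (x, y); the azimuth of
// the point is preserved by the radially symmetric mirror.
function toGroundPlane(x, y) {
  const rp = Math.hypot(x, y);
  const Rp = groundDistance(rp);
  return { X: (Rp * x) / rp, Y: (Rp * y) / rp, Z: -H };
}
```

Sweeping r_p from zero toward the radius at which the denominator vanishes traces distances from directly below the camera out toward the horizon, which is why errors grow with distance from the camera center.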

3. EXPERIMENTAL RESULTS AND DISCUSSION

3.1 Image map user interface based on Google map

The update timing of the web page was examined. Latitude and longitude data were sent to the web page successfully, and the vehicle position was updated approximately every second. No communication delay between the standalone program and the web page was observed at this updating frequency. However, the field navigator sometimes skipped position data. To optimize communication between the two, it may be necessary to apply an asynchronous data communication technology such as Ajax (Asynchronous JavaScript + XML).

3.2 Ground image mapping

The estimated parameters are shown in Table 1. The image mapping module was developed using these parameters.

Table 1 Estimated parameters.
  b (mm)   c (mm)   f (mm)   C_p
  15.40    46.45    1.95     1.45

The developed model and the estimated parameters were verified by the accuracy of the estimated positions of the markers. The position errors of the markers were 0.60 m, 0.63 m and 0.50 m at distances of 4 m, 6 m and 8 m from the camera center, respectively. This mapping method is effective for grasping the actual distance from the vehicle to other objects, such as obstacles, during teleoperation. However, if the method is to be applied to field crop mapping, it must be improved to obtain higher precision and resolution.
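A hedged sketch of one way the extrinsic parameter C_p could be fitted: take b, c and f from the mirror and lens specifications, then search for the C_p that minimizes the squared error between predicted and measured marker distances. The marker image radii below are placeholders, not the authors' measured data, and the brute-force search is our own illustrative choice of estimator.

```javascript
// Fit Cp by brute-force search over a plausible range, assuming b, c
// and f are known. Marker pairs are (image radius rp in mm, measured
// ground distance in m); the rp values here are hypothetical.
const b = 15.40, c = 46.45, f = 1.95, H = 1.85;

function predictRp(rp, Cp) {
  const s = Math.sqrt((Cp * rp) ** 2 + f ** 2);
  return ((c * c - b * b) * H * Cp * rp) /
         ((b * b + c * c) * f - 2 * b * c * s);
}

const markers = [[0.97, 4.0], [1.18, 6.0], [1.32, 8.0]];  // placeholders

let bestCp = 1.0, bestErr = Infinity;
for (let Cp = 0.5; Cp <= 2.5; Cp += 0.001) {
  let err = 0;
  for (const [rp, Rp] of markers) err += (predictRp(rp, Cp) - Rp) ** 2;
  if (err < bestErr) { bestErr = err; bestCp = Cp; }
}
console.log(`estimated Cp = ${bestCp.toFixed(2)}`);
```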

Fig. 6 The original omni-directional image.


Fig. 7 The mapped ground-plane image (markers at 4 m, 6 m and 8 m from the camera center).

4. CONCLUSION

A user interface for teleoperation of an agricultural vehicle was developed. It provides operators with a precise field map using Google map technology and with omni-directional images for observing the vehicle. These help operators recognize the current working conditions, such as obstacles around the robot and the amount of agricultural materials, such as fertilizer, to spread.

REFERENCES

Das, A. K., R. Fierro, et al. (2001). Real-time vision-based control of a nonholonomic mobile robot. In: Proc. IEEE Int. Conf. on Robotics and Automation, Seoul, Korea, Vol. 2, pp. 1714-1719.
Kato, K., H. Ishiguro and Matthew B. (2001). Town digitizing: recording of street views by using ODVS and GPS. IPSJ SIG-CVIM, Vol. 2001-125, pp. 111-118.
Murakami, N., K. Inoue, S. Miyaura and H. Kunioka (2003). Tele-operation system for agricultural machines by large coverage internet platform. In: Robomec'03, Hakodate, Japan, 1P1-2F-A4 (CD-ROM).
Murakami, N., K. Inoue, S. Miyaura and H. Kunioka (2004). Teleoperation platform for HST drive agricultural vehicle. In: Proc. of ATOE 2004, Kyoto, Japan, pp. 463-466.
Oomichi, T. (2000). Control system of application for outdoor robotics. JRSJ, Vol. 18, No. 7, pp. 11-14.
Simmons, R. (1998). Xavier: an autonomous mobile robot on the web. In: IROS'98, Victoria, BC, Canada, pp. 43-47.
Yagi, Y. and M. Yachida (2001). Real-time omnidirectional image sensors. IPSJ SIG-CVIM, Vol. 42, pp. 19-32.
