6th IEEE International Conference on Industrial and Information Systems (ICIIS), 2011, 489‐492
An Autonomous Robot Navigation System Based on Optical Flow

G.D. Illeperuma and D.U.J. Sonnadara
Department of Physics, University of Colombo, Colombo 3, Sri Lanka
ABSTRACT

A simulation and a pilot scale implementation of a vision based robot navigation system were carried out to determine the feasibility and the efficiency of using optical flow based algorithms in autonomous robot navigation. For the simulation work, VRML 97 was used to create the virtual world and the robot, and Simulink was used to implement the algorithm and the optical flow calculations. The video stream captured through a virtual camera, as seen by the robot, was used to calculate the optical flow, which in turn determined the direction and the speed of the robot for the next step. A mathematical model was used to solve the problem analytically. The same algorithm was then implemented and tested in real time in a controlled environment. Data gathered through the simulation and the actual implementation showed that it is possible to use optical flow based algorithms in robot navigation.

Key words: Optical Flow, Simulations, Robotics, VRML

1. INTRODUCTION

The simulation of a physical system can have multiple advantages. Most computer simulations run many times faster than the real system, saving time for the researcher. Identical parameters can be used to simulate the same situation multiple times. Certain conditions that cannot be easily created in the physical world can be simulated easily on a computer, and it is easy to acquire data through a simulation. These advantages are highly desirable if a model is to be developed, debugged, improved or compared with other algorithms before a real-time implementation.

This research has two objectives. The first is to determine the feasibility of using optical flow to control the navigation of a robot. The second is to determine the feasibility of using the Virtual Reality Modeling Language (VRML) to simulate vision based applications. In this research, VRML and Simulink were used to simulate a virtual world and to capture video scenes from it. The data was used to simulate the navigation of an autonomous robot using optical flow based algorithms.

Most autonomous robots use active sensors which emit energy to the surroundings. These have high energy consumption and cannot be operated in stealth mode. In this research a passive video camera is used as the sensor. Due to its optical flow based nature, the approach is expected to be less computationally demanding and can be implemented in a parallel manner. The developed algorithms were later tested in a real-time implementation within a controlled environment.

Simulink is a high level visual programming language shipped as a standard component of MATLAB. It provides many rapid development tools. The major advantage of Simulink is its ability to handle calculations in real time. It also integrates with other MATLAB toolboxes such as the Image Acquisition Toolbox.
The physical world consists of four dimensions; three dimensions of space and one dimension of time. Virtual Reality (VR) is a technology that allows us to model all these dimensions and to add interaction to them as well. However, the use of virtual reality as an input source to another program is rare. In this research, VR was used to simulate the real world and to observe the predictions of the automated navigation.

The Virtual Reality Modeling Language (VRML) was introduced to provide interactive 3D worlds to internet users. VRML is an XML like language where users define different objects; a VRML world consists of multiple three dimensional interactive vector graphic objects, and a VRML engine is needed to view and interact with such worlds. In this experiment VRML 97 was selected due to its compatibility with Simulink. VRML is a human readable, XML compatible definition language, and it is possible to create a complete VRML world (VRML model) using a text file.

The paper is organized as follows. First the results are presented in detail for the simulation work. The real-time implementation results which support the simulation results are described towards the end of the paper.

2. OVERALL SYSTEM ARCHITECTURE

In order to simulate the optical flow based navigation system, two main models were needed: (1) a model of a robot which will observe the surroundings and move around the virtual world, and (2) an environment which is filled with different obstacles and resembles the real world in textures and lighting conditions.
Figure 1: Flow chart for the simulation

Figure 1 illustrates the flow chart of the complete simulation. The image is captured from a camera and enhanced using image processing techniques. The Horn-Schunck method [1] is used to calculate the optical flow. The optical flow vectors are forwarded to the decision making block. The calculated new direction and speed are forwarded to the controller block, which converts them to VRML compatible commands. These commands are then redirected to the VRML model, which changes the model to match the new parameters.
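As a rough illustration, this per-frame pipeline can be written as the loop below; aside from standard MATLAB syntax, every function name is a placeholder standing in for the corresponding Simulink block, not code from the paper.

```matlab
% A minimal sketch of the per-frame simulation loop in Figure 1.
% All functions are illustrative placeholders for the Simulink blocks.
prevFrame = captureVirtualCamera();          % first frame from the VRML view
while simulationRunning
    frame = captureVirtualCamera();          % image captured from the camera
    frame = preprocess(frame);               % image enhancement
    [u, v] = computeOpticalFlow(prevFrame, frame);  % Horn-Schunck block
    [speed, heading] = decide(u, v);         % decision making block
    sendToVrmlModel(speed, heading);         % controller -> VRML commands
    prevFrame = frame;
end
```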
Virtual Robot: In VRML, a special object called the avatar is used as the center of focus, and it was used to simulate the robot. The following parameters were used to simulate the VR robot.

Table 1: Properties of the VR robot

Parameter           Value    Unit
Collision Distance  0.25     m
Height              1.6      m
Step Size           0.75     m
Speed               1        m/s
Visibility Limit    500      -
Navigation Type     walk     -
Transmission Type   Linear   -

To make the final simulation more realistic, a model of a car (see Figure 2) was put in place of the robot. It also helped to easily determine the orientation and the heading direction of the robot.
Figure 2: Virtual car as the robot
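To move the robot, the controller updates the position and orientation fields of the corresponding node in the VRML world. The fragment below is a minimal sketch using MATLAB's VR interface; the file name world.wrl and node name Car are assumptions for illustration.

```matlab
% A minimal sketch of moving an object in the VRML world from MATLAB.
% 'world.wrl' and the node name 'Car' are illustrative assumptions.
w = vrworld('world.wrl');
open(w);
car = vrnode(w, 'Car');
car.translation = [2.0 0 5.0];      % new position [x y z], in metres
car.rotation    = [0 1 0 pi/4];     % axis-angle: 45 deg about the y axis
```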
The Obstacles: In the calibration phase, to make comparison and debugging easier, brick walls were used as the default obstacles. They were created by wrapping a texture around cuboids. The ability to add texture to objects was important since it can be used to simulate the design under different environments. Changing the size, color and illumination had a considerable effect on the accuracy of the navigation algorithm. A monochrome obstacle was the hardest to avoid: due to the idealistic nature of the simulation, a solid fill does not give the camera any feature to identify. It is assumed that in a real world scenario there are no such obstacles; for example, there will be some variation, at least a shadow, on objects.

Virtual Lights and Virtual Cameras: A VRML world needs light sources. Two point light sources were positioned high up in the model with a large dispersion radius, mimicking the Sun or a bulb fixed above. A directional light source was added to the front of the car. This allows the robot to eliminate shadows and get a clear view of the objects in its path. VRML cameras with the following parameters were used to capture the view seen by the robot in the virtual world: resolution 200×320 pixels, refresh rate 10 Hz, and color depth RGB (24 bit). By adding more than one camera object it is possible to get a bird's eye view of the scenario, which helps to evaluate the efficiency and detect errors in the algorithm (see Figure 3).
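The view seen by such a camera can be pulled into MATLAB for processing. A minimal sketch follows, again with an assumed world file name; capture returns an RGB snapshot of the rendered view.

```matlab
% A minimal sketch of grabbing frames from a virtual camera.
w = vrworld('world.wrl');           % assumed world file
open(w);
fig = vrfigure(w);                  % renders the currently selected viewpoint
img = capture(fig);                 % RGB image of the rendered view
```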
Figure 3: Bird's eye view of the virtual world

Animations and Interactions: Objects in the VRML world have a position and an angle. Other features such as scale were ignored, assuming that neither the robot nor the obstacles in the world change their size, color or texture during the simulation. By manipulating these parameters, an object in the virtual world can be moved or rotated. A block was coded to do the necessary translations from the mathematics to VRML [2].

Image Processing: Three cameras were placed in the virtual world. The first emulated the camera mounted on the robot; its translation and direction were set to match the position and direction in which the robot was travelling. The second camera was fixed high up in the world, giving a bird's eye view of the robot. This made it easier to debug problems in the algorithm and to get a rough estimate of performance. The third camera was movable; with it, a user can move to any position in the virtual world and observe the scene from different points and angles.

Preprocessing was used to enhance the image quality and thereby improve the optical flow calculation. Preprocessing also reduced the noise. If a high resolution camera is used, image resizing may be required to lower the number of pixels that need processing and make the algorithm run faster. In this work two steps (segmentation and conversion to gray scale) were carried out to enhance the images, and the contrast was set to the optimum value.
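As a rough sketch, these preprocessing steps map onto standard Image Processing Toolbox calls; the threshold based segmentation rule and the resize factor below are assumptions, since the paper does not specify them.

```matlab
% A minimal preprocessing sketch; the segmentation rule is an assumed
% intensity threshold, as the paper does not specify the method used.
gray = im2double(rgb2gray(frame));   % 24-bit RGB -> intensity in [0,1]
mask = gray > graythresh(gray);      % crude foreground segmentation (Otsu)
gray(~mask) = 0;                     % suppress background pixels
% If a high resolution camera is used, resizing lowers the pixel count:
% gray = imresize(gray, 0.5);        % assumed scale factor
```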
Optical Flow Calculation: Optical flow is also called the image velocity. It is the amount of image movement with respect to time. Animals sense this through a special sensitivity to movement. In a video, optical flow can be calculated using special algorithms. Since the optical flow equation (1) is under constrained, it can be solved by different methods:

$I_x u + I_y v + I_t = 0$  (1)

Here $I_x$, $I_y$ and $I_t$ are the spatiotemporal image brightness derivatives, $u$ is the horizontal optical flow and $v$ is the vertical optical flow. Two popular methods for solving the above equation are the Horn-Schunck method [1] and the Lucas-Kanade method [3]. In this work the Horn-Schunck method was used.

By assuming that the optical flow is smooth over the entire image, the Horn-Schunck method [1] computes an estimate of the velocity field, $[u\ v]^T$, that minimizes equation (2), where $\alpha$ is the smoothness factor:

$E = \iint \left[ (I_x u + I_y v + I_t)^2 + \alpha^2 \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right) \right] dx\, dy$  (2)
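Reference [1] also gives an iterative scheme that minimizes equation (2). The function below is a simplified sketch of that scheme for a pair of grayscale frames; it illustrates the method and is not the authors' Simulink implementation (the derivative kernels and parameters are the textbook choices).

```matlab
% A simplified Horn-Schunck sketch after [1]; I1, I2 are consecutive
% grayscale frames in [0,1], alpha is the smoothness factor.
function [u, v] = hornSchunck(I1, I2, alpha, nIter)
    % Spatiotemporal brightness derivatives, averaged over both frames
    Ix = conv2(I1, [-1 1; -1 1]/4, 'same') + conv2(I2, [-1 1; -1 1]/4, 'same');
    Iy = conv2(I1, [-1 -1; 1 1]/4, 'same') + conv2(I2, [-1 -1; 1 1]/4, 'same');
    It = conv2(I2 - I1, ones(2)/4, 'same');

    u = zeros(size(I1));
    v = zeros(size(I1));
    avgK = [0 1 0; 1 0 1; 0 1 0] / 4;        % 4-neighbour average

    for k = 1:nIter
        uAvg = conv2(u, avgK, 'same');
        vAvg = conv2(v, avgK, 'same');
        % Update that decreases the functional in equation (2)
        common = (Ix.*uAvg + Iy.*vAvg + It) ./ (alpha^2 + Ix.^2 + Iy.^2);
        u = uAvg - Ix .* common;
        v = vAvg - Iy .* common;
    end
end
```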
Navigating Decisions: Three different methods were tested: (1) centre path navigation, (2) rotate at edge navigation, and (3) fuzzy navigation. In the first approach, the optical flow was calculated for the two regions on either side of the image. If the difference was greater than a threshold value, a turning process was activated. If the left optical flow is higher, meaning objects are either closer or moving faster on the left side, the vehicle turns to the right, and vice versa. The turning angle depends on the magnitude of the optical flow difference. This approach was tested and found to be reasonably accurate at keeping the vehicle in the middle of the road. It works well on smooth curved roads and at avoiding obstacles. However, whenever there is a gap in the flow, the vehicle tends to move towards it. A solution is proposed in the next approach.
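A minimal sketch of the centre path rule follows; the mid-image split, the threshold and the steering gain are illustrative assumptions, not values from the paper.

```matlab
% A minimal sketch of centre path navigation. u, v are the flow fields
% from the Horn-Schunck block; threshold and gain are assumed values.
mag = sqrt(u.^2 + v.^2);                  % flow magnitude per pixel
half = floor(size(mag, 2) / 2);
leftFlow  = sum(sum(mag(:, 1:half)));
rightFlow = sum(sum(mag(:, half+1:end)));

threshold = 5;                            % assumed activation threshold
gain = 0.01;                              % assumed steering gain
if abs(leftFlow - rightFlow) > threshold
    % Higher flow on the left means closer/faster objects there: turn right.
    steer = gain * (rightFlow - leftFlow);   % negative steer = turn right
else
    steer = 0;                            % stay on the current heading
end
```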
Figure 4: Calculated optical flow vectors

The second method is almost the inverse of the first. It lets the robot move in a straight line (or along a path defined by higher layers). If the flow on one side of the image exceeds a predefined value, it is detected as an obstacle and the vehicle is steered away from the direction of the higher optical flow value (see Figure 4). This prevents the robot from moving towards gaps in the flow. The disadvantage of this method is that it does not produce a smooth, optimal curve when moving around a bend. A new fuzzy logic based approach is being studied at present to tackle this problem.
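A minimal sketch of the rotate at edge rule, reusing the per-side flow sums from the previous sketch; the obstacle threshold and turn angle are assumed values.

```matlab
% A minimal sketch of rotate at edge navigation; leftFlow and rightFlow
% are the per-side flow sums, threshold and turnAngle are assumptions.
obstacleThreshold = 50;
turnAngle = pi/12;                        % assumed fixed avoidance turn
if leftFlow > obstacleThreshold
    steer = -turnAngle;                   % obstacle on the left: turn right
elseif rightFlow > obstacleThreshold
    steer = turnAngle;                    % obstacle on the right: turn left
else
    steer = 0;                            % keep moving straight
end
```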
3. RESULTS

Results of optical flow based simulations have been tested against real world situations [4] and mathematical models in the past [5]. In this work, in order to simplify the comparison, a circular wall (radius 50 m) was created and the robot was placed inside it.

Once the robot was close enough to the wall, it started to move in a circle. The mean radius followed by the robot was 28.3 m with a standard deviation of 0.3 m (Figure 5). When the world is symmetric and the robot heads exactly perpendicular to a face of a symmetric wall, it fails to avoid it.
Figure 5: Robot motion in a circular wall

A micro mouse robot was used to test the developed optical flow based algorithms in a real-time implementation. For the present work, the optical flow algorithms were executed on a personal computer connected to the robot. Navigation commands were sent to the robot over a wireless link. An overview of the pilot system is shown in Figure 6.
Figure 6: Micro mouse robot, showing the camera, controller board, stepper motors and caster wheel
The robot is a mobile, ground based, kilo-bot scale, non-holonomic robot with two degrees of freedom. A PIC16F877A microcontroller was used as the onboard processor. Other onboard modules include a transparent RS232 wireless communication link, a stepper driver module, a CCTV module and a power regulator module. The CCTV receiver and the RS232 transceiver were connected to the personal computer, where Simulink was used to calculate the optical flow and to make the decisions. In early experiments a wired web camera was used; due to its low frame rate of 10 fps it was suitable only for low speed movements. Later, a wireless closed circuit television camera was adapted using a USB video grabber and a virtual web camera emulator, which yielded real time video at 30 fps. Optical flow calculation and decision making were done in the same way as in the simulation; the only difference was that the optical flow calculation was disabled during a rotation movement. The required movements were encoded and sent to the microcontroller using the RS232 protocol, and the microcontroller computed the step size and direction for the wheels as required by the stepper motor controller.
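A rough sketch of sending one encoded command over the serial link is shown below; the COM port, baud rate and byte layout are assumptions, as the paper does not document its protocol.

```matlab
% A minimal sketch of the RS232 command path; port, baud rate and the
% two-byte command layout are illustrative assumptions.
s = serial('COM1', 'BaudRate', 9600);     % serialport() in newer MATLAB
fopen(s);

CMD_FORWARD = uint8(1);                   % hypothetical opcode
steps = uint8(30);                        % hypothetical step count
fwrite(s, [CMD_FORWARD, steps]);          % microcontroller decodes this and
                                          % drives the stepper motors
fclose(s);
delete(s);
```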
The system was tested in a controlled environment with good lighting conditions and a low amount of movement. The robot was able to detect and avoid static obstacles. When an object was placed in front of the moving robot, it was identified and the direction was changed to avoid a collision.

4. CONCLUSIONS

There is one main drawback in the present system. With the present camera position on the robot, the camera view does not capture objects on the two sides. Due to this, in rare situations, the robot may try to turn without seeing an obstacle close to its wheels.

The work demonstrates that it is possible to simulate and capture video streams from VRML worlds to test and develop real world applications. It can also be concluded that optical flow can be used to avoid obstacles in autonomous robot navigation.

Acknowledgments: This research is supported by the National Science Foundation (NSF) Grant RG/2007/E/03. The authors wish to acknowledge the NSF for providing financial support in carrying out this work.

5. REFERENCES

[1] Horn, B.K.P. & Schunck, B.G. (1981). Determining optical flow. Artificial Intelligence, 17: 185-203.
[2] Ames, A.L., Nadeau, D.R. & Moreland, J.L. (1997). The VRML 2.0 Sourcebook. John Wiley & Sons.
[3] Barron, J.L., Fleet, D.J. & Beauchemin, S.S. (1994). Performance of optical flow techniques. International Journal of Computer Vision, 12(1): 43-77.
[4] Dev, A., Krose, B. & Groen, F. (1997). Navigation of a mobile robot on the temporal development of the optic flow. IEEE International Conference on Intelligent Robots and Systems, 2: 558-563.
[5] Krose, B.J.A., Dev, A. & Groen, F.C.A. (2000). Heading direction of a mobile robot from the optical flow. Image and Vision Computing, 18(5): 415-424.