C. H. Chen, C. Cheng, D. Page, A. Koschan, and M. Abidi, "Modular Robotics and Intelligent Imaging for Unmanned Systems," in Proc. SPIE Unmanned Systems Technology VIII, Vol. 6230, Orlando, FL, pp. 43-52, April 2006.
Modular Robotics and Intelligent Imaging for Unmanned Systems

Chung-Hao Chen*, Chang Cheng, David Page, Andreas Koschan, Mongi Abidi
Imaging, Robotics, and Intelligent Systems Laboratory
Department of Electrical & Computer Engineering
The University of Tennessee, Knoxville, TN 37996

ABSTRACT

The Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at the University of Tennessee is currently developing a modular approach to unmanned systems to increase mission flexibility and aid system interoperability for security and surveillance applications. The main focus of the IRIS research is the development of sensor bricks, where the term brick denotes a self-contained system that consists of the sensor itself, a processing unit, wireless communications, and a power source. Prototypes of a variety of sensor bricks have been developed. These systems include a thermal imaging brick, a quad video brick, a 3D range brick, and a nuclear (gamma ray and neutron) detection brick. These bricks have been integrated in a modular fashion into mobility platforms to form functional unmanned systems. Research avenues include sensor processing algorithms, system integration, communications architecture, multi-sensor fusion, sensor planning, sensor-based localization, and path planning. This research is focused towards security and surveillance applications such as under-vehicle inspection, wide-area perimeter surveillance, and high-value asset monitoring. This paper presents an overview of the IRIS research activities in modular robotics and includes results from prototype systems.

Keywords: modular architecture, navigation planning, sensor brick, multimodal data fusion
1. INTRODUCTION

An increase in global terrorism has created a need for unmanned surveillance and inspection systems. For example, under-vehicle inspection is a challenging topic for threat detection, where current solutions require security personnel to be close to a vehicle and to use a mirror on the end of a stick. When terrorists use vehicles as bombs to take lives and destroy property, this traditional mirror-on-a-stick approach puts security personnel in harm's way. Additionally, the mirror-on-a-stick method can reliably reach along the edges of the vehicle but has difficulty with the central portion of the vehicle. In other words, complete vehicle coverage is a challenge. Figure 1 shows an example of a mirror-on-a-stick system.
Figure 1: An example of the traditional mirror-on-a-stick approach to under-vehicle inspection. (a) The mirror system. (b) The view the mirror might provide under the vehicle.

*Further author information: Email: [email protected]; phone: (865) 974-9737; fax: (865) 974-5459
Unmanned Systems Technology VIII, edited by Grant R. Gerhart, Charles M. Shoemaker, Douglas W. Gage, Proc. of SPIE Vol. 6230, 623006, (2006) · 0277-786X/06/$15 · doi: 10.1117/12.666444
Proc. of SPIE Vol. 6230 623006-1
Different scenarios require robots with different capabilities and functionalities. The current approach is to develop robotic systems with mission-specific designs, where the needs of the mission drive the functionality and capability of the unmanned platform [1-7]. This one-of-a-kind design philosophy leads to high cost and limits the range of use for the unmanned system. An alternative solution is a modular robotic system [8-10]. In particular, this paper proposes a modular multi-sensor robotic system that is able to support a wide range of applications. The challenge lies in making the unmanned system completely modular, scalable, controllable, and programmable. To this end, this paper further proposes independent modules, known as sensor bricks. Each sensor brick is designed with a different sensor such that each brick operates independently of the other bricks, but the bricks can be plugged together to form a complete unmanned robotic system. The brick systems can share information through either wired or wireless network connections, and on-board processing within each brick enables the system to distribute data processing. The system or operator can thus choose which bricks are most suitable for a specific application, and higher-level data fusion of multiple sensors yields a more coherent view for situational awareness. For example, some threat objects in the under-vehicle inspection scenario are not clearly visible with a normal video camera (visual sensor brick) but are easily detected with a thermal camera (thermal sensor brick). Additionally, a range sensor brick provides 3D information. In this paper, we propose SafeBot as a modular unmanned robotic system. SafeBot currently includes four sensors in a modular configuration: a laser range sensor brick, a thermal sensor brick, a visual sensor brick, and a gamma and neutron sensor brick. The term "sensor brick" denotes a self-sufficient component that can act as a plug-and-play device.
The main potential application of SafeBot is under-vehicle inspection, for example at the entrance gate to a secure facility. This paper presents the hardware and software architecture for the SafeBot robotic system. We first introduce the concept and architecture of the modular robotic system. Then we present the sensor bricks, mobility bricks, and system control architecture individually. Finally, experimental results and conclusions are addressed.
2. MODULAR ROBOTIC SYSTEM

The Modular Robotic System (multi-sensor, multi-mobility robot system) consists of four component blocks:
• A sensing component (different types of sensors such as the vision sensor, the thermal sensor, the laser range sensor, the radiological detector, etc.),
• A mobility component,
• A processing and intelligence component, and
• A human interface component.
Modularity is a major concern and is maintained at every level of the brick, such that each block of the brick can be replaced by another module without affecting overall performance. The components used for the various blocks of the brick can be chosen according to the demands of the application, which minimizes the hardware required. The power block in the brick provides power to the other blocks and enables each brick to be self-sufficient. The bricks are designed as plug-and-play devices that can be installed and removed as required. Each component has its own key task and can act as an independent, autonomous unit. The design of the system allows upgrading or overhauling the system modularly. That is, when a unit fails or is replaced by a more capable one, the replacement is accomplished quickly, easily, and automatically by the system. Moreover, the system must be functional both in generic applications and in specific situations or purposes. Figure 2 shows the modular robotic system architecture.
[Figure 2 block diagram components: Mobility Device (Robot); Sensor Brick (Embedded Computer System) containing Data Storage, Artificial Intelligence, Mobility Control Component, Sensing Objective, Routing Component, and Power; Remote Control Terminal (Joystick or PC).]
Figure 2: This block diagram illustrates the logical architecture of the modular robotic system.
The system architecture requires the sensor components to be self-sufficient modules that only provide the system with information about the platform's environment. In addition, each sensor component is a plug-and-play device. The human interface is the connection between the human operator and the unmanned system. The routing device is the wireless transmission component, which exchanges data between components. Finally, the host computer is the central intelligence component. Its function is to control the generic algorithms and other software components contained in the block. These algorithms include image processing, path finding, storage control, etc. Furthermore, each algorithm can be replaced by a more efficient or applicable one, provided the communication protocol remains the same.
3. SENSOR BRICKS

The generic sensor brick architecture used for the modular robotic system is shown in Figure 3.

[Figure 3 diagram: Communication Block, Preprocessing Block, Acquisition/Sensor Block, and Power Block.]

Figure 3: The generic architecture for each individual sensor brick. A brick consists of a communication block, a processing (CPU) block, a sensor block, and finally a power block.
Incorporating a modular philosophy, the sensor brick unit is designed to be robust, compact, modular, and independent. Once the unit is powered on, operations can be performed remotely via a wireless connection. The sensor brick comprises four functional units in the form of "blocks." If a unit needs to be replaced, the replacement is done easily due to the modularity of the brick's design architecture. This allows the use of the brick in difficult situations where the brick needs to operate continuously without interruption. Four sensor bricks have currently been built based upon this concept: the laser range sensor brick, the vision sensor brick, the thermal sensor brick, and the gamma neutron radiological sensor brick. The blocks for a sensor brick are:
• Acquisition Block: The main objective of this block is to acquire data from the object or scene. This block is essentially the sensor of interest, whether it is a normal camera, a thermal imager, a laser range scanner, or a gamma ray neutron detector.
• Preprocessing Block: Data captured by the acquisition block is in raw format; therefore, preliminary processing is needed to get the image or profile into the required form. Basic processing might involve noise removal, contrast enhancement, rotation, smoothing, edge detection, etc. High-level processing may also be required for the raw data, including registration, fusion, and 3-D modeling.
• Communication Block: This block is responsible for communicating with the central control unit or remote host. The block can send and receive range data or images, in raw or processed form, to the remote host via a communication interface.
• Power Block: This unit provides the electrical power requirements for the sensor brick. It uses DC-to-DC converters to provide the electrical input to each block, depending upon the specific requirements of each respective block.
In our current sensor bricks, the acquisition block may be a visual camera, laser range camera, or thermal camera. The acquired data is passed to the next block, the preprocessing block, to perform low-level image processing. An ASUS P4P800-VM mini-ITX motherboard is used in the vision sensor brick for preprocessing operations. The transfer of preprocessed data from the individual sensor brick to the central computer is achieved using a wireless communication block, a Linksys WMP 54G wireless card conforming to the IEEE 802.11g standard. The power block comprises Panasonic LC-RA1212P 12 V lead-acid batteries for actuation and a PW-70A DC-DC converter to power the processor and each sensor.

3.1. Visual Sensor Brick

The sensor component (the acquisition block) of the Vision Sensor Brick is a Clover Color Dome Quad Camera DQ205 with four built-in cameras and a quad splitter (see Figure 4). The camera has high resolution and a low illumination requirement, and it provides 360-degree coverage with manual control for tilt. The purpose of the camera is to capture real-time images to assemble into video. The advantage of this video system is the ability to supervise multiple areas simultaneously. The quad splitter compresses images from four separate cameras and displays them simultaneously on a single monitor screen; as the name implies, it allows the viewing and recording of images from all four cameras at once. Vision is necessary for both autonomous and manually controlled operations. Accordingly, the Vision Sensor Brick serves as the eye of the system. Manually controlled operation requires a clear picture of where the robot is heading, the path it has to follow, and the surrounding environment. The vision sensor provides a view of the environment by capturing images and transmitting them to the remote control computer. A complete 360-degree field of view with zooming capability is possible.
Application areas of the Vision Sensor Brick include robot navigation, video tracking, and intrusion and threat detection.
Figure 4: Visual sensor brick. (a) Top view of the brick showing the quad camera dome. (b) Example data acquired from brick.
3.2. Laser Range Sensor Brick

The Laser Range Sensor Brick (see Figure 5) uses a laser scanner (LMS 200, manufactured by SICK Inc.) for acquisition. The laser range sensor can be employed to measure the geometry of objects and the range map of the surrounding space. It operates by sweeping a laser across a scene and, at each angle, measuring the range and the returned intensity. Range images differ from images acquired by a normal camera: a normal camera measures light reflected from the observed scene, so each pixel value represents the intensity of the reflected light, whereas each pixel value of a range image represents the distance from the sensor to the object being acquired. In the SafeBot robotic system, the laser range sensor is mainly used for autonomous navigation and 3-D scene modeling.
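The core geometric step behind building such a range map is converting each sweep of (angle, range) readings into Cartesian points. The following is a minimal sketch; the beam angles, angular step, and maximum range shown here are illustrative assumptions, not the LMS 200's actual specifications:

```python
import math

def scan_to_points(ranges, start_deg=0.0, step_deg=0.5, max_range=8.0):
    """Convert a planar laser sweep into 2D Cartesian points.

    ranges    -- list of measured distances (meters), one per beam
    start_deg -- angle of the first beam, in degrees
    step_deg  -- angular resolution of the sweep
    max_range -- readings beyond this are treated as 'no return'
    """
    points = []
    for i, r in enumerate(ranges):
        if r <= 0.0 or r > max_range:
            continue  # drop invalid or out-of-range returns
        theta = math.radians(start_deg + i * step_deg)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# A coarse 180-degree sweep, three beams at 90-degree spacing, for illustration:
pts = scan_to_points([1.0, 2.0, 1.0], step_deg=90.0)
```

Accumulating such point sets while the platform moves, together with the platform's pose, yields the range map used for navigation and 3-D modeling.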
Figure 5: Laser range sensor brick. (a) Top view of the brick showing the SICK scanner at the front. (b) Example 3D image from the scanner of the under-vehicle scenario. This data is a range image.
3.3. Thermal Sensor Brick

An Omega infrared camera is used to capture infrared images for the Thermal Sensor Brick (see Figure 6). The Omega is a long-wavelength thermal camera with sensitivity in the range of 7.5 to 13.5 microns. Small size, light weight, and low power consumption are the key features of the Omega camera. The thermal sensor serves an important role in a modular robotic system: it provides vision beyond the frequency range of human eyes, overcoming the limitations of vision cameras. In other words, it provides night vision capabilities that allow the robot to control its movement in the dark. This night vision ability also makes thermal sensors especially useful for safety inspection and surveillance under limited illumination conditions. In the SafeBot robotic system, the thermal sensor brick is mainly used for robot navigation and for intrusion and threat detection.
Figure 6: Thermal sensor brick. (a) Top view of the brick showing the small Omega thermal camera. (b) Thermal image data from the sensor showing the exhaust of a vehicle.
3.4. Gamma and Neutron Radiological Sensor Brick

The Gamma and Neutron Radiological Sensor Brick uses a low-profile neutron and gamma ray detector as its acquisition block (see Figure 7). This is a passive detector, which measures neutron and gamma radiation: seven channels are for gamma radiation, and one channel is for neutron radiation. Gamma and neutron radiological sensing is important due to the emerging need for the detection of radiological weapons. Since these weapons cause large-scale damage, sensing for the detection of these devices is critical. Passive detection is possible because threat devices naturally emit gamma radiation or neutrons. Another form of sensing nuclear-material-laden devices involves the active interrogation of the suspect device with certain types of radiation emitters. The interrogative action forces a "reflection" that serves as a signature of the material type. The radiological sensor detects this reflection, which provides a characterization of the suspect device. The characterization can identify the content or the nature of the device. The detection of nuclear-type devices mandates a characterization in terms of contents or state. If the device has not exploded, characterizing the content or nature of the weapon provides information relevant to engaging the appropriate dismantlement or disposal procedures and policies so that abatement of the risk may begin. In these procedures, robots may remove, disarm, or disengage the potential for explosion. If an explosion occurs (an after-effect scenario), there is a unique need to scout and sample the area. Without further (and unnecessary) threat to human safety, modular robotic systems can determine what type of damaging contaminants is involved and where the greatest concentration of the contaminants is located.
Figure 7: Gamma and Neutron radiological sensor brick. (a) Top view of the brick. (b) Software interface for the brick.
3.5. Mobility Brick (SafeBot System)

The mobility brick of SafeBot consists of two independent tracks (see Figure 8). The two tracks are interchangeable, and each is a self-sufficient module. The benefit is that if either track fails, it can be replaced by another track quickly and easily. Each track has its own controller, battery, and motor. We designed and implemented two control systems for SafeBot: a manual control system and an automatic control system. SafeBot can be manually controlled by a joystick. The joystick signal is sent to the two robot controllers in the two tracks simultaneously, and the controllers then drive the left and right motors independently using that signal. A better way to control SafeBot is to use a computer (sensor brick) to send motion commands directly to the robot controllers through an RS232 cable. A program inside the robot controllers then decides how to execute these motion commands. To help the robot controller measure translation distance accurately, an encoder is installed in each track and connected to the robot controller. The robot controllers receive accurate information about the robot's speed from the encoders and calculate the translation distance accordingly, so as to decide when to stop the drive motors. This automatic control system can drive SafeBot very accurately and therefore gives SafeBot autonomous capability.
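The encoder-based odometry described above can be sketched as follows. The tick resolution, sprocket diameter, and track width used here are illustrative placeholders, not SafeBot's actual parameters:

```python
import math

def track_distance(ticks, ticks_per_rev, sprocket_diameter_m):
    """Distance traveled by one track, derived from its encoder tick count."""
    circumference = math.pi * sprocket_diameter_m
    return (ticks / ticks_per_rev) * circumference

def platform_motion(left_ticks, right_ticks, ticks_per_rev,
                    sprocket_diameter_m, track_width_m):
    """Translation and rotation of a differential (tracked) platform.

    Returns (forward distance in meters, heading change in radians).
    The controller can stop the drive motors once the accumulated
    translation reaches the commanded distance.
    """
    d_left = track_distance(left_ticks, ticks_per_rev, sprocket_diameter_m)
    d_right = track_distance(right_ticks, ticks_per_rev, sprocket_diameter_m)
    translation = (d_left + d_right) / 2.0  # average of the two tracks
    rotation = (d_right - d_left) / track_width_m  # differential-drive turn
    return translation, rotation
```

With equal tick counts on both tracks the rotation term is zero and the platform is moving straight ahead, which is the common case during the under-vehicle sweeps described later.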
Figure 8: SafeBot robotic system showing the independent mobility bricks on each side, which are mechanically linked by the cross struts. The joystick in the middle is a hand-held operator interface that enables teleoperation of the platform.
4. SYSTEM CONTROL ARCHITECTURE

The SafeBot control methodology can be divided into two categories: remote terminal control and autonomous control. In the remote terminal control architecture, a host computer works as a control center. All sensor bricks automatically establish a wireless connection to the control center. These sensor bricks first read data from their sensors, perform low-level sensor data processing, and then send the processed sensor data wirelessly to the control center. All sensor bricks can be connected to the SafeBot mobility bricks through an RS232 cable; therefore, if a sensor brick receives motion commands from the control center, it can pass them directly to the mobility brick to drive the robot platform. In the control center, an intelligent system is implemented. This intelligent system includes an image processing subsystem, a navigation subsystem, and a radioactive source search subsystem. The image processing subsystem performs high-level image processing operations, such as feature extraction, object recognition, and tracking, on video or thermal images received from the vision or thermal sensor bricks. Figure 9 shows the architecture of the remote terminal control and autonomous control. In autonomous control, subsystems are added into the sensor brick for different missions; these subsystems mainly serve data collection, safety inspection, and surveillance.
[Figure 9 diagram: the Gamma and Neutron Radiological Sensor Brick, Thermal Sensor Brick, Laser Range Sensor Brick, or Visual Sensor Brick is mounted on SafeBot one at a time; wireless communication links the platform to the remote control (joystick or PC) with intelligent systems.]
Figure 9: The system supports both autonomous and teleoperated control.
5. EXPERIMENTAL RESULTS

Currently, the main potential application for the SafeBot robotic system is under-vehicle inspection, as illustrated in Figure 10.
Figure 10: Example of under-vehicle inspection with SafeBot using autonomous control. The four tires represent an arbitrary configuration of a vehicle. The SafeBot using the 3D range brick localizes position and then plans a path to cover the entire vehicle.
Three experiments were designed for this purpose. The first is autonomous under-vehicle mapping: we want the SafeBot robot to automatically cover the whole under-vehicle area. A "visual vehicle" was built in the IRIS Laboratory at the University of Tennessee, with four tires placed on the floor in the same arrangement as on a real vehicle. We then let SafeBot navigate in that area. The laser range sensor is used to detect the four tires during navigation so as to avoid colliding with them. The four tires also serve as landmarks for SafeBot to localize its position relative to them. Through periodic localization, SafeBot can overcome motion and orientation errors and finish this task very reliably. The practical test results show that the SafeBot robot can autonomously cover the whole under-vehicle area within 60 seconds. The second experiment is threat object detection, shown in Figure 11. We hid a threat object under a real vehicle, mounted the visual and thermal sensor bricks on the SafeBot platform, manually drove it into the under-vehicle area, and sent the real-time visual/thermal images back to the control center. From these real-time visual/thermal images, people can easily distinguish the hidden threat object. Especially in the thermal images, the large heat difference between the threat object and the other parts of the vehicle makes the threat object very noticeable. This experiment shows that the visual/thermal sensor bricks are very useful for under-vehicle safety inspection. In the future, we want to combine the laser range sensor brick with these visual/thermal sensors.
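The paper does not detail the landmark localization step of the first experiment, but with the four tires at known positions, one common approach is linearized trilateration: subtracting one range-circle equation from the others yields a linear system in the robot's (x, y) position, solvable with 2x2 normal equations. A hedged sketch under that assumption:

```python
def localize(landmarks, ranges):
    """Estimate (x, y) from ranges to landmarks at known positions.

    Subtracting the first circle equation (x-xi)^2 + (y-yi)^2 = ri^2
    from the others gives linear equations in x and y, solved here by
    accumulating 2x2 normal equations (least squares for >3 landmarks).
    """
    (x0, y0), r0 = landmarks[0], ranges[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), ri in zip(landmarks[1:], ranges[1:]):
        ax = 2.0 * (xi - x0)
        ay = 2.0 * (yi - y0)
        b = r0**2 - ri**2 + xi**2 + yi**2 - x0**2 - y0**2
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * b; b2 += ay * b
    det = a11 * a22 - a12 * a12  # nonzero if landmarks are not collinear
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

Periodically re-running such an estimate against the measured tire ranges is what keeps accumulated motion and orientation errors bounded during the coverage run.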
Figure 11: The comparison of visual and thermal imaging. (a) Visual sensor brick for under-vehicle inspection has difficulty identifying the threat object. (b) Thermal sensor brick enables detection with a better contrast.
The third experiment is gamma radiation source localization, shown in Figure 12. We placed several Cs-137 and Co-60 radiation sources in our "visual" vehicle, mounted the Gamma and Neutron Radiological sensor brick on the SafeBot platform, and let SafeBot carry the sensor around the vehicle. During this process, the sensor takes a measurement every second. An iterative optimization method is used to localize the radiation source based on the sensor measurements. This method first divides the area into a grid. Each point of the grid is then hypothesized to be the radiation source, and the predicted source intensities are calculated at the locations where the sensor measurements were taken. The grid point whose predicted intensities are closest to the real sensor measurements is the most likely source location. The test results show that even when the radiation source is very weak, this method still finds the source location with high accuracy. For the one-source case, the average source localization error is about 0.04 meters; for the two-source case, about 0.05 meters; and for the three-source case, about 0.14 meters.
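The grid-search procedure described above can be sketched for the single-source case. The grid extent, step size, and inverse-square intensity model are assumptions for illustration; the paper's exact detector model and multi-source optimization details are not specified:

```python
def locate_source(measurements, grid_step=0.05, extent=2.0):
    """Single-source grid search over candidate source locations.

    measurements -- list of ((x, y), counts) sensor readings
    Each grid point is hypothesized to be the source; the source
    strength that best fits the readings under an inverse-square
    falloff is computed in closed form, and the hypothesis with the
    lowest squared residual against the real readings wins.
    """
    best, best_err = None, float("inf")
    steps = int(extent / grid_step) + 1
    for i in range(steps):
        for j in range(steps):
            sx, sy = i * grid_step, j * grid_step
            # Unit-strength predicted intensity at each measurement point
            preds = []
            for (mx, my), _ in measurements:
                d2 = (mx - sx) ** 2 + (my - sy) ** 2
                preds.append(1.0 / max(d2, grid_step ** 2))  # avoid div by 0
            # Closed-form least-squares estimate of the source strength
            num = sum(p * c for p, (_, c) in zip(preds, measurements))
            den = sum(p * p for p in preds)
            k = num / den
            err = sum((k * p - c) ** 2
                      for p, (_, c) in zip(preds, measurements))
            if err < best_err:
                best, best_err = (sx, sy), err
    return best
```

Refining the grid around the best cell (the "iterative" part of the paper's method) would then recover sub-grid accuracy of the kind reported above.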
Figure 12: Gamma and neutron radiological sensor brick. (a) The brick mounted on the SafeBot. (b) Test sources for the system.
6. CONCLUSION

In this paper, we presented the design and implementation of the SafeBot robotic system. The SafeBot unmanned system includes three components: a mobility component, a sensing component, and an intelligence component. These components represent a modular approach to unmanned robotic systems. Each sensor brick is independent and functions as a separate entity, or it can be used as a plug-and-play device in the modular robotic system. The whole unit can operate continuously for up to two hours on a full battery charge. The bricks support monitoring data in real time as well as capturing and storing data for off-line processing. Wireless communication through the IEEE 802.11g standard makes it possible to transfer the data acquired by a brick to any remotely located control computer, where the data can be further analyzed. The modular design of SafeBot enables multiple capabilities and functionalities, which makes it well suited to scenarios where various tasks need to be executed. In this paper, we demonstrated one potential application for SafeBot: under-vehicle inspection.
ACKNOWLEDGEMENTS

This work is supported by the University Research Program in Robotics under grant DOE-DE-FG52-2004NA25589 and by the DOD/RDECOM/NAC/ARC Program under grant W56HZV-04-2-0001.
REFERENCES

1. Remotec, Andros family of tracked vehicles, Oak Ridge, TN, www.remotec-andros.com.
2. Y. Takahashi, T. Arai, Y. Mae, K. Inoue, and N. Koyachi, "Development of multi-limb robot with omnidirectional manipulability and mobility," in Proc. IEEE Intelligent Robots and Systems 2000 (IROS 2000), Vol. 3, pp. 2012-2017, 2000.
3. N. S. Flann, K. L. Moore, and L. Ma, "A small mobile robot for security and inspection applications," Control Engineering Practice, Vol. 10, pp. 1265-1270, Nov. 2002.
4. iRobot, Packbot family of tracked vehicles, Burlington, MA, www.irobot.com.
5. R. R. Murphy, "Rescue robotics for homeland security," Communications of the ACM, Vol. 47, No. 3, pp. 66-68, 2004.
6. H. R. Everett, "Robotic security systems," IEEE Instrumentation & Measurement Magazine, Vol. 6, No. 4, pp. 30-34, 2003.
7. K. Osuka and H. Amano, "Development concept of rescue robot against nuclear plant accidents," in Proc. ICASE/SICE Joint Workshop 2002, pp. 185-190, 2002.
8. C. Qian, D. Page, A. Koschan, and M. Abidi, "A brick-architecture-based mobile under-vehicle inspection system," in Proc. SPIE Unmanned Ground Vehicle Technology VII, Vol. 5804, Orlando, FL, pp. 182-190, March 2005.
9. A. Koschan, D. Page, J.-C. Ng, M. Abidi, D. Gorsich, and G. Gerhart, "SAFER Under Vehicle Inspection Through Video Mosaic Building," International Journal of Industrial Robot, Vol. 31, No. 5, pp. 435-442, September 2004.
10. D. L. Page, Y. Fougerolle, A. F. Koschan, A. Gribok, M. A. Abidi, D. J. Gorsich, and G. R. Gerhart, "SAFER Vehicle Inspection: A Multimodal Robotic Sensing Platform," in Proc. SPIE Unmanned Ground Vehicle Technology VI, Vol. 5422, Orlando, FL, pp. 549-560, April 2004.
11. J. S. Albus, "A reference model architecture for intelligent systems design," in An Introduction to Intelligent and Autonomous Control, Kluwer Academic Publishers, 1993, pp. 27-56.
12. R. C. Luo, K. L. Su, and K. H. Tsai, "Intelligent security robot fire detection system using adaptive sensory fusion method," in Proc. IEEE IECON 2002, Vol. 4, pp. 2663-2668, 2002.