MODULAR SENSOR “BRICKS” AND UNMANNED SYSTEMS FOR PERSISTENT LARGE AREA SURVEILLANCE
D. L. Page, A. F. Koschan, C.-H. Chen, M. Jackson, C. Chang, M. A. Abidi
Imaging, Robotics, and Intelligent Systems Laboratory, The University of Tennessee, 1508 Middle Drive, Knoxville, Tennessee 37996-2100
[email protected]
Monitoring of large outdoor areas requires significant manpower, equipment, planning, and the constant vigilance of security personnel. Ideally, we would like to replace the individual human responsibilities with automated solutions, minimizing both risk and cost where possible. This paper therefore proposes a modular robotic system designed to autonomously perform wide-area reconnaissance and intruder tracking in indoor/outdoor environments. The system employs a modular approach to the robotic architecture using a concept called sensor “bricks.” Using these bricks, we deploy a multi-robot system to maintain perimeter surveillance. A visual servo control algorithm in conjunction with perimeter camera emplacements is proposed to maximize overhead camera coverage while allowing for high-resolution object inspection. Experiments show that these automated solutions allow for better overall monitoring with less human involvement.

I. INTRODUCTION

Large area surveillance in the context of physical security is a high priority for the Department of Energy (DOE) National Nuclear Security Administration (NNSA) and as such requires constant vigilance, both for safeguarding US nuclear assets and for fostering nonproliferation on the international stage. At NNSA facilities, special fencing, intrusion sensors, surveillance cameras, and enhanced lighting, along with highly trained personnel, maintain the safety and security of the nuclear weapons stockpile. At facilities abroad, NNSA assists individual nations with their own nuclear security through cooperative efforts to improve these same capabilities: fencing, sensors, cameras, lighting, and personnel. As the NNSA seeks to increase capability, to become more agile, and to reduce costs ("better, faster, cheaper"), the enhancement and adoption of emerging technologies in unmanned systems for physical security are necessary for enterprise transformation. The Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at the University of Tennessee, Knoxville (UTK) is developing research in modular unmanned systems as a paradigm shift away from more traditional "bottom-up" designs.

II. MODULAR BRICK CONCEPT

The bottom-up approach usually occurs with a sole-source supplier of an unmanned system that is designed for a specific application. The modular approach, advocated by UTK IRIS, designates logical and physical modules, called sensor “bricks”, with standardized interconnects.1,2 By analogy, computing platforms prior to the 1970s followed the “bottom-up” paradigm. If one needed a ballistics table, a company would design and field a special-purpose computer that computed only ballistics tables. From the 1970s onward, standardized interfaces (ATA, IDE, USB, FireWire) and communications protocols (IP, HTTP, FTP) evolved such that now, when one buys, for example, a Dell computer (which can compute more than just ballistics tables), it is a Dell in name only; Dell is an integrator. This latter paradigm has led to increased capability and reduced costs in the computing industry, and such a paradigm, if adopted by companies that develop unmanned systems, offers the same potential.

UTK IRIS has developed hardware and software systems that implement a modular “brick” methodology for unmanned systems. The brick concept aims to achieve an interchangeable suite of sensors and mobility units for robotics. A single brick consists of four sub-modules, as noted in Fig. 1. The input/output sub-module contains sensors, mobility devices, manipulators, or other systems that enable the robot to sense and act on its environment. The processing sub-module fuses data from the input/output sub-module and incorporates reasoning and analysis. The communications sub-module transmits this information to appropriate end users or to other robotic platforms. The power sub-module includes batteries and other sources such that the other sub-modules can be easily plugged into the system on a standardized grid. The brick concept allows the user to easily deploy and upgrade the system as new bricks become available.
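To make the brick abstraction concrete, the following is a minimal sketch of how the four sub-module interfaces might be expressed in software. It is illustrative only: the class and method names are our own assumptions, not the actual UTK IRIS implementation.

```python
# Illustrative sketch of the sensor "brick" sub-module interfaces.
# All names are hypothetical; the UTK IRIS software itself is not shown.
from dataclasses import dataclass
from typing import Protocol


class InputOutput(Protocol):
    """Sensors, mobility devices, manipulators, or similar systems."""
    def read(self) -> bytes: ...               # raw sensor data
    def actuate(self, command: bytes) -> None: ...


class Processing(Protocol):
    """Fuses input/output data and adds reasoning and analysis."""
    def process(self, raw: bytes) -> bytes: ...


class Communications(Protocol):
    """Transmits results to end users or other robotic platforms."""
    def transmit(self, payload: bytes) -> None: ...


class Power(Protocol):
    """Batteries or other sources on a standardized power grid."""
    def available_watts(self) -> float: ...


@dataclass
class Brick:
    """A brick composes the four sub-modules behind stable interfaces,
    so any sub-module can be swapped or upgraded without changing the
    rest of the system."""
    io: InputOutput
    processing: Processing
    comms: Communications
    power: Power

    def step(self) -> None:
        raw = self.io.read()
        result = self.processing.process(raw)
        self.comms.transmit(result)
```

Because each sub-module is addressed only through its interface, integrating a new sensor brick amounts to supplying a new `InputOutput` implementation, mirroring the standardized-interconnect argument above.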
Fig. 1. The sensor brick concept consists of four sub-modules: input/output, processing, communications, and power.

UTK IRIS has implemented the modular brick approach in the context of large area surveillance through a variety of unmanned platforms and has identified three types of unmanned surveillance platforms: (1) stationary ground platforms, (2) mobile ground platforms, and (3) aerial platforms. Stationary ground platforms are primarily pan-tilt-zoom cameras, including standard digital cameras for daylight surveillance, cameras with telephoto lenses for long-range surveillance, and infrared cameras for nighttime surveillance. Mobile ground and aerial systems are essentially these same camera sensors mounted on either a ground or an aerial vehicle, respectively. UTK IRIS has developed modular approaches to the hardware associated with these systems, with examples shown in Fig. 2, which include unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs).

Fig. 2. Examples of UTK IRIS robotic platforms that employ the brick concept. (a) Autonomous UGV developed from a commercially available teleoperated system. (b) Long-range zoom system with automated pan-tilt for target tracking. (c) Low-profile UGV for under-vehicle inspection. (d) UAV helicopter.

III. PERIMETER SURVEILLANCE
The protection of sensitive areas such as NNSA facilities requires constant surveillance and monitoring. However, significant manpower is needed to adequately monitor such large outdoor environments. While replacing patrolling guards and watchtowers with camera emplacements decreases the total number of guards, monitoring the displays from these cameras becomes a significant new burden. One solution is automated surveillance, such as automatic video tracking and autonomous robotic systems. We therefore propose an automated surveillance system consisting of perimeter-mounted cameras that carry out the automated video tracking task, one UGV that assists the tracking system by investigating suspicious objects that cannot be seen clearly by the perimeter-mounted cameras, and another UGV that patrols the area, detects objects of interest, and tracks these objects using an onboard camera.

In the security application, two primary tasks must be executed by autonomous robotic systems3,4 to reduce the use of manpower in the generic case: reconnaissance and tracking. During the reconnaissance task, the robot autonomously patrols the perimeter of an area while avoiding obstacles in real time. If a suspicious object or intruder is detected, the robot tracks the object based on video feedback coupled with tracking approaches. Simultaneously, the robot maintains real-time obstacle detection and avoidance via its 3D range sensor. Fig. 3 depicts the interaction of the reconnaissance and tracking tasks.
Fig. 3. This diagram illustrates the interaction of reconnaissance and tracking tasks.
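As a concrete illustration of the reconnaissance/tracking interaction in Fig. 3, the following sketch shows one plausible way to structure the mode switch. The state names and the detection, patrol, and tracking calls are our own hypothetical placeholders, not the paper's implementation.

```python
# Hypothetical sketch of the reconnaissance/tracking interaction (cf. Fig. 3).
# detect_intruder(), patrol_step(), track_step(), and avoid_obstacles()
# are assumed placeholder methods on a robot object.
from enum import Enum, auto


class Mode(Enum):
    RECONNAISSANCE = auto()  # patrol the perimeter, avoid obstacles
    TRACKING = auto()        # follow a detected object with the camera


def surveillance_loop(robot) -> None:
    mode = Mode.RECONNAISSANCE
    while True:
        # Obstacle detection and avoidance run every cycle, in both
        # modes, using the 3D range sensor.
        robot.avoid_obstacles()

        if mode is Mode.RECONNAISSANCE:
            robot.patrol_step()              # follow the perimeter
            if robot.detect_intruder():      # video-based detection
                mode = Mode.TRACKING
        else:  # Mode.TRACKING
            if not robot.track_step():       # returns False when lost
                mode = Mode.RECONNAISSANCE   # target lost: resume patrol
```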
In the reconnaissance research area, beacon-based positioning systems7 distribute beacons that assist with localization. While system performance does not degrade over time, extra effort is required to initialize beacons in constantly changing environments. CyberScout8 combines the sensory capabilities of surveillance with the mobility of reconnaissance via camera-mounted mobile robotic platforms. However, the system can lose objects that go out of surveillance range.
In the tracking research area, NASA has developed an equipment-carrier robot that autonomously follows astronauts walking on the moon.9 Intelligent vehicles8 have the capacity to discern relevant traffic participants (other vehicles, pedestrians, etc.) and detect potentially dangerous situations. However, these systems are primarily concerned with object tracking, not obstacle avoidance.

Furthermore, the problem of guarding the interior of a perimeter, modeled as a 2D polygon with n vertices, has been well studied as the Art Gallery Problem;19,20 the classical result is that ⌊n/3⌋ guards are always sufficient and sometimes necessary to cover such a polygon. Simply replacing the generic art gallery guards with stationary camera emplacements is one method of optimizing the coverage versus the number of cameras required. These stationary sensors5,6 require the desired object, such as an intruder, to stay within range and line of sight. However, for many outdoor environments, only low-resolution images of interesting objects can be obtained due to the sheer size of the interior. Moreover, the object within the scene is often blurred or occluded. Thus, most feature recognition and tracking algorithms become unreliable at best. Therefore, a robotic platform can be incorporated into the video tracking system to assist with the tracking and investigation of suspicious objects, as illustrated in Fig. 4. An overhead camera monitors the scene in real time. The user at an attached PC controller can trigger the system to automatically move the robot to a location where it can acquire detailed information about an object of interest. The interaction of the overhead camera and the robot in the scene is handled by visual servo control.

Fig. 4. Illustration of the coordination between the robotic system and the video tracking system: the overhead camera communicates over a wireless link with the visual servo controller, which directs the robot from its current position (Location A) to a new position (Location B).

IV. AUTONOMOUS RECONNAISSANCE APPROACHES

In both indoor and outdoor fenced perimeter environments, walls are natural localization points. Therefore, to autonomously navigate a perimeter, our robot utilizes enhanced wall-following combined with obstacle avoidance. Sensor data supplied by a 3D range sensor is used to calculate the robot's position in relation to nearby walls and obstacles. To facilitate both indoor and outdoor operation, the ANDROS Mark VA,14 an all-terrain robot, was chosen as the test platform. Because it is designed for navigating rough terrain and climbing stairs, few obstacles pose a problem, so the 3D range sensor has been mounted horizontally at a height of 60 cm. Let Dr, Dl, and Df represent the radial distances acquired by the 3D range sensor between the robot and the right end point, left end point, and front plane, respectively (see Fig. 5). The angle θr lies between the middle point and the right end point, while θl lies between the middle point and the left end point of the wall or obstacle.

Fig. 5. The robot with a 3D range sensor, R, encounters a wall and samples left and right end points (top view).

The variable d is the difference in the projections of the left and right end points of the wall or any point of the obstacle:

d = Dr cos θr − Dl cos θl, with θr, θl ≤ π/2. (1)
When d is greater than zero, the robot is moving away from the wall or obstacle. Conversely, when d is less than zero, the robot is moving toward the wall or obstacle. A zero value for d implies the robot is perfectly parallel to the wall. Once d is calculated, the robot can correct its heading relative to the wall. Meanwhile, the robot detects forward obstacles via Df and maneuvers around them. However, if an object is deemed too close to maneuver around, the robot simply halts. Otherwise, the program loops until manually halted. Fig. 6 shows the flow chart of the fundamental approach. The constants ε, δ, and µ are the thresholds for deeming a forward obstacle too close, determining if the robot is approaching a corner, and determining if the robot is parallel to the wall, respectively. Based on this fundamental design, we extend it into two approaches: dead-reckoning wall-following navigation (DRWFN) and feedback-sensing wall-following navigation (FSWFN). In FSWFN, the robot's forward travel and turn angles are not calculated before sending control commands; instead, the robot turns and moves forward in fixed units until an obstacle or corner is sensed. In DRWFN, the robot's turning angle, T, and the projected forward travel, Mf, are calculated before sending commands:
T = α d, (2)

Mf = β Df, (3)

where α and β are coefficients transforming d and Df into the angle, T, and the distance, Mf, respectively.

Fig. 6. Flow chart of the fundamental approach: starting from the 3D range sensor input, the robot tests Df against the thresholds ε and δ for obstacle and corner avoidance, turns left when d > µ, turns right when d < −µ, and continues following the wall while |d| ≤ µ.
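The following is a minimal sketch of the DRWFN computation in equations (1)-(3), assuming Python. The threshold values, coefficient values, and the robot command interface are hypothetical placeholders rather than the values used on the ANDROS platform.

```python
# Hypothetical sketch of DRWFN wall-following using equations (1)-(3).
# Threshold and coefficient values below are illustrative assumptions.
import math

EPSILON = 0.8   # forward-obstacle threshold on Df (meters, assumed)
MU = 0.05       # parallel-to-wall threshold on d (meters, assumed)
ALPHA = 1.0     # coefficient transforming d into the turn angle T
BETA = 0.5      # coefficient transforming Df into forward travel Mf


def compute_d(d_r: float, theta_r: float,
              d_l: float, theta_l: float) -> float:
    """Eq. (1): difference of the projections of the right and left
    end points; zero when the robot is parallel to the wall."""
    return d_r * math.cos(theta_r) - d_l * math.cos(theta_l)


def drwfn_step(robot, d_r, theta_r, d_l, theta_l, d_f):
    """One control cycle: halt if too close, else correct heading
    from d and then move forward."""
    if d_f < EPSILON:
        robot.halt()                 # obstacle too close to maneuver around
        return
    d = compute_d(d_r, theta_r, d_l, theta_l)
    if abs(d) > MU:                  # not parallel: correct the heading
        turn_angle = ALPHA * d       # Eq. (2): T = alpha * d
        robot.turn(turn_angle)
    forward = BETA * d_f             # Eq. (3): Mf = beta * Df
    robot.move_forward(forward)
```

In an FSWFN variant, `turn_angle` and `forward` would instead be fixed unit increments, with the sensor feedback loop deciding when to stop turning or advancing.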
Before generating robot control commands in the control command generator phase, we sample 3D range sensor data to sense obstacles in the projected path. If no obstacles are detected, the control command generator phase and robot mobility phase are activated sequentially, and control of the system then returns to the image input phase. Otherwise, the system activates the obstacle avoidance phase to generate an alternate robot control command that avoids the obstacle. The robot control command is calculated beginning with (4) below. In (4), M and M0 represent the current mass motion vector of the moving object and the origin vector, which indicates the center of the image, respectively. The matrix with rows Xi and Xj converts the image coordinates into the 2D world coordinate system. The result W represents the difference vector between the current and original positions in world coordinates. Control commands are then generated based on this difference vector.
W = [Xi; Xj] (M − M0). (4)
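To illustrate how (4) can drive command generation, here is a brief sketch in Python. The calibration matrix values, pixel coordinates, and the mapping from the difference vector to a motion command are all assumed for illustration and do not come from the paper.

```python
# Hypothetical sketch of generating a control command from Eq. (4).
# The calibration matrix X and all numeric values are illustrative only.
import numpy as np

# Rows Xi and Xj of the image-to-world calibration matrix (assumed values).
X = np.array([[0.01, 0.0],
              [0.0, 0.01]])


def world_offset(m: np.ndarray, m0: np.ndarray) -> np.ndarray:
    """Eq. (4): W = [Xi; Xj](M - M0), the difference vector between the
    object's current and original positions in world coordinates."""
    return X @ (m - m0)


def command_from_offset(w: np.ndarray) -> tuple:
    """Map the world-coordinate offset to a (turn, forward) command pair.
    This mapping is a placeholder; any controller could be substituted."""
    turn = float(np.arctan2(w[1], w[0]))   # heading toward the object
    forward = float(np.linalg.norm(w))     # distance to travel
    return turn, forward


# Example: object's mass motion vector vs. the image-center origin vector.
m = np.array([420.0, 300.0])   # current centroid in pixels (assumed)
m0 = np.array([320.0, 240.0])  # image center for a 640x480 frame (assumed)
print(command_from_offset(world_offset(m, m0)))
```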