mSecurity - Modular system for active security inside buildings

Janusz Bedkowski
MANDALA, Al. Jerozolimskie 202, Warsaw, Poland
Email: [email protected]

Tresya Yuliana Fitri
MANDALA, Al. Jerozolimskie 202, Warsaw, Poland
Email: [email protected]

Karol Majek
Institute of Mathematical Machines, ul. Krzywickiego 34, Warsaw, Poland
Email: [email protected]

Abstract—The goal of this research is a modular system for active security inside buildings, composed of autonomous mobile security agents, 3D perception nodes, a car scanning module and a data center for data processing and visualization in the cloud. The system is designed for the protection of urban environments, including critical infrastructures. It will operate in urban areas into which large numbers of citizens are freely admitted for usual activities or special events, or where they routinely reside or gather. Among others, these include parks, squares and markets, shopping malls, train and bus stations, passenger terminals, hotels and tourist resorts, cultural, historical, religious and educational centres, and banks. In this paper the core component of the system, a 360-degree laser system, is presented together with its calibration method, and the method for the search of dangerous objects is discussed. The prototype of the mSecurity project was tested in the Museum of the History of Polish Jews (Warsaw, Poland) and at the airport in Lodz, Poland. The system was able to find abandoned hand luggage and can improve existing security systems.
I. INTRODUCTION
Mobile robots can efficiently map indoor and outdoor environments, and they can perform this task autonomously and without stopping; robots are robust and do not experience fatigue. A minor disadvantage is the need to recharge batteries. We focus on a system capable of performing 3D mapping of a given indoor location for security reasons: the robot compares the current 3D information with a reference model to locate new objects, and these objects are considered dangerous. In this work we aim at a high Technology Readiness Level (TRL) of the proposed system. We therefore chose the well-established and predictable autonomous mobile platform PIONEER 3DX. This platform can efficiently navigate in indoor environments based on a 2D map built from laser readings; the map is accurate, and the overall navigation system is cost-efficient and reliable enough to be considered within the context of our system. We designed a 3D measurement unit, which is available on the market ([6], figure 4). This work extends our previous research on 6D SLAM [2] and is related to the work of other researchers [7] using similar hardware.

There are many similar robotic devices. Figure 1 shows the 3D measurement unit KaRoLa. This unit is very efficient and accurate within the context of the calibration method described in [9]; unfortunately, this hardware is not manufactured for the market, so it can serve only as a state-of-the-art reference. Figure 2 shows an example of a new robotic 3D sensor, the MultiSense-SL sensor head (a combined sensor: a Hokuyo laser range finder mounted on a rotating head, plus a stereo camera [4]). The MultiSense SL is a tri-modal (laser, 3D stereo and video), high-resolution, high-data-rate and high-accuracy 3D range sensor designed for robots working in demanding conditions. The vendor claims that the sensor is suitable for a wide variety of robotics, automation and sensing applications, such as autonomous vehicles, 3D mapping and workspace understanding, and that it is packaged in a rugged, compact housing, along with a low-power FPGA processor, and is precalibrated at the factory. An important piece of background is that the MultiSense SL was the sensor of choice for the Atlas humanoid robots in the DARPA Robotics Challenge (DRC): as the head of the humanoid, the SL provided the majority of the perceptual data used for teleoperation as well as automated control, and in the 2013 trials 5 of the 8 top-scoring teams used the Carnegie Robotics SL sensor head. The MultiSense SL produces dense 3D point clouds from both the spinning laser and the stereo camera, which are accurately aligned and colorized onboard the sensor. The stereo sensor provides extremely dense, full-frame range data at high frame rates, complemented by higher-accuracy data, at lower rates, from the spinning laser; the sensor can also output standard color video. Figure 3 shows an already well-established 3D sensor for mobile robotics designed and manufactured at the Fraunhofer IAIS institute [5]: a continuously rotating 3D laser scanner built around a small 2D laser scanner. Unfortunately, there is no information concerning integration issues, and the vendor does not provide any calibration procedure.

Figure 4 shows the commercial 3D measurement unit used in this work [6]. This unit was designed for easier integration with mobile platforms (PIONEER 3AT, PIONEER 3DX, drRobot, Husky). The advantage of this approach is the possibility of integrating two planar lasers and other hardware via RS232/RS485 interfaces: the horizontal 2D laser measurement system is dedicated to SLAM and obstacle avoidance, while the second laser, mounted on the rotating head, gathers 3D data. It is possible to mount any laser from the SICK family or a Hokuyo. The interface is simplified to a single Ethernet connector and 12 V power, and embedded RS232/485 servers integrate the robot and other sensors, so the integration effort is reduced. The unit is compatible with ROS and a dedicated C++ SDK is available. Figure 5 shows the integrated solution tested in this work: the autonomous mobile robot PIONEER 3DX equipped with the 3D unit. In this paper a calibration procedure for this unit is proposed, so that the sensor provides accurate data.
Fig. 1. 3D measurement unit KaRoLa [9].
Fig. 3. 3D sensor for mobile robotics designed and manufactured at the Fraunhofer IAIS institute, http://www.3d-scanner.net/index.html.

Fig. 2. Example of a modern robotic 3D sensor: MultiSense-SL sensor head (combined sensor: Hokuyo laser range finder mounted on a rotating head, stereo camera, http://carnegierobotics.com/multisense-sl/).
II. SYSTEM OVERVIEW
The main component of the system is the mobile robot shown in figure 5. The robot is equipped with a planar laser range finder for SLAM and autonomous navigation, and with a continuously rotating planar laser for 3D measurement. The 3D data is used for 3D map building and for the detection of new objects. The robot is connected to the base station via a WiFi router, and all computation is done on the base station.

Fig. 4. Commercial 3D measurement unit used in this work (http://www.mandalarobotics.com).

Fig. 5. The core module of the mSecurity system: mobile robot PIONEER 3DX equipped with the new 3D laser unit.

We built the system on top of the ARIA software [1]. ARIA (Advanced Robot Interface for Applications) from MobileRobots is a C++ SDK for all MobileRobots/ActivMedia platforms, such as the well-known PIONEER platform. ARIA dynamically controls the robot's velocity, heading, relative heading and other motion parameters through its high-level Actions infrastructure, and it receives position estimates, sonar readings and all other operating data sent by the robot platform. We use the MobileRobots software (Mapper3, ARNL, RobotServer, MobileEyes) with new software modules integrated via the ArNetworking module.

Mapper3 is a tool for creating maps of a robot's operating environment for autonomous localization and navigation. These maps can be used in the MobileSim simulator or in any software using the ARIA library. We use Mapper3 to place and edit obstacles (objects such as walls visible to the robot's sensors) and logical items such as goal points, entry points for the docking station and forbidden areas that the navigation software should plan around. An interesting functionality is that Mapper3 can import the results of scanning an environment with the robot and laser, automatically rectifying and converting the scanned data into a correct map of the environment. This results in a highly accurate map suitable for localization and navigation with ARNL. ARNL provides very accurate localization in a mapped space by combining robot odometry with laser rangefinder data, and very robust 2D navigation even in crowded environments.

New software modules make the robot capable of building a 3D map of the environment based on the ICP method (Algorithm 1 [3]). Our system uses ARIA for creating a 2D floor plan of indoor environments (example in figure 6). This plan includes obstacles, goals and the home position (charging station). We use this floor plan for autonomous 3D mapping: the robot visits the goal positions and performs 3D measurements in a stop-scan fashion. An example 3D map is shown in figure 7. This 3D map is the reference information for the robot when searching for hazardous objects.

RobotServer provides the basic functionality of our robot. It receives the current mission plan and stores the 2D map with the mission plan defined as a set of goals. Once the robot reaches a goal, it sends an acknowledgement to the higher logic layer. RobotServer controls the robot and performs all processing needed for navigation, localization and data acquisition from the basic sensors (odometry, planar laser range finder, sonars, battery status), and it is connected to the onboard robot controller. MobileEyes is the user interface; it enables sending commands to the robot and receiving all information on-line. A minimal ARIA connection skeleton is sketched below.
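ARIA is the real C++ SDK named above, but the exact setup used inside RobotServer is not published; the following is therefore only a typical ARIA connection skeleton, not the project's code.

// Minimal ARIA skeleton: connect to a PIONEER platform, start the
// processing cycle and report battery voltage (a typical ARIA pattern;
// not the actual RobotServer source, which is not published).
#include "Aria.h"

int main(int argc, char** argv) {
    Aria::init();
    ArArgumentParser parser(&argc, argv);
    parser.loadDefaultArguments();

    ArRobot robot;
    ArRobotConnector robotConnector(&parser, &robot);
    if (!robotConnector.connectRobot()) {  // serial port or TCP, from args
        ArLog::log(ArLog::Terse, "Could not connect to the robot.");
        Aria::exit(1);
    }
    if (!Aria::parseArgs()) {
        Aria::logOptions();
        Aria::exit(2);
    }

    robot.runAsync(true);   // run the synchronization cycle in a thread
    robot.enableMotors();
    ArLog::log(ArLog::Normal, "Battery: %.1f V", robot.getBatteryVoltage());

    robot.waitForRunExit(); // block until the robot connection is closed
    Aria::exit(0);
    return 0;
}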
Fig. 6. 2D floor plan of an indoor environment.
Fig. 7. Example 3D map created by the robot.
Algorithm 1 Iterative Closest Point (ICP)
INPUT: two point clouds A = {a_i}, B = {b_i}, an initial transformation T_0
OUTPUT: the transformation T which aligns A and B
  T ← T_0
  for iter ← 0 to maxIterations do
    for i ← 0 to N do
      m_i ← FindClosestPointInA(T · b_i)
      if ||m_i − T · b_i|| ≤ d_max then
        w_i ← 1
      else
        w_i ← 0
      end if
    end for
    T ← argmin_T { Σ_i w_i ||T · b_i − m_i||² }
  end for
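The paper does not publish an implementation of Algorithm 1, so the following is only a minimal C++ sketch under the assumption that Eigen is available; the names Cloud, closestIndex and icp are ours. The argmin step uses the closed-form SVD solution for point-to-point alignment [3], and the brute-force nearest-neighbor search stands in for the k-d tree (or GPU NNS) that a real implementation would use.

// Minimal point-to-point ICP sketch (illustrative, not the paper's code).
#include <Eigen/Dense>
#include <vector>
#include <limits>

using Cloud = std::vector<Eigen::Vector3d>;

// Brute-force nearest neighbor; a k-d tree would be used in practice.
static int closestIndex(const Cloud& A, const Eigen::Vector3d& q) {
    int best = 0;
    double bestD = std::numeric_limits<double>::max();
    for (int i = 0; i < (int)A.size(); ++i) {
        double d = (A[i] - q).squaredNorm();
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;
}

Eigen::Isometry3d icp(const Cloud& A, const Cloud& B,
                      Eigen::Isometry3d T, double dmax, int maxIterations) {
    for (int iter = 0; iter < maxIterations; ++iter) {
        // Collect correspondences within dmax (the w_i = 1 pairs).
        std::vector<Eigen::Vector3d> src, dst;
        for (const auto& b : B) {
            Eigen::Vector3d tb = T * b;
            const Eigen::Vector3d& m = A[closestIndex(A, tb)];
            if ((m - tb).norm() <= dmax) { src.push_back(b); dst.push_back(m); }
        }
        if (src.size() < 3) break;
        // Closed-form least-squares alignment via SVD of the covariance.
        Eigen::Vector3d cs = Eigen::Vector3d::Zero(), cd = Eigen::Vector3d::Zero();
        for (size_t i = 0; i < src.size(); ++i) { cs += src[i]; cd += dst[i]; }
        cs /= src.size(); cd /= dst.size();
        Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
        for (size_t i = 0; i < src.size(); ++i)
            H += (src[i] - cs) * (dst[i] - cd).transpose();
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
        if (R.determinant() < 0) {  // correct an improper rotation (reflection)
            Eigen::Matrix3d V = svd.matrixV();
            V.col(2) *= -1.0;
            R = V * svd.matrixU().transpose();
        }
        T.linear() = R;
        T.translation() = cd - R * cs;
    }
    return T;
}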
A. 3D laser calibration

The calibration procedure of our 3D mapping system is inspired by [9], where the authors present the light-weight and compact 3D laser scanner KaRoLa together with a calibration procedure based on the Particle Swarm Optimization (PSO) method. They observe that a 360-degree scan contains redundant information, as each point in the environment is observed twice; given the exact mounting pose, the point cloud can be deformed to correct the resulting misalignment. By splitting each scan line in half, two separate point clouds are created which, given a perfect mounting, should ideally be identical except for sensor noise. The calibration method corrects the misalignment based on a single measurement in an indoor environment, so no calibration equipment such as a chessboard is needed; this is very pragmatic from the application point of view. Our method uses the Iterative Closest Point algorithm (Algorithm 1) to find the misalignment of the laser mounting pose (Algorithm 2). Figure 8 shows the 3D point cloud before calibration; figure 9 shows the 3D point cloud after calibration.

Algorithm 2 3D laser calibration
INPUT: two point clouds A = {a_i}, B = {b_i}, an initial transformation Mc_0
OUTPUT: the calibration transformation Mc
  Mc ← Mc_0
  for iter ← 0 to maxIterations do
    transform point cloud A via Mc
    transform point cloud B via Mc
    T ← ICP(A, B)
    {t_x, t_y, t_z, yaw, pitch, roll} ← T
    Mc ← Mc + {t_x/2, t_y/2, t_z/2, yaw/2, pitch/2, roll/2}
  end for
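A sketch of the calibration loop of Algorithm 2 is given below, reusing the icp() function from the previous listing. Algorithm 2 halves the translation and Euler angles of the ICP result; here the rotation is halved with a quaternion slerp instead, which is equivalent for the small misalignments involved. The splitting of each scan line into left and right half clouds is assumed to have been done beforehand.

// Hedged sketch of Algorithm 2: estimate the laser mounting pose Mc by
// repeatedly aligning the two half-scan clouds and accumulating half of
// the residual transform into Mc. Cloud and icp() come from the listing above.
static Cloud transformCloud(const Cloud& C, const Eigen::Isometry3d& M) {
    Cloud out;
    out.reserve(C.size());
    for (const auto& p : C) out.push_back(M * p);
    return out;
}

Eigen::Isometry3d calibrate(const Cloud& left, const Cloud& right,
                            Eigen::Isometry3d Mc, int rounds) {
    for (int iter = 0; iter < rounds; ++iter) {
        Cloud A = transformCloud(left, Mc);   // left halves of the scan lines
        Cloud B = transformCloud(right, Mc);  // right halves of the scan lines
        // Residual misalignment between the two redundant views.
        Eigen::Isometry3d T = icp(A, B, Eigen::Isometry3d::Identity(),
                                  /*dmax=*/0.1, /*maxIterations=*/30);
        // Apply half of the correction to the mounting pose, since each
        // half cloud carries half of the mounting error.
        Eigen::Quaterniond q(T.rotation());
        Eigen::Quaterniond halfQ = Eigen::Quaterniond::Identity().slerp(0.5, q);
        Eigen::Isometry3d halfT = Eigen::Isometry3d::Identity();
        halfT.linear() = halfQ.toRotationMatrix();
        halfT.translation() = 0.5 * T.translation();
        Mc = halfT * Mc;
    }
    return Mc;
}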
Fig. 9. 3D cloud of points after calibration. Red color: points created from the left part of the scan line; blue color: points created from the right part of the scan line.
B. Detection of dangerous objects

To find dangerous objects we use an NNS (Nearest Neighbor Search) procedure for each query point in the current 3D measurement. The robot autonomously reaches a temporary goal with an assigned heading, and we perform an ICP alignment of the current 3D scan to the global reference model, which reduces the localization error. Each query point from the current 3D scan for which no nearest neighbor exists within a certain radius is considered part of a dangerous object. Algorithm 3 assigns each point one of two labels:

• object_dangerous,
• object_ordinary.

The GUI marks dangerous objects in red (figures 13, 14). In the current implementation of the system the operators decide about the next action concerning the neutralization of dangerous objects; the goal of the robotic action is to provide the necessary information (a colored point cloud) in an autonomous fashion. This information is available via Ethernet for authorized personnel, as discussed in the next section. An important issue is the proper initialization of the system by providing a reference 3D model with a minimized number of dynamic obstacles; this guarantees that all dynamic obstacles are marked during the mobile robot survey. The 3D data acquisition system limits the minimum dimensions of detectable obstacles: currently we are able to detect average backpacks located 5 m from the robot. A sketch of the labeling step is given after Algorithm 3 below.

Fig. 8. 3D cloud of points before calibration. Red color: points created from the left part of the scan line; blue color: points created from the right part of the scan line.

Algorithm 3 Detection of dangerous objects
INPUT: reference model Rm, actual measurement Am
OUTPUT: a label in {object_dangerous, object_ordinary} for each point
  for i ← 0 to N_Am do
    if NNS(Am_i, Rm) finds no neighbor within the radius then
      label_{Am_i} ← object_dangerous
    else
      label_{Am_i} ← object_ordinary
    end if
  end for
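As with ICP, the paper gives only pseudocode, so the following C++ sketch is illustrative; it reuses the Cloud type and the brute-force closestIndex() from the ICP listing, and the radius parameter corresponds to the "certain radius" mentioned above. In the deployed system this NNS step runs on the HPC cloud backend described in the next section.

// Sketch of Algorithm 3: a measured point with no neighbor in the
// reference model within `radius` is labeled dangerous. closestIndex()
// is the brute-force NNS from the ICP sketch; the deployed system uses
// accelerated NNS in the cloud backend instead.
#include <cstddef>
#include <vector>

enum class Label { Dangerous, Ordinary };

std::vector<Label> detectDangerousObjects(const Cloud& referenceModel,
                                          const Cloud& measurement,
                                          double radius) {
    std::vector<Label> labels(measurement.size(), Label::Ordinary);
    for (std::size_t i = 0; i < measurement.size(); ++i) {
        const Eigen::Vector3d& nn =
            referenceModel[closestIndex(referenceModel, measurement[i])];
        if ((nn - measurement[i]).norm() > radius)
            labels[i] = Label::Dangerous;  // no support in the reference model
    }
    return labels;
}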
C. GUI over Ethernet
The GUI system is implemented using a SaaS (Software as a Service) model with the capability to perform High Performance Computing (HPC) in the cloud. The HPC approach is a relatively new topic, especially in mobile robotics; we use it for the 3D mapping tasks (NNS, ICP) and for the detection of dangerous objects. HPC is combined with GPU virtualization technology, allowing many users to access computationally demanding applications in the cloud. We presented this technology applied to the ICARUS Search and Rescue system in [8]. We provide the needed computation power through a mobile data center based on an NVIDIA GRID server (Supermicro RZ-1240i-NVK2 with two VGX K2 cards, in total 4 GPUs with 4 GB RAM each) capable of GPU virtualization. In this project we used the Citrix XenApp infrastructure to build the SaaS model. In XenApp, many users can share a single GPU by being given access to a Microsoft Windows Server 2012 system or to a group of published applications; the XenApp model thus allows sharing a single GPU among numerous applications using CUDA. Published applications can be accessed as SaaS from a web browser after installing Citrix Receiver, a thin client compatible with all popular operating systems for mobile devices (Mac OS X, Linux, Windows, iOS, Android etc.). The limitation is that the newest GRID-compatible GPU drivers support only the CUDA 5.5 programming framework and the GRID GPU processors offer CUDA 3.0 compute capability; the provided functionalities are nevertheless sufficient for our application. Besides CUDA, the GRID technology supports building applications with 3D rendering (OpenGL 4.4), so in our system it is used for the remote GUI for authorized personnel. We expect this functionality to be very interesting for EOD personnel. Our system improves global awareness of the situation by providing 3D rendering of the mapped environment with marked dangerous objects over Ethernet. The SaaS-based security system also enables integration with existing CCTV systems. The advantage of this approach is that all information from the robot can be transmitted over Ethernet; thus, for example, Crisis Management Centers will be able to use the robotic information to coordinate crisis actions. We tested the experimental setup at the Institute of Mathematical Machines, shown in figure 10. Our system is able to stream rendered information from the robot to 20 users simultaneously.
Fig. 11. mSecurity project tested in the Museum of the History of Polish Jews (Warsaw, Poland).
Fig. 10. Experimental setup of a simulated Crisis Management Centre at the Institute of Mathematical Machines for testing the SaaS model for robotic applications.
III. EXPERIMENTS
The prototype of the mSecurity project was tested in the Museum of the History of Polish Jews (figure 11) and at the airport in Lodz, Poland (figure 12). We were able to deploy the system within 30 minutes. After one hour of 2D mapping we obtained an accurate floor plan. Mission planning takes on average two hours. The robot performs 3D measurements autonomously in a stop-scan fashion, so human interaction is reduced. A typical single measurement takes one minute, and the robot reaches the next goal within seconds because the average distance between goals should not exceed 5 meters. The system was able to find abandoned hand luggage (figure 13), and the robot is able to detect people (figure 14). The system is able to improve existing security systems.
Fig. 12. mSecurity project tested at the airport in Lodz, Poland.
Fig. 13. Found abandoned luggage (red color).
IV. FUTURE WORK

Figure 15 shows a visualization of the new mSecurity modules. The novelty of the mSecurity project is an integrated solution for security inside buildings. The system will fill the gap in existing CCTV systems by introducing autonomous mobile security agents. The mSecurity project will use the presented 3D unit as a core component of the mSecurity modules. The added value will be increased area coverage by the autonomous mobile security agent, visualization of the threat available over Ethernet for authorized personnel, 360-degree data registration around the mobile security agents (camera and geometry from the 3D laser measurement system) and sufficient autonomous search for threats based on a geometry comparison between the model and the current 3D measurement. The mSecurity project will be improved in the future: new security sensors will be integrated with the autonomous mobile security agent to increase situational awareness. It is worth investing in artificial intelligence applied to security applications, because it assists security personnel in their routines by reducing fatigue, which increases the safety level. Furthermore, the novel 360-degree area coverage with a mobile autonomous security agent will provide new functionality for future crisis analysis and training. The mSecurity idea is that all information has to be recorded at maximum resolution; this problem will be the main aspect of the future feasibility study.
Fig. 15. Future modules of the mSecurity project. Left: static 3D unit for screening of 3D environments; middle: gate for car screening; right: new autonomous mobile robot extended with a spherical camera (Ladybug 3).

Fig. 14. Detected people (red color).
V. CONCLUSIONS

The goal of the research was the design of a modular system for active security inside buildings, composed of autonomous mobile security agents, 3D perception nodes, a car scanning module and a data center for data processing and visualization in the cloud. The system is designed for the protection of urban environments, including critical infrastructures. It was tested in urban areas into which large numbers of citizens are freely admitted for usual activities or special events, or where they routinely reside or gather; among others, these include parks, squares and markets, shopping malls, train and bus stations, passenger terminals, hotels and tourist resorts, cultural, historical, religious and educational centers, and banks. In the paper the core component of the system, the 360-degree laser system, was presented together with its calibration method. The method for the search of dangerous objects was discussed, and results in real task scenarios were presented. The prototype of the mSecurity project was tested in the Museum of the History of Polish Jews (Warsaw, Poland) and at the airport in Lodz, Poland. The system was able to find abandoned hand luggage and can improve existing security systems.
ACKNOWLEDGMENT

This work was done with the support of the NCBiR (Polish National Centre for Research and Development) project: Research of Mobile Spatial Assistance System, No. LIDER/036/659/L4/12/NCBR/2013.

REFERENCES

[1] ADEPT. http://robots.mobilerobots.com/wiki/ARIA, 2015.
[2] J. Bedkowski, A. Maslowski, and G. de Cubber. Real time 3D localization and mapping for USAR robotic application. Industrial Robot, 39(5):464-474, 2012.
[3] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, February 1992.
[4] Carnegie Robotics. http://carnegierobotics.com/multisense-sl/, 2015.
[5] Fraunhofer. http://www.3d-scanner.net/index.html, 2015.
[6] MANDALA. http://www.mandalarobotics.com, 2015.
[7] A. Nüchter, K. Lingemann, J. Hertzberg, and H. Surmann. 6D SLAM - 3D mapping outdoor environments. Journal of Field Robotics, 24(8-9):699-722, 2007.
[8] M. Pelka, J. Bedkowski, K. Majek, P. Musialik, A. Maslowski, A. Coelho, R. Baptista, R. Goncalves, G. De Cubber, H. Balta, J. Sanchez, and S. Govindraj. Training and support system in the cloud for improving the situational awareness in search and rescue (SAR) operations. In IEEE International Symposium on Safety, Security, and Rescue Robotics, 2014.
[9] L. Pfotzer, J. Oberländer, A. Roennau, and R. Dillmann. Development and calibration of KaRoLa, a compact, high-resolution 3D laser scanner. In Proceedings of the IEEE International Symposium on Safety, Security, and Rescue Robotics, 2014.