Article

A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

Kui Liu 1, Sixiao Wei 1, Zhijiang Chen 1, Bin Jia 1, Genshe Chen 1, Haibin Ling 2, Carolyn Sheaff 3 and Erik Blasch 3,*

1 Intelligent Fusion Technology, Inc., Germantown, MD 20876, USA; [email protected] (K.L.); [email protected] (S.W.); [email protected] (Z.C.); [email protected] (B.J.); [email protected] (G.C.)
2 Department of Computer and Information Sciences, Temple University, Philadelphia, PA 19122, USA; [email protected]
3 Air Force Research Laboratory, Information Directorate, Rome, NY 13441, USA; [email protected]
* Correspondence: [email protected]; Tel.: +1-315-330-2395

Academic Editor: Joonki Paik
Received: 29 November 2016; Accepted: 7 February 2017; Published: 12 February 2017

Abstract: This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applies a front-end web-based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The Cloud and GPU-based computing provides an efficient real-time target recognition and tracking approach as compared to methods in which the workflow is executed using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that the GC-MTT approach provides drastically improved tracking at low frame rates over realistic conditions.

Keywords: high performance computation; cloud infrastructure; target detection; target tracking

1. Introduction

Currently, various sensor platforms are used for persistently monitoring very large areas. For instance, Wide Area Motion Imagery (WAMI) systems on aerial platforms flying at 7000 feet and covering an area of a 2 mile radius can be used as an aid in disaster relief, emergency response, and traffic management [1]. Such systems typically produce an overwhelmingly large amount of information, as provided in the Columbus Large Image Format (CLIF) dataset [2]. Thus, the management of WAMI is a Big Data problem [3,4]. Monitoring such a large amount of data with a human operator is not feasible, which calls for an automated method of processing the data to support information fusion. The detection and tracking of the moving objects, as well as data storage and retrieval, are critical in the design and implementation of such WAMI data management systems. Compared with traditional video surveillance tasks, WAMI collections are characterized by large areas and low frame rates, while the pixels on targets are small. Thus, traditional computer vision methods or architectures are inadequate to solve these problems due to the high computational complexity and the low resolution of the targets of interest in the imagery.


This paper presents a design that (1) combines Cloud technology with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture and (2) applies the architecture to detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI).

The Cloud computation system gives a flexible architecture which enables applications to add to or modify the data system as the needs change. Cost-effective and readily-available components from any Information Technology (IT) vendor can be used. A Graphical Processing Unit (GPU), found on video cards and as part of display systems, is a specialized processor that can rapidly execute commands for manipulating and displaying images. GPU-accelerated computing offers faster performance across a broad range of design, animation and video applications. This paper presents a design and implementation of a computation architecture integrating the Cloud and GPU high efficiency platforms for detecting MUltiple MOving Targets (MUMOTs) based on WAMI images. Using the resulting detections and tracks, it is possible to implement pattern-of-life (PoL) analysis [5,6] and anomaly detection algorithms for MUMOTs. However, it is worth noting that the introduced approach is general purpose in the sense that it is applicable to other High Performance Computation (HPC) problems such as statistical big data analytics based on many object features such as size, frequency and moving direction [7], aerospace satellite orbit propagation problems [8,9], and pedestrian detection [10,11] in large scale images. Moreover, the same computation architecture can be applied to conventional computer vision problems. The speed-up performance on the WAMI dataset is demonstrated in the experiment section (Section 4).

Traditional visual detection and tracking algorithms mainly focus on detecting a limited number of objects in small scenes and high frame rate image sequences. Therefore, traditional methods cannot be directly generalized to WAMI scenarios with low frame rates. The large scale images taken by WAMI systems are more than 140,000,000 pixels, as shown in Figure 1. The frame rate of the real-time input image is at most two frames per second. Objects in WAMI data are much smaller than in full motion imaging systems, with vehicle sizes ranging from 4 to 70 pixels in grayscale image groups. The lack of computationally efficient analysis tools has become a bottleneck for utilizing WAMI data in urban surveillance. Accordingly, it is desirable to develop a computational and data management platform for detecting multiple moving objects based on large scale aerial images via high performance computing technology.

Figure 1. (a) Example images from the AFRL CLIF dataset [1] and (b) the WPAFB dataset [12]. The CLIF images are not stitched and the dashed red lines show the boundary of six images from different electro-optical (EO) cameras; whereas the WPAFB images are the results after stitching the images from six cameras.


Target tracking has been extensively investigated as described in many papers [13–17]. Santhaseelan et al. [18] proposed a robust feature-based method to track objects in WAMI data by exploiting local phase and orientation information based on the monogenic signal representation. However, this method does not present a systematic architecture for a prototype. Eekeren et al. [19] built a pipeline which comprises a static vehicle detector, a smart object filtering algorithm using a 3D building reconstruction, and multi-camera object tracking based on template matching. This method focused on 3D reconstruction and anomaly detection, while no experimental results were reported. Pelapur et al. [20] proposed the Likelihood of Features Tracking (LoFT) system that is based on fusing multiple sources of information about the target and its environment, akin to a track-before-detect (TBD) approach. These methods reported track accuracy without comparisons of the computational complexity and processing time, which depend on the actual real-time image registration and stitching methods. Likewise, these methods and systems did not highlight the MUMOTs problem under realistic conditions. The computation architecture presented in this paper differs from all the previous works in the sense that it turns the inherently high computational complexity problem into a real-time implementation by using a Cloud architecture.

This paper develops a framework which utilizes image sets from WAMI sensor streaming to detect and track objects of interest for real-time applications. The High Performance Computation (HPC) framework includes: an Apache Hadoop distributed storage and distributed processing system; multiple GPUs for parallel computations; and a front-end web server for data storage and retrieval. The Apache Hadoop framework utilizes the MapReduce programming model to distribute the computational algorithms in parallel across the clusters. Each cluster includes a CPU and multiple GPUs. The MapReduce program is composed of a Map() procedure, which performs registration, background generation, foreground generation, vehicle detection, data association and trajectories generation, and a Reduce() procedure, which performs a summary operation (generating the target track identifications (IDs) and saving the detection and trajectories information in HDFS (Hadoop Distributed File System)). Moreover, the MapReduce program arranges the distributed clusters and runs the GPU tasks in a Compute Unified Device Architecture (CUDA) parallel computing platform. In this GC-MTT MUMOTs detection and tracking system, registration, background generation and foreground generation are performed in GPUs. A front-end web server is developed to present the most useful data and obtain abstract and meaningful information for human analysts. Specifically, a web-based data visualization system is developed to communicate with the Apache Hadoop cluster for conducting the real-time tracking analysis and user interaction. More details are introduced in Section 3. Compared with running the MUMOTs tasks on CPUs or GPUs alone, the application of a distributed and parallel computing structure based on Apache Hadoop MapReduce and CUDA Basic Linear Algebra Subroutines (cuBLAS) can achieve real-time detection and tracking [21,22].
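As a minimal illustration of this Map()/Reduce() decomposition, the following plain C++ sketch is given (a hypothetical example, not the authors' Hadoop code; the type and function names are invented for clarity). The Map stage would emit per-batch tracklets and the Reduce stage would assign global track IDs, standing in for the HDFS write:

// Hypothetical sketch of the Map()/Reduce() decomposition in plain C++
// (illustrative only; the real system uses Hadoop MapReduce with CUDA kernels).
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Detection { int frame; float x, y; };              // one detected vehicle
struct Track { int id; std::vector<Detection> path; };    // one trajectory

// Map(): registration, background/foreground generation, detection,
// data association and trajectories generation for one image batch.
std::vector<Track> mapBatch(const std::vector<std::string>& framePaths) {
    std::vector<Track> tracklets;
    // ... per-batch processing would populate tracklets here ...
    (void)framePaths;
    return tracklets;
}

// Reduce(): summary operation -- generate global track IDs and store the
// detection and trajectory records (stand-in for the HDFS write).
std::map<int, Track> reduceTracks(const std::vector<std::vector<Track>>& partials) {
    std::map<int, Track> globalTracks;
    int nextId = 0;
    for (const std::vector<Track>& batch : partials)
        for (const Track& t : batch) {
            Track g = t;
            g.id = nextId++;
            globalTracks[g.id] = g;
        }
    return globalTracks;
}

int main() {
    std::vector<std::vector<Track>> partials;
    partials.push_back(mapBatch(std::vector<std::string>(1, "frame_000.png")));
    std::cout << "global tracks: " << reduceTracks(partials).size() << std::endl;
    return 0;
}

In the actual system these stages are scheduled by Hadoop MapReduce, with the image-processing steps inside the Map procedure offloaded to CUDA kernels as described in Section 3.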
Moreover, the obtained detection and recognition results for the MUMOTs indicate that the parallel-based approach provides drastically improved, sped-up performance in real-time and under realistic conditions. One of the contributions of the paper is that a non-real-time algorithm achieves real-time performance through the application of a Cloud and GPU parallel computing infrastructure.

Cloud and parallel computation has become increasingly important for computer vision and image processing systems [23,24]. A Cloud-based framework uses Cloud computing, which is constructed within high performance clusters to include the combination of CPUs and GPUs [25–27]. A local server in the Cloud is provided for data storage and retrieval, and a web portal server is provided for the user. Based on the local server, the tracking results (trajectories of the objects of interest) generated from the computation nodes in a Hadoop Distributed File System (HDFS) are converted and saved in the database. From the web portal, the user chooses algorithms, datasets and system parameters such as the number of computation nodes in operation, the image registration methods and the processing units (with or without Hadoop, CPU or GPU processing). A controller in the Cloud will then decide the amount of computing resources to be allocated to the task in order to achieve the user's performance requirements.


Inside the Cloud, each computation node resides within a cluster; one CPU and multiple GPUs comprise each cluster. The computation nodes are capable of running various high performance computation tasks. In this paper, the high performance tasks for image-based MTT include registration, background generation, foreground generation, detection, data association, and trajectories generation, which are usually run by several threads in one or more computation nodes in parallel.

The rest of the paper is organized as follows. Section 2 summarizes the GC-MTT framework. Section 3 discusses the Cloud and GPU infrastructure, block-wise image registration, parallel implementation and the front-end web-based demonstration. Section 4 presents the detection and tracking results and the speed-up performance, and Section 5 provides conclusions.

2. Overview of the Proposed GC-MTT Framework

Until recently, typical experiments conducted for WAMI object tracking reported in the literature have mostly been CPU-based schemes that do not explicitly consider computational complexity. However, in practice, the resolution of a WAMI dataset captured under realistic conditions is considerably large while the resolution of the objects of interest is small. As a result, target detection, recognition, and tracking become quite challenging, and many false alarms are generated by existing approaches. Key tracking computation modules consist of a register, detector and associator in the Cloud architecture.

Figure 2 depicts a high level view of a host in the Cloud system. The HPC serves as the computation nodes in the Cloud. All components of the same task have access to shared storage in the Cloud in the HDFS. The user only interacts with the system through the Web Graphic User Interface (GUI). The user's computing requests from the web GUI are passed to the controllers for further processing. The controller assigns an appropriate number of jobs to computation nodes for each request. Each node runs one assigned task (Register, Detector and Associator) and sends the results back to HDFS and then to the local server. The web GUI will then display the processing results in real-time once the backend processing finishes. The local server uses a database to store the real-time performance of all tasks in the system. It can also monitor the Cloud's performance such as average CPU/GPU load and memory usage [28]. The user can choose which metrics are displayed on the web GUI and can call other visual analytic tools such as the Global Positioning System (GPS) coordinates of the objects of interest at a particular instant, the 3-dimensional trajectories of an object [29], or pattern of life (PoL) analysis of MUMOTs. In the MUMOTs detection and tracking scenarios, the key components of the proposed Cloud and GPU system perform the following tasks:



•	User: The user chooses the system configuration, such as the options of the various register, detector and associator algorithms; assigns the computation nodes in operation and the selection of processing units; and sends commands to the system to initiate a task.
•	Web GUI: The web GUI communicates with the user by receiving input commands, displaying processing results and presenting analytical system performance.
•	Controller: The controller receives commands from the web GUI, makes decisions on how many resources are needed to satisfy the required performance, assigns jobs and tasks to computation nodes in the Cloud, calculates processing speed in real-time, and informs the web GUI of the processing results.
•	Visualization: The local server collects performance metrics such as processing speed and system load, and provides a web GUI query service when there is a need to display the metrics.
•	High performance clusters: Each cluster can act as a register, detector or associator in the system. The tasks which will be performed in the CPU or GPUs are decided by the controller.

Details of the above key modules will be further described in the following sections.


Figure 2. The high level view of the Cloud system for MUMOTs detection and tracking.

3. Proposed MUMOTs Detection Infrastructure

3.1. Detailed Cloud Computation Infrastructure

Figure 3 shows the overall data flow of the Cloud and GPU infrastructure. Besides GPU acceleration of the registration and of the background and foreground generation, vehicle detection and data association were performed on a Cloud using GPU-enabled computation nodes.

Figure 3. The Cloud and GPU high performance computation infrastructure for the MUMOTs problem.

This Cloud computation system consists of the following components:


3.1.1. Mapper in Hadoop

The Mapper of the GC-MTT system starts with the user's selection of the WAMI image dataset. The Controller automatically distributes the images to the computation nodes. A master node (e.g., node0) populates the jobs into the computation nodes and launches a number of operational nodes. On each operational computation node, the image registration transforms the different sets of data into one coordinate system, since the homography matrix generated from the image registration yields the rotation and translation matrices. With the rotation and translation matrices, the coordinates in the previous frames are projected into the current frames and thus a general background of the image sets can be generated through image stitching techniques.
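For illustration (a standard homography relation consistent with the description above, not an equation taken from the paper), a pixel $(x, y)$ in a previous frame is mapped into the current frame by the $3 \times 3$ homography $H$ in homogeneous coordinates:

$[x' \;\; y' \;\; w']^{\top} = H\,[x \;\; y \;\; 1]^{\top}, \qquad x_{\mathrm{curr}} = x'/w', \quad y_{\mathrm{curr}} = y'/w'.$

When the camera intrinsics are known, $H$ can be decomposed to recover the rotation and translation components used for the frame-to-frame projection and stitching.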
The background generation process includes a background setting step, an image averaging step, and a background extraction step. The background extraction is a parallelized process implemented on the GPU using the dim3 data structure.

The foreground generation process comprises a pixel value comparison step, a value assigning step, and a foreground extraction step. It also uses the Hyper-Q computation framework to enable multiple CPU cores to launch jobs on a single GPU simultaneously, increasing GPU utilization, minimizing CPU idle time, and introducing a Grid Management Unit (GMU) to create multiple hardware work queues that reduce synchronization time [30].
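A minimal sketch of this multi-stream launch pattern is given below (an assumption of how Hyper-Q could be exercised, not the authors' code): several host threads each submit a per-partition kernel into their own CUDA stream, and on Kepler-class GPUs such as the K20Xm used in Section 4 the hardware can schedule these independent streams concurrently.

// Hypothetical multi-stream sketch (illustrative only): one OpenMP host thread
// per image partition, each launching into a private CUDA stream so that
// Hyper-Q can overlap the independent launches on a single GPU.
#include <cstdio>
#include <cuda_runtime.h>
#include <omp.h>

// Placeholder per-partition kernel; the real work here would be the pixel
// comparison and value-assigning steps of foreground generation.
__global__ void processPartition(unsigned char* pixels, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pixels[i] = 255 - pixels[i];
}

int main() {
    const int numPartitions = 4;
    const int pixelsPerPartition = 1 << 20;
    unsigned char* dPixels = NULL;
    cudaMalloc(&dPixels, numPartitions * pixelsPerPartition);

    cudaStream_t streams[numPartitions];
    for (int p = 0; p < numPartitions; ++p) cudaStreamCreate(&streams[p]);

    // Each CPU core launches work for its partition into its own stream.
    #pragma omp parallel for num_threads(numPartitions)
    for (int p = 0; p < numPartitions; ++p) {
        dim3 block(256);
        dim3 grid((pixelsPerPartition + block.x - 1) / block.x);
        processPartition<<<grid, block, 0, streams[p]>>>(
            dPixels + p * pixelsPerPartition, pixelsPerPartition);
    }
    cudaDeviceSynchronize();

    for (int p = 0; p < numPartitions; ++p) cudaStreamDestroy(streams[p]);
    cudaFree(dPixels);
    printf("all partitions processed\n");
    return 0;
}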

The SVM (Support Vector Machine) classification process implements histogram of oriented gradients (HoG) features to compute color gradients and obtain gradient magnitudes and orientations via convolution, and then calculates probabilities or confidence levels of the MUMOTs based on the gradient magnitudes and orientations.
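For reference, the standard HoG gradient computation (stated here as background, not quoted from the paper) convolves the image $I$ with the one-dimensional kernels $[-1, 0, 1]$ and $[-1, 0, 1]^{\top}$ to obtain $g_x$ and $g_y$, from which

$m(x, y) = \sqrt{g_x(x, y)^2 + g_y(x, y)^2}, \qquad \theta(x, y) = \arctan\!\left(\frac{g_y(x, y)}{g_x(x, y)}\right).$

Each pixel votes with weight $m$ into the orientation-histogram bin of $\theta$ for its cell, and the concatenated, block-normalized histograms form the descriptor scored by the SVM to give the confidence level of a MUMOT candidate.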

The data association process is the key component to combine the detected MUMOTs in the consecutive WAMI frames into target trajectories. Details of the data association process can be found in our previous work in [31].

3.1.2. Reducer in Hadoop

The Reducer in the Hadoop system performs a summary operation which generates the target track IDs and saves the detection and trajectories information in HDFS.

3.1.3. System Implementation

A distributed, multi-node Apache Hadoop cluster was designed in the back-end for conducting the real-time computation analysis, which includes a Hadoop Distributed File System (HDFS) based on HPCs running on Ubuntu Linux. In the front-end, a web-based data visualization system presents the most useful data and obtains meaningful information for human analysts in support of high level information fusion [32] and context-enhanced information fusion [33]. Figure 4 details the system architecture of both the front-end and the back-end systems.

Figure 4. System Implementation Architecture.

On the back-end, Hadoop is the main framework which implements all the tracking algorithms. Specifically, Hadoop is a framework for job scheduling and cluster resource management, and MapReduce is a system for the parallel processing of large data sets. Once Hadoop is running, a master machine assigns computation work to the different slave machines, and then all the outputs from the slave machines are collected and stored in the HDFS, which provides high-throughput access to application data.


The service integrators perform the interaction between the front-end and the back-end and provide analyzed real-time data to the front-end database for user requests.

On the front-end, the main components are the Client and the Web Server. The Client is directly connected to the Web Server via a Cache. When any functions are required by the client, the web server calls the related application programming interface (API) to create information requests to the supported database. Once user requests are sent out from the front-end web GUI, they are accepted by the service distributor. Then, multi-threading and multi-stream based processing is triggered to execute and generate the tracking results by the service integrators. Finally, the demonstration results are sent back to the front-end web-based display for user interaction.

3.2. Block-Wise Image Registration

Due to the characteristics of WAMI images, including the overwhelming increase in image size, there is a prohibitive memory requirement and computational complexity such that coarse image registration usually takes an unfavorably long processing time on a CPU infrastructure. A fast block-wise registration process, shown in Figure 5, is proposed via a CUDA based parallel computing infrastructure, which comprises:

•	a block-wise Speed Up Robust Feature (SURF) [34] extraction process for each image partition;
•	a point matching process for each image partition;
•	a random sample consensus (RANSAC) algorithm [35] to remove outlier points from the image partitions; and
•	a transformation estimation process of the image partition to generate block-wise homography matrices.

Each registration kernel is configured to have one computation node integrated with n groups of four image partitions at a time instant. Stitching portions of the registered image partitions is based on the block-wise homography matrices generated from the transformation estimation process, wherein the number of threads per block is consistent with the available shared memory of the GPUs.

Figure 5. The concept of block-wise image registration.
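The per-partition logic can be sketched as follows (an illustrative assumption built on the CPU SURF, matching and RANSAC calls of OpenCV 2.4, the version listed in Section 4; the paper's implementation runs these stages through CUDA-based parallel kernels, and the function name registerPartition is invented here):

// Hypothetical per-partition registration sketch using OpenCV 2.4 CPU APIs.
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SURF lives in the nonfree module in OpenCV 2.4
#include <vector>

cv::Mat registerPartition(const cv::Mat& prevPart, const cv::Mat& currPart) {
    // Block-wise SURF extraction for this image partition.
    cv::SurfFeatureDetector detector(400.0);
    cv::SurfDescriptorExtractor extractor;
    std::vector<cv::KeyPoint> kpPrev, kpCurr;
    cv::Mat descPrev, descCurr;
    detector.detect(prevPart, kpPrev);
    detector.detect(currPart, kpCurr);
    extractor.compute(prevPart, kpPrev, descPrev);
    extractor.compute(currPart, kpCurr, descCurr);

    // Point matching step between the two partitions.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(descPrev, descCurr, matches);

    std::vector<cv::Point2f> ptsPrev, ptsCurr;
    for (size_t i = 0; i < matches.size(); ++i) {
        ptsPrev.push_back(kpPrev[matches[i].queryIdx].pt);
        ptsCurr.push_back(kpCurr[matches[i].trainIdx].pt);
    }
    // RANSAC removes outlier correspondences and estimates the block-wise
    // homography (transformation estimation step).
    return cv::findHomography(ptsPrev, ptsCurr, CV_RANSAC, 3.0);
}

Each registration kernel would invoke this logic for its group of four partitions and pass the resulting block-wise homographies to the stitching stage.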


3.3. Parallel Background and Foreground Generation

As can be seen from Figure 6, a GPU-based background and foreground generation is provided. As is well known, CPU-based image processing techniques require many two-dimensional traversals of the image sequences. This 2D operational structure involves many computations, especially when the input sequence is a set of large size images. The following steps are performed in GPUs: (1) Background setting of each image with a partition to mask with zero pixel values; (2) Image averaging and background extraction are then performed; (3) To generate the foreground, pixel values of the output images are compared with a predetermined threshold value. For example, if the gray value of a pixel is larger than the predetermined threshold, the pixel is determined to be a portion of the foreground image and is assigned a value of "0". On the other hand, if the gray value of a pixel is smaller than the predetermined threshold value, the pixel is determined to be a portion of the background image and is assigned a value of "1". Thus, the binary foreground image is extracted.

Figure 6. The concept of GPU based background and foreground generation.
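A minimal CUDA sketch of steps (2) and (3) is shown below (an assumption of how they could map to kernels, not the authors' implementation; the thresholded "output image" is taken here to be the absolute difference between the current frame and the averaged background):

// Hypothetical CUDA kernels for steps (2) and (3) above (illustrative only).
#include <cuda_runtime.h>
#include <math.h>

// Step (2): average the frame stack to build the background model.
__global__ void averageBackground(const unsigned char* frames, float* background,
                                  int width, int height, int numFrames) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;
    float sum = 0.0f;
    for (int f = 0; f < numFrames; ++f)
        sum += frames[f * width * height + idx];
    background[idx] = sum / numFrames;
}

// Step (3): threshold the difference image; foreground pixels get "0" and
// background pixels get "1", following the convention in the text above.
__global__ void extractForeground(const unsigned char* frame, const float* background,
                                  unsigned char* foreground, int width, int height,
                                  float threshold) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;
    float diff = fabsf((float)frame[idx] - background[idx]);
    foreground[idx] = (diff > threshold) ? 0 : 1;
}

// Host-side launch using the dim3 structure mentioned in Section 3.1.1,
// sized here for a 2672 x 1200 CLIF frame.
void generateForeground(const unsigned char* dFrames, const unsigned char* dFrame,
                        float* dBackground, unsigned char* dForeground,
                        int width, int height, int numFrames, float threshold) {
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    averageBackground<<<grid, block>>>(dFrames, dBackground, width, height, numFrames);
    extractForeground<<<grid, block>>>(dFrame, dBackground, dForeground,
                                       width, height, threshold);
    cudaDeviceSynchronize();
}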

3.4. Front-End Web-Based Demonstration

To present the most useful data and obtain abstract and meaningful information for human analysts [32], a data visualization system is introduced, which applies a front-end web-based server to perform the interaction with Hadoop and the highly parallelized computation functions based on CUDA. The following are some essential components of establishing a usable interface:

•	Programming Languages: PHP, JavaScript, HTML, CSS;
•	Webserver: Node.js;
•	Database: MongoDB;
•	Platform: Cloud 9 IDE; and
•	Plugins: Bootstrap, jQuery.

To distinguish between multiple targets being tracked in an image, a tensor-based data association method is employed to disambiguate the different targets for the user.

3.5. Tensor Based Multi-Target Data Association

In this section, a brief introduction is provided about tensors and their rank-1 approximation. Details about the tensor formulation and data association can be found in [31]. A tensor is the high dimensional generalization of a matrix. For a $K$-order tensor $\mathcal{S} \in \mathbb{R}^{I_1 \times I_2 \times \ldots \times I_K}$, each element is represented as $s_{i_1 \ldots i_k \ldots i_K}$ with $1 \le i_k \le I_k$. In tensor terminology, each dimension of a tensor is associated with a mode. Like matrix-vector and matrix-matrix multiplication, tensors have a similar operation; we give the following definition: the $n$-mode product of a tensor $\mathcal{S} \in \mathbb{R}^{I_1 \times \ldots I_{n-1} I_n \ldots \times I_K}$ and a matrix $E \in \mathbb{R}^{I_n \times J_n}$, denoted by $\mathcal{S} \otimes_n E$, is a new tensor $\mathcal{B} \in \mathbb{R}^{I_1 \times \ldots I_{n-1} J_n \ldots \times I_K}$:

$\mathcal{B} = \mathcal{S} \otimes_n E, \qquad b_{i_1 \ldots i_{n-1} j_n i_{n+1} \ldots i_K} = \sum_{i_n=1}^{I_n} s_{i_1 \ldots i_{n-1} i_n \ldots i_K}\, e_{i_n j_n}.$

In particular, the $n$-mode product of $\mathcal{S}$ and a vector $\Pi \in \mathbb{R}^{I_n \times 1}$, denoted by $\mathcal{S} \otimes_n \Pi$, is the $(K-1)$-order tensor

$(\mathcal{S} \otimes_n \Pi)_{i_1 \ldots i_{n-1} i_{n+1} \ldots i_K} = \sum_{i_n=1}^{I_n} s_{i_1 \ldots i_{n-1} i_n \ldots i_K}\, \pi_{i_n}.$

The framework is outlined in Algorithm 1. Generally, multi-target association is performed in a batch manner. When $K$ frame observations are available, association hypotheses (trajectories) are generated first. With all these hypotheses, a tensor is constructed by computing the trajectory affinities. Then, the $\ell_1$ norm tensor power solution is performed. Algorithm 1 presents the procedure for the tensor based multi-target association.

Algorithm 1. Tensor based multi-target association
1: Input: M frame observation sequence. t0: Start frame, K: Number of frames in a batch.
2: Output: target associations
3: while t0 + K − 1 ≤ M do
4:   Collect a batch of K frames of observations Φ = {O(t0), O(t0 + 1), . . . , O(t0 + K − 1)}
5:   Generate two-frame association hypotheses.
6:   Generate global trajectory hypotheses.
7:   Compute the trajectory affinities and construct the (K − 1)-order tensor S.
8:   Initialize the approximate vectors.
9:   ℓ1 row/column unit norm tensor power iteration.
10:  Solution: Discretize the approximate vectors.
11:  t0 ← t0 + K − 1
12: end while
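As a sanity check of the definition (added here for illustration, not taken from [31]), consider the second-order case $K = 2$, where $\mathcal{S}$ is simply a matrix $S \in \mathbb{R}^{I_1 \times I_2}$. The 1-mode product with $E \in \mathbb{R}^{I_1 \times J_1}$ reduces to ordinary matrix multiplication,

$b_{j_1 i_2} = \sum_{i_1=1}^{I_1} s_{i_1 i_2}\, e_{i_1 j_1}, \qquad \text{i.e.,} \qquad \mathcal{S} \otimes_1 E = E^{\top} S,$

and the 1-mode product with a vector $\pi \in \mathbb{R}^{I_1}$ collapses the first mode, giving the $(K-1)$-order tensor (here a vector) $S^{\top} \pi$. Such tensor-vector products with the candidate assignment vectors are essentially the basic operation repeated in the $\ell_1$ tensor power iteration of Algorithm 1.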

4. Experiments and Results

Figures 7–10 show examples of background generation, foreground generation, MUMOTs detection and trajectories generation, respectively.


Figure 7. An example of extracted background.


Figure 8. An example of extracted foreground.

Figure 9. An example of MUMOT detection.


Figure 10. An example of target trajectories generation.

The recently available GPU NVidia Kepler K20Xm with 2688 CUDA cores and 6144 MB of memory was applied on each cluster.

In this work, the number of operational computation nodes is 4. The CPU platform is an Intel Xeon E5-2630 processor. The software environment used was Ubuntu 14.04, OpenCV 2.4.13 and CUDA 7.5. The MUMOT detection and tracking algorithms were implemented on these systems for comparison, while other implementations have supported investigations in cognitive radio [36] and long-range communications [37]. The dataset used to verify the speed-up performance is the Columbus Large Image Format 2006 dataset [2], where the size of an airborne image frame is 2672 × 1200 pixels.

Figure 11 shows the processing time for the algorithm applied on different numbers of frames. 11 × 8 (88 frames) WAMI images were used to test the computation performance of the GC-MTT computation architecture. The run time is generated for the four different computation architectures (CPU, GPU, Hadoop + CPUs and Hadoop + GPUs), respectively. When processing 56 frames, the duration of Hadoop + CPUs was 192 s; the frame rate of this architecture is about 0.29 frames per second (fps). With the Hadoop + GPUs computation architecture, the speed-up is more obvious: the duration of processing was 90.94 s, and the frame rate is 0.61 frames per second. It is worth noting that the computation time is 1206 s based on the CPU and 500 s based on the GPU without Hadoop for the same input; that is, the Hadoop implementation provides a 4-times improvement of the computation time (the number of computation nodes is 4). As can be expected, the overhead would be much smaller when the input is more than 100 frames. In the CPU and GPU versions without Hadoop, the run time increases as the frame number grows. In the versions with the Hadoop implementation, it was observed that the run time does not change significantly.

Figure 12 shows the speed-up performance of the GPU, Hadoop + CPUs and Hadoop + GPUs platforms as compared to the CPU platform. As can be seen, the speed-up of the Hadoop + CPUs platform achieves a 4-times increase at frame number 24. The speed-up of the Hadoop + GPUs platform provides very promising results from the very beginning. As the frame number increases, the speed-up performance improves greatly. The speed-up of the GPU platform stabilizes around 2. This result further demonstrates that the proposed GPUs and Hadoop integration architecture is a highly efficient computation platform.


Figure 11. Computation performance comparison based on CPU, GPU, Hadoop + CPUs and Hadoop + GPUs.


Figure 12. Speed-up performance of GPU, Hadoop + CPUs and Hadoop + GPUs compared to the CPU platform.

More tests illustrating the speed advantages of the Cloud are shown in Figures 13 and 14. The tests were conducted with two, three and four computation nodes available to the framework. The number of frames involved in each setup varies from 8 up to 90 frames. The frame rates are plotted against the number of frames for each setup based on GPUs and CPUs in Figures 13 and 14, respectively. The results for the single node setup are included for comparison. As can be seen in Figures 13 and 14, with the implementation of a Cloud architecture, the average frame rates are improved as expected whether using the GPU or CPU platform. Figure 15 illustrates the detection miss rate (non-detected cases) as a function of the false positives per image (FPPI), such as a window wrongly classified as a pedestrian.

The detection performance of CPU and GPU is similar. As the FPPI increases, the miss rate decreases, leading to a more reliable system. The area below the curves is shown as legend values: the lower the value, the more reliable is the detector [38]. As can be seen in Figure 15, the performance of CPU and GPU shows very little difference.


Figure 13. Average frame rate for multiple nodes based on GPUs.

Figure 14. Average frame rate for multiple nodes based on CPUs.

[Figure 15 plot: miss rate (%) versus false positives per image on a log scale; legend: HOG-SVM-CPU (19.95), HOG-SVM-GPU (21.12).]

Figure 15. Objective term (lower is better).

5. Conclusions

In this paper, a Cloud and GPU-based high performance computation system for MUltiple MOving Targets (MUMOTs) detection and tracking was introduced. The proposed GPU-based and Cloud Multiple Target Tracking (GC-MTT) method is one of the first attempts at detecting and tracking MUMOTs from WAMI image sequences using Cloud and parallel computation technology. The proposed GC-MTT computation architecture for MUMOTs detection and tracking consists of registration, background generation, foreground generation, vehicle detection, data association and trajectories generation. It was shown that applying the GC-MTT method leads to much faster, more reliable, and real-time detection and tracking performance as compared to situations when the workflow is applied on a single CPU or GPU alone. Future work will include pattern of life (PoL) trajectory analysis comparison with cloud architectures for documenting and understanding multiple interacting target routines in urban areas, such as for traffic analysis. This information can then potentially be used to build models of normalcy, predict future actions, and determine anomalous behaviors for urban safety and intelligent highways.

Acknowledgments: The authors appreciate the reviewer comments to improve the paper. The research was supported in part by an AFRL grant FA8750-15-C-0025. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the United States Air Force.

Author Contributions: E.B., C.S., and G.C. conceived and designed the experiments; K.L. performed the experiments; S.W. and Z.C. analyzed the data; H.L. contributed reagents/materials/analysis tools; K.L., B.J., and E.B. wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Mendoza-Schrock, O.; Patrick, J.A.; Blasch, E. Video Image Registration Evaluation for a Layered Sensing Environment. In Proceedings of the IEEE 2009 National Aerospace & Electronics Conference (NAECON), Fairborn, OH, USA, 21–23 July 2009.
2. Columbus Large Image Format (CLIF) Dataset. Available online: https://www.sdms.afrl.af.mil/index.php?collection=clif2006 (accessed on 7 August 2016).
3. Liu, K.; Liu, B.; Blasch, E.; Shen, D.; Wang, Z.; Ling, H.; Chen, G. A Cloud Infrastructure for Target Detection and Tracking Using Audio and Video Fusion. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, 7–12 June 2015; pp. 74–81.
4. Yi, M.; Yang, F.; Blasch, E.; Sheaff, C.; Liu, K.; Chen, G.; Ling, H. Vehicle classification in WAMI imagery using deep network. In Proceedings of the SPIE 9838, Sensors and Systems for Space Applications, Baltimore, MD, USA, 17 April 2016.
5. Gao, J.; Ling, H.; Blasch, E.; Pham, K.; Wang, Z.; Chen, G. Pattern of Life from WAMI Objects Tracking based on Context-Aware Tracking and Information Network Models. In Proceedings of the SPIE 8745, Signal Processing, Sensor Fusion, and Target Recognition, Baltimore, MD, USA, 29 April 2013.
6. Gao, J.; Ling, H.; Blasch, E. Context-aware tracking with wide-area motion imagery. SPIE Newsroom 2013. [CrossRef]
7. Shi, X.; Ling, H.; Blasch, E.; Hu, W. Context-driven moving vehicle detection in wide area motion imagery. In Proceedings of the International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 2512–2515.
8. Jia, B.; Liu, K.; Pham, K.; Blasch, E.; Chen, G. Accelerated space object tracking via graphic processing unit. In Proceedings of the SPIE 9838, Sensors and Systems for Space Applications, Baltimore, MD, USA, 13 May 2016.
9. Liu, K.; Jia, B.; Chen, G.; Pham, K.; Blasch, E. A real-time orbit satellites uncertainty propagation and visualization system using graphics computing unit and multi-threading processing. In Proceedings of the IEEE/AIAA Digital Avionics Systems Conference (DASC), Prague, Czech Republic, 13–17 September 2015; pp. 8A2-1–8A2-10.
10. Hammoud, R.I.; Sahin, C.S.; Blasch, E.P.; Rhodes, B.J.; Wang, T. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance. Sensors 2014, 14, 19843–19860. [CrossRef] [PubMed]
11. Ma, Y.; Wu, X.; Yu, G.; Xu, Y.; Wang, Y. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery. Sensors 2016, 16, 446. [CrossRef] [PubMed]
12. Air Force Research Laboratory, WPAFB2009 Dataset. Available online: https://www.sdms.afrl.af.mil/index.php?collection=wpafb2009 (accessed on 20 June 2016).
13. Priddy, K.L.; Uppenkamp, D.A. Automated recognition challenges for wide-area motion imagery. In Proceedings of the SPIE, Baltimore, MD, USA, 3 May 2012; Volume 8391.
14. Yan, Y.; Huang, X.; Xu, W.; Shen, L. Robust Kernel-Based Tracking with Multiple Subtemplates in Vision Guidance System. Sensors 2012, 12, 1990–2004. [CrossRef] [PubMed]
15. Qin, L.; Snoussi, H.; Abdallah, F. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance. Sensors 2014, 14, 9380–9407. [CrossRef] [PubMed]
16. Yang, T.; Wang, X.; Yao, B.; Li, J.; Zhang, Y.; He, Z.; Duan, W. Small Moving Vehicle Detection in a Satellite Video of an Urban Area. Sensors 2016, 16, 1528. [CrossRef] [PubMed]
17. Jung, J.; Yoon, I.; Paik, J. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System. Sensors 2016, 16, 982. [CrossRef] [PubMed]
18. Santhaseelan, V.; Asari, V.K. Tracking in wide area motion imagery using phase vector fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Portland, OR, USA, 23–28 June 2013; pp. 823–830.
19. Eekeren, A.V.; Huis, J.R.V.; Eendebak, P.T.; Baan, J. Vehicle tracking in wide area motion imagery from an airborne platform. In Proceedings of the SPIE 9648, Electro-Optical and Infrared Systems: Technology and Applications XII; and Quantum Information Science and Technology, Toulouse, France, 13 October 2015.
20. Pelapur, R.; Candemir, S.; Bunyak, F.; Poostchi, M.; Seetharaman, G.; Palaniappan, K. Persistent target tracking using likelihood fusion in wide-area and full motion video sequences. In Proceedings of the 2012 15th International Conference on Information Fusion (FUSION), Singapore, 9–12 July 2012; pp. 2420–2427.
21. NVIDIA. CUDA C Programming Guide. Available online: http://docs.nvidia.com/cuda (accessed on 10 May 2016).
22. NVIDIA. How to Access Global Memory Efficiently in CUDA C/C++ Kernels. Available online: http://docs.nvidia.com/cuda (accessed on 2 February 2016).
23. Blasch, E.; BiBona, P.; Czajkowski, M.; Barry, K. Video Observations for Cloud Activity-Based Intelligence (VOCABI). In Proceedings of the IEEE National Aerospace and Electronics Conference (NAECON), Dayton, OH, USA, 24–27 June 2014; pp. 207–214.
24. Jia, B.; Ling, H.; Blasch, E.; Sheaff, C.; Chen, G.; Wang, Z. Aircraft ground monitoring with high performance computing multicore enabled video tracking. In Proceedings of the IEEE/AIAA 33rd Digital Avionics Systems Conference (DASC), Colorado Springs, CO, USA, 5–9 October 2014; pp. 6B2-1–6B2-9.
25. Liu, K.; Ma, B.; Du, Q.; Chen, G. Fast Motion Detection from Airborne Videos Using Graphics Computing Unit. J. Appl. Remote Sens. 2011, 6, 061505.
26. Liu, K.; Yang, H.; Ma, B.; Du, Q. A joint optical flow and principal component analysis approach for motion detection. In Proceedings of the 2010 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), Dallas, TX, USA, 14–19 March 2010; pp. 1178–1181.
27. Liu, K.; Du, Q.; Yang, H.; Ma, B. Optical Flow and Principal Component Analysis-Based Motion Detection in Outdoor Videos. EURASIP J. Adv. Signal Process. 2010, 2010. [CrossRef]
28. NVIDIA. CUDA C Programming Best Practices Guide. Available online: http://docs.nvidia.com/cuda (accessed on 13 January 2016).
29. Blasch, E. Enhanced Air Operations Using JView for an Air-Ground Fused Situation Awareness UDOP. In Proceedings of the IEEE/AIAA Digital Avionics Systems Conference (DASC), East Syracuse, NY, USA, 5–10 October 2013; pp. 5A5-1–5A5-11.
30. NVIDIA. NVIDIA CUDA Compute Unified Device Architecture Programming Guide. Available online: http://docs.nvidia.com/cuda (accessed on 23 March 2016).
31. Shi, X.; Ling, H.; Xing, J.; Hu, W. Multi-target tracking by rank-1 tensor approximation. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; pp. 2387–2394.
32. Blasch, E.P.; Bosse, E.; Lambert, D.A. High-Level Information Fusion Management and Systems Design; Artech House: Norwood, MA, USA, 2012.
33. Snidaro, L.; Garcia-Herrero, J.; Llinas, J.; Blasch, E. (Eds.) Context-Enhanced Information Fusion: Boosting Real-World Performance with Domain Knowledge; Springer: Berlin, Germany, 2016.
34. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. SURF: Speeded Up Robust Features. Comput. Vis. Image Underst. 2008, 110, 346–359. [CrossRef]
35. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2003.
36. Xiong, W.; Mukherjee, A.; Kwon, H. MIMO Cognitive Radio User Selection With and Without Primary Channel State Information. IEEE Trans. Veh. Technol. 2016, 65, 985–991. [CrossRef]
37. Li, L.; Wang, G.; Chen, G.; Chen, H.M.; Blasch, E.; Pham, K. Robust Airborne Image Transmission Using Joint Source-Channel Coding with UEP. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 5–12 March 2016; pp. 1–7.
38. Campmany, V.; Silva, S.; Espinosa, A.; Moure, J.C.; Vazquez, D.; Lopez, A.M. GPU-based Pedestrian Detection for Autonomous Driving. In Proceedings of the International Conference on Computational Science (ICCS), San Diego, CA, USA, 6–8 June 2016; Volume 80, pp. 2377–2381.

© 2017 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
