Camera Deployment for Video Panorama Generation in Wireless Visual Sensor Networks

Enes Yildiz and Kemal Akkaya
Department of Computer Science, Southern Illinois University Carbondale, Carbondale, IL 62901
Email: [email protected], [email protected]

Esra Sisikoglu and Mustafa Sir
Department of Industrial Engineering, University of Missouri Columbia, Columbia, MO 47907
Email: {sisikoglue, sir}@missouri.edu

Ismail Guneydas
Department of Computer Science, Southern Illinois University Carbondale, Carbondale, IL 62901
Email: [email protected]
Abstract—In this paper, we tackle the problem of providing coverage for video panorama generation in Wireless Heterogeneous Visual Sensor Networks (VSNs), where cameras may have different prices, resolutions, Fields-of-View (FoV) and Depths-of-Field (DoF). We utilize multi-perspective coverage (MPC), which refers to the coverage of a point from given disparate perspectives simultaneously. For a given minimum average resolution, area boundaries, and variety of camera sensors, we propose a deployment algorithm which minimizes the total cost while guaranteeing full MPC of the area (i.e., the coverage needed for video panorama generation) and the minimum required resolution. Specifically, the approach is based on a bi-level mixed integer program (MIP), which runs two models, namely the master-problem and the sub-problem, iteratively. The master-problem provides coverage for an initial set of identified points while meeting the minimum resolution requirement at minimum cost. The sub-problem, which follows the master-problem, finds an uncovered point and extends the set of points to be covered. It then sends this set back to the master-problem. The master-problem and sub-problem continue to run iteratively until the sub-problem becomes infeasible, which means full MPC has been achieved along with the resolution requirements. The numerical results show the superiority of our approach over existing approaches.

Index Terms—multi-perspective coverage; video panorama generation; camera deployment; Visual Sensor Networks;
I. INTRODUCTION

With the availability of low-cost wireless cameras [1], the deployment of large numbers of such cameras for outdoor surveillance and monitoring missions has gained momentum. In particular, Wireless Multimedia Sensor Networks (WMSNs) [2] and Visual Sensor Networks (VSNs) [3] have been proposed for a variety of needs. While WMSNs consider the deployment of cameras in addition to other types of scalar sensors, VSNs deploy only cameras, with a variety of different processing and imaging capabilities. Typical applications of VSNs include remote surveillance, target tracking and environmental monitoring. Camera deployment has been one of the well-studied problems in WMSNs and VSNs. Building on the idea of the Art Gallery problem [4], the goal of previous works was to ensure full coverage of the monitored regions with the fewest cameras or at the least cost. With the availability of a variety of low-cost wireless cameras, it is possible to deploy a large number of cameras to cover events from different perspectives simultaneously. In this way, one can create a video panorama
of a scene, which can be stored over a period of time for future querying. With video panoramas available, one can obtain more information about occurring events, which can be used in a variety of applications. For instance, law enforcement can conduct higher-quality investigations, tracked objects can be recognized better in a timely fashion, etc. In addition to providing a video panorama via multi-perspective coverage (MPC), image resolution and cost are other factors that need to be considered in deployment, given the wide range of wireless cameras. In particular, the proposed deployment architectures for WMSNs propose the use of a large number of low-cost cameras which trigger the high-resolution cameras on an on-demand basis [2]. This requires the deployment of cameras with a variety of FoVs, DoFs and costs while still providing full MPC and meeting the resolution requirements of different sub-regions of the monitored region, if any. Current deployment schemes consider homogeneous cameras for achieving full coverage. There is a need to consider heterogeneous cameras with different resolutions, FoVs, DoFs and prices and still provide full MPC at the least cost.

In this paper, we consider the deployment problem of cameras in VSNs. The goals are threefold: 1) provide a deployment scheme that yields a video panorama of the region, which means 100% ω-perspective coverage of the region, where ω refers to the number of perspectives; 2) meet the resolution requirements of the application by providing a minimum average resolution over the whole region; 3) minimize the cost by keeping the number of cameras low. The problem is modeled as a mixed-integer program, which is based on a novel bi-level approach. Rather than discretizing the region with points as was done in previous works, our approach guarantees 100% coverage by identifying uncovered points based on the current camera locations and refining the camera locations to cover them.
Our algorithm starts with a set of discrete points that are located at the boundaries of the region. The first-level optimization model, which is referred to as master-problem, determines optimal camera locations that cover the initial set of points while meeting a predetermined minimum resolution threshold. The second-level optimization model, referred to as sub-problem, finds points within the region that are not currently covered
considering the location, FoV and DoF of the currently deployed cameras. These uncovered locations are then passed to the master-problem. The bi-level algorithm terminates when the sub-problem becomes infeasible, indicating that there are no uncovered points left. Note that the same bi-level algorithm is run for each perspective considered. The experimental results indicate that our approach always provides full MPC with the least cost while still meeting the given resolution requirements.

This paper is organized as follows. In the next section, we discuss the previous work. Section III provides the preliminaries, including the MPC metric, assumptions and problem definition. The mixed integer program is described in Section IV. Section V is dedicated to experimental evaluation. Finally, Section VI concludes the paper.

II. RELATED WORK

Multiple camera views have been considered in recent works for different purposes. For instance, the work in [5] was the first to define MPC and solve the corresponding camera deployment problem. The authors defined MPC, studied how to assess it based on the number of perspectives, and provided a camera deployment solution. Their approach considers a set of control points and camera placement points. The cameras are located at the control points so that these points are covered from ω perspectives. This problem is modeled as a binary integer program (BIP). This approach, however, does not guarantee full MPC since the BIP only tries to cover the grid points (not the whole region). In this paper, we consider the same problem but provide an exact solution with an optimal number of cameras. Moreover, while the previous work considered deployment of a homogeneous set of cameras (i.e., with the same FoV, DoF, etc.), in this paper we consider least-cost deployment using heterogeneous cameras to provide full MPC.
Our recent work [6] also studied a similar problem by utilizing a bi-level approach similar to the one pursued in this paper. However, the two works differ in several ways. First, [6] considered only homogeneous cameras, while the solution in this paper is geared toward cameras which may have different prices, resolutions, FoVs and DoFs. This flexibility addresses practical real-world constraints (e.g., limited availability of camera types, fixed FoVs, price-resolution trade-offs), and thus the approach may be better suited to video panorama generation. Second, the proposed algorithm in this paper considers resolution, which was not discussed in [6] at all. Specifically, the approach gives application designers the flexibility to request specific resolutions for a particular sub-region of the monitored area. Finally, the solution to this problem required a major change to the mixed-integer program. As a result, the experiments performed were also different from the ones in [6].

The works considering multiple views for events include [7], [8], [9] and [10]. The work in [7] considers circular coverage of an event with randomly placed cameras. The goal is to select a subset of cameras that minimizes overlaps among FoVs while selecting cameras with higher resolution. This study differs from
ours in that it considers a randomly deployed network, and thus its problem is camera selection rather than deployment. The authors of [9] considered a different definition of multi-perspective coverage and defined it as "full-view coverage." This definition is based on the facing direction of observed objects: full-view coverage ensures that there is always a camera sensor within the range of an object's facing direction. The difference of our approach is that we do not solely focus on the objects' facing directions but consider any type of event with any perspective as part of the coverage. The work in [8] has focused on how to obtain an image of a room from any requested viewpoint in a 3-D environment where multiple cameras are located on the walls. To achieve this, cameras perform basic image processing and send small images to a central location to generate the requested image. The work in [10] has examined how multiple perspectives of an environment increase the ability to track a mobile object indoors. That paper has also focused on the presence of occlusions and how to deal with them while tracking an object. Our work focuses on camera placement to guarantee full MPC of a region, as opposed to assessing the improvement in the quality of object tracking when multiple perspective views are available. Nonetheless, our work can complement the work in [10] by providing different resolution levels and checking how these different levels improve the quality of object tracking.

III. PRELIMINARIES

Each camera has a FoV f and a DoF d, which refer to the angle and the distance, respectively, within which the camera can capture an accurate image/video. The FoV, DoF, resolution as well as the price of the cameras may be different. The perspective of a camera refers to the orientation of the camera. Note that we will use the terms orientation and perspective interchangeably throughout the paper.
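As a concrete reference for this coverage model, a point p is seen by a camera exactly when it lies within the camera's DoF d and within f/2 of its orientation. A minimal sketch follows; the function name and the tuple representation of points are ours, not from the paper:

```python
import math

def covers(cam_xy, orientation, fov, dof, p):
    """Return True if the camera at cam_xy, facing `orientation` (radians)
    with angular Field-of-View `fov` and Depth-of-Field `dof`, sees point p."""
    dx, dy = p[0] - cam_xy[0], p[1] - cam_xy[1]
    dist = math.hypot(dx, dy)
    if dist > dof:
        return False          # beyond the Depth-of-Field
    if dist == 0:
        return True           # the camera's own location
    # smallest angular difference between the camera axis and the point
    ang = math.atan2(dy, dx)
    diff = (ang - orientation + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2
```

The angular test normalizes the difference into (-π, π], so orientations near the wrap-around at ±π are handled correctly.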
Assuming a circle around an event, in the ideal case the perspectives should be evenly distributed along this circle in order to construct a video panorama of the event, as seen in Figure 1(b). However, this may not always be the case, as seen in Figure 1(a).
Fig. 1. (a) 3 cameras providing 3 similar perspectives. (b) 3 cameras with 3 diverse perspectives in the ideal case.
With this interpretation of perspectives, a panorama of a region is obtained through careful deployment of cameras. For instance, in Fig. 2(a), the triangular intersection area is covered by all three cameras from 3 perspectives, which is called 3-perspective coverage (3-pC). Generalizing this case, the ω-pC of a region is calculated by finding a coverage weight (i.e., ω-pCW)
as will be explained below, and multiplying it by the area of the region [5], where ω refers to the number of perspectives. ω-pC refers to perspective coverage of every point in the area continuously. Note that the panorama can be obtained with any ω value, but the FoVs of the cameras should be set such that their aggregate coverage forms a circular covered region around the event (i.e., 360°). The ω-pCW can be calculated as the ratio of the combined area of the corresponding camera sectors in a unit disk, called the perspective disk (PD), to the area of the PD, as shown in Fig. 2(a) and (b) [5]. One can achieve an ω-pCW of 1 by having ω cameras oriented such that the orientations are distributed equally around the PD and there is no overlap among their sectors in the PD (see Fig. 2(b) for the case ω = 3).
Fig. 2. (a) 3 cameras with different perspectives. (b) Perspective Disk (PD) for the three cameras. A fully covered event by three cameras with ω = 3.
However, in the case of overlapping FoVs, the ω-pCW can be between 0 and 1. We provide an example in Fig. 3: for the gray region in Fig. 3(a), the 3-pC and 4-pC PDs are shown in Fig. 3(b) and (c). As the PDs are not fully covered, the 3-pCW and 4-pCW are between 0 and 1.
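The ω-pCW computation reduces to interval arithmetic: a sector of the unit perspective disk with angle φ has area φ/2 and the disk has area π, so the weight equals the covered fraction of the full 2π of directions. A small sketch follows; the representation of sectors as (center, width) pairs is ours:

```python
import math

def pcw(sectors):
    """ω-pCW: fraction of the perspective disk covered by camera sectors.
    `sectors` is a list of (center_angle, width) pairs in radians, each
    width at most 2π.  The weight is (union of covered angles) / 2π."""
    two_pi = 2 * math.pi
    ivals = []
    for c, w in sectors:
        lo = (c - w / 2) % two_pi
        hi = lo + w
        if hi <= two_pi:
            ivals.append((lo, hi))
        else:                           # wraps past 2π: split into two arcs
            ivals.append((lo, two_pi))
            ivals.append((0.0, hi - two_pi))
    # sweep the sorted intervals, accumulating only uncovered stretches
    ivals.sort()
    covered, end = 0.0, -1.0
    for lo, hi in ivals:
        if hi <= end:
            continue
        lo = max(lo, end)
        covered += hi - lo
        end = hi
    return covered / two_pi
```

Three non-overlapping sectors of width 2π/3 centered 2π/3 apart (the ideal case of Fig. 2(b)) give a weight of 1, while coinciding sectors are counted only once.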
Fig. 3. PDs for finding the 3-pCW (b) and 4-pCW (c) for the gray region in (a).

The problem can then be defined formally as follows: "Given a region of interest, denoted by A, with known boundaries x and y, and cameras with different FoV, DoF, resolution, and cost, determine the locations and perspectives of the cameras that provide 100% ω-pC for the entire region while satisfying the required minimum average resolution for A with minimum total camera cost." Average resolution is defined as the average resolution over all the points in the region. Note that there may be multiple cameras covering a point; in that case, the highest resolution provided for that point is picked as its resolution. One can also request different minimum resolution requirements for sub-regions of A if this is desirable as part of the application.

IV. BI-LEVEL ALGORITHM

A. Approach Overview

We propose a two-stage model based on mixed integer programming. The approach, which is called the Bi-level Algorithm (BLA), consists of two mixed integer programming models that run iteratively one after the other. These models are named the "master-problem" and the "sub-problem". The master-problem provides a camera deployment solution that covers an initial set of discrete points (control points) identified in area A while minimizing the total camera cost and satisfying the desired average resolution. The sub-problem then finds the furthest uncovered point (i.e., not covered by any camera) in area A with reference to the camera locations and types provided by the master-problem. The master-problem then solves the same problem after extending the control points with the new uncovered point from the sub-problem. The iterative run of master-problem and sub-problem continues until the sub-problem becomes infeasible, which means there is no uncovered point left in area A and full coverage is achieved for that particular perspective. BLA runs separately for each perspective and, at the end, combines the results for all perspectives, which ensures 100% ω-pC. Considering 4-pC as an example, BLA needs to run 4 times independently for the cases where the orientation of the cameras is set to 0, π/2, π and 3π/2 (i.e., east, north, west and south). In the following subsections, the two main steps of BLA, the master-problem and the sub-problem, are discussed in detail.

B. Master Problem

The master-problem determines the camera types (i.e., with different FoV, DoF, resolution and cost) and their locations that cover a given set of discrete points (the set of control points, denoted by Θ) while providing the minimum average resolution and full coverage for a given area A with minimum cost. The basic idea of the master-problem builds on a technique developed for the maximal covering location problem by Church and ReVelle [11]. The parameters for the master-problem are listed in Table I.

TABLE I: PARAMETERS I
Θ: Set of control points to be covered.
P: Set of placement points for cameras.
R: Set of resolution points (a subset of Θ).
M: Number of control points (|Θ|).
K: Number of placement points (|P|).
I: Number of resolution points (|R|).
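Before detailing the master-problem, the alternation described in Section IV-A can be sketched as a plain loop. The stand-ins below are ours for illustration only: a 1-D region, "cameras" that each cover an interval [x, x + DoF], a greedy sweep in place of the master MIP, and a grid scan in place of the sub-problem MIP; the paper solves both stages exactly as MIPs.

```python
def bla_1d(region=(0.0, 10.0), dof=1.5, step=0.01):
    """Toy bi-level loop on a 1-D region: a camera at x covers [x, x + dof]."""
    control = [region[0], region[1]]          # start from the boundary points

    def master(points):
        # greedy stand-in for the master MIP: sweep left to right,
        # placing a camera at each not-yet-covered control point
        cams = []
        for p in sorted(points):
            if not any(c <= p <= c + dof for c in cams):
                cams.append(p)
        return cams

    def sub(cams):
        # stand-in for the sub-problem: furthest uncovered grid point, or None
        best, best_dist = None, -1.0
        x = region[0]
        while x <= region[1]:
            if not any(c <= x <= c + dof for c in cams):
                d = min(abs(x - c) for c in cams)
                if d > best_dist:
                    best, best_dist = x, d
            x += step
        return best

    while True:
        cams = master(control)
        u = sub(cams)
        if u is None:                          # sub-problem "infeasible"
            return cams
        control.append(u)                      # extend the control points
```

The loop terminates because each point returned by the stand-in sub-problem is added to the control set and remains covered in every later master solution, so no grid point is reported twice.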
In order to locate cameras continuously in area A, infinitely many potential camera points would have to be defined in area A, which is not possible in practice. To address this issue, we follow the same approach used in [6] and derive a finite set of points based on the intersections of mirror images representing the FoVs. Further details can be found in [6].
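The mirror-image construction itself is detailed in [6]. As a simplified illustration of why a finite candidate set suffices, consider the DoF constraint alone: a camera of type t whose range boundary passes through two control points must sit at an intersection of two circles of radius d^t, of which there are at most two. This reduction is our illustration, not the construction of [6]:

```python
import math

def circle_intersections(p1, p2, r):
    """Intersection points of two circles of equal radius r centered at p1, p2.
    Returns 0, 1, or 2 points: the candidate positions for a camera with
    DoF r whose range boundary passes through both control points."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > 2 * r:
        return []                       # coincident centers or too far apart
    a = d / 2                           # half the center-to-center distance
    h = math.sqrt(max(r * r - a * a, 0.0))
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    ux, uy = -dy / d, dx / d            # unit normal to the center line
    pts = [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]
    return pts[:1] if h == 0 else pts
```

Taking all pairs of control points and all camera types therefore yields a finite candidate set; the FoV constraint prunes it further in the actual construction.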
TABLE II: PARAMETERS II
T: Number of types of camera sensors that can be utilized.
c_t: Cost of a camera sensor of type t, t = 1, 2, ..., T.
r_t: Resolution of a camera sensor of type t, t = 1, 2, ..., T.
θ_t: Orientation of a camera sensor of type t, t = 1, 2, ..., T.
RS_min: Desired minimum average resolution.
The entire process for the master-problem can be summarized as follows. After initializing the control points Θ, for each pair of control points and for each camera type, draw the mirror images of the FoV pies, find their intersection points, and put them into the set of placement points P. Define the resolution point set R, which is a subset of the initial control points and is used as a grid approximation to calculate the average resolution. The remaining parameters for the master-problem are defined in Table II.

• For t = 1, 2, ..., T, k = 1, 2, ..., K, and j = 1, 2, ..., M,

a_{jk}^t = 1 if a camera of type t placed at k covers point j, and 0 otherwise.

We then define our decision variables as follows:

• For t = 1, 2, ..., T and i = 1, 2, ..., I,

z_i^t = 1 if the resolution of point i is equal to the resolution of camera type t, and 0 otherwise.

• For t = 1, 2, ..., T and k = 1, 2, ..., K,

w_k^t = 1 if a camera of type t is positioned at point k, and 0 otherwise.
The following mixed integer programming model determines the camera types and locations, as a subset of P, that cover all control points Θ while providing the required minimum average resolution (calculated based on R) and full coverage for a given area A with minimum cost.
min \sum_{t=1}^{T} \sum_{k=1}^{K} c_t w_k^t    (1)

subject to

\sum_{t=1}^{T} \sum_{k=1}^{K} a_{jk}^t w_k^t ≥ 1,    j = 1, 2, ..., M,    (2)

\sum_{k=1}^{K} a_{ik}^t w_k^t ≥ z_i^t,    i = 1, 2, ..., I,  t = 1, 2, ..., T,    (3)

\sum_{t=1}^{T} z_i^t = 1,    i = 1, 2, ..., I,    (4)

\sum_{i=1}^{I} \sum_{t=1}^{T} r_t z_i^t ≥ RS_min \cdot I,    (5)

w_k^t ∈ {0, 1},  z_i^t ∈ {0, 1},    (6)

where t = 1, 2, ..., T and k = 1, 2, ..., K in all of the above constraints.
In the above mixed-integer programming model, the objective (Eq. 1) is to minimize the total cost of camera deployment while covering area A. Constraint 2 ensures that all control points are covered by at least one camera. Constraints 3 and 4 ensure that if a resolution point is covered by different types of cameras, then the camera with the highest resolution determines its resolution. Finally, Constraint 5 ensures that the minimum average resolution is achieved for that particular perspective.
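To make constraints (1)-(6) concrete, the sketch below brute-forces the master-problem on instances small enough to enumerate, with the resolution points taken equal to the control points. The encoding a[t][k][j] mirrors the parameter a_{jk}^t; the function name and data layout are ours, and a real instance would go to a MIP solver such as Gurobi:

```python
from itertools import product

def master_bruteforce(cost, res, a, RS_min):
    """Enumerate all deployments (one camera type or none per placement
    point) and return (total_cost, cams) for the cheapest deployment that
    covers every control point with average resolution >= RS_min.
    a[t][k][j] = 1 iff a camera of type t at placement k covers point j."""
    T, K = len(cost), len(a[0])
    M = len(a[0][0])
    best = None
    for choice in product([None] + list(range(T)), repeat=K):
        cams = [(t, k) for k, t in enumerate(choice) if t is not None]
        # constraint (2): every control point covered at least once
        if not all(any(a[t][k][j] for t, k in cams) for j in range(M)):
            continue
        # constraints (3)-(4): a point's resolution is its best covering camera's
        point_res = [max(res[t] for t, k in cams if a[t][k][j]) for j in range(M)]
        # constraint (5): minimum average resolution
        if sum(point_res) / M < RS_min:
            continue
        total = sum(cost[t] for t, k in cams)   # objective (1)
        if best is None or total < best[0]:
            best = (total, cams)
    return best
```

On a toy instance with cheap low-resolution and expensive high-resolution types, raising RS_min flips the optimum from several cheap cameras to a single high-resolution one, which is exactly the trade-off the model captures.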
C. Sub-Problem

The sub-problem is the second stage of the algorithm, and its goal is to find a point in region A that is not covered by the cameras deployed according to the master-problem solution. Let N^t be the number of cameras deployed for camera type t, t = 1, 2, ..., T, in the previous master-problem, and let A_i^t be the area covered by camera i = 1, 2, ..., N^t. The sub-problem is solved to find an uncovered point u such that u ∉ A_i^t, i = 1, 2, ..., N^t, t = 1, 2, ..., T, that is furthest away from all cameras already deployed. For computational efficiency, we linearize the arcs of the pie-shaped camera regions via an approximation of the Euclidean norm. Next, we define additional parameters (shown in Table III) and decision variables, and discuss the details of the linearization.

TABLE III: PARAMETERS III
N^t: Number of cameras of type t deployed in the master-problem stage.
x_i^t: Coordinates of camera i, i = 1, 2, ..., N^t, of type t.
d^t: Depth-of-Field (DoF) of a camera of type t.
f^t: Field-of-View (FoV) of a camera of type t.
where t = 1, 2, ..., T.
Decision variables:
• u: coordinates of the uncovered point.
• α: an auxiliary variable used to place the uncovered point as far from the deployed camera sensors as possible.
The linearization is done by approximating the pie-shaped FoVs using a piecewise linear function. First, we need to ensure that the distance between the vectors u and x_i^t is greater than or equal to d^t, which is equivalent to ensuring that the vector u − x_i^t lies outside of the arc of a pie with radius d^t centered at the origin. We use identical triangles to obtain a linear approximation for the arcs of the pies. Let s be the number of triangles used in the approximation; the arc of a pie is divided into s triangles, as shown in Fig. 4. Since identical triangles are used, the apothem of these triangles (with respect to the origin) is equal to d^t cos(f^t / (2s)). The normal vector to edge e_j^t, j = 1, 2, ..., s, is given by

a_j^t = [cos(π + θ^t + (j − 1/2) f^t / s),  sin(π + θ^t + (j − 1/2) f^t / s)]^T.    (7)
Fig. 4. Linearization of an arc by using identical triangles centered at the origin.
The normal vectors to edges e_{s+1}^t and e_{s+2}^t are given respectively by

a_{s+1}^t = [cos(π/2 + θ^t),  sin(π/2 + θ^t)]^T,    (8)

and

a_{s+2}^t = [cos(f^t + θ^t − π/2),  sin(f^t + θ^t − π/2)]^T.    (9)
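As a numerical sanity check on this construction, membership in the union of the s inscribed triangles can be tested directly: a point is inside exactly when its angle falls within the pie's span and it lies on the camera side of the corresponding chord, whose distance from the origin is the apothem d^t cos(f^t/(2s)). The sketch below uses our own sign convention and names (the paper instead encodes the complement of this region through constraints on the v variables):

```python
import math

def in_linearized_pie(w, theta, f, d, s):
    """True iff point w (relative to the camera) lies in the union of the
    s identical triangles inscribed in a pie of radius d spanning angles
    [theta, theta + f].  Our own convention, for checking the geometry."""
    phi = math.atan2(w[1], w[0]) % (2 * math.pi)
    rel = (phi - theta) % (2 * math.pi)
    if rel > f:
        return False                     # outside the angular span of the FoV
    j = min(int(rel / (f / s)), s - 1)   # index of the triangle at this angle
    psi = theta + (j + 0.5) * (f / s)    # outward normal direction of edge j
    n = (math.cos(psi), math.sin(psi))
    # inside iff on the camera side of the chord at apothem d*cos(f/(2s))
    return n[0] * w[0] + n[1] * w[1] <= d * math.cos(f / (2 * s))
```

The last assertion in the checks exhibits precisely the error case discussed in the text: a point just under the arc, inside the true pie but outside the triangle union.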
Therefore, we define the binary decision variables, for each camera i, i = 1, 2, ..., N^t, t = 1, 2, ..., T, and for each edge k, k = 1, 2, ..., s, s+1, s+2:

v_{ik}^t = 1 if the vector u − x_i^t violates the constraint defining the k-th edge, and 0 otherwise.

The following mixed integer programming model finds the maximum possible distance of an uncovered point from the already placed cameras:

max α    (10)

subject to

(a_j^t)^T (u − x_i^t) ≥ α + d^t cos(f^t / (2s)) − Ω(1 − v_{ij}^t),    i = 1, 2, ..., N^t,  j = 1, 2, ..., s,    (11)

(a_m^t)^T (u − x_i^t) ≥ α − Ω(1 − v_{im}^t),    i = 1, 2, ..., N^t,  m = s+1, s+2,    (12)

\sum_{k=1}^{s+2} v_{ik}^t ≥ 1,    i = 1, 2, ..., N^t,    (13)

u ∈ A,    (14)

α ≥ 0,  v_{ik}^t ∈ {0, 1},    i = 1, 2, ..., N^t,  k = 1, 2, ..., s+2,

where Ω is a sufficiently large number and t = 1, 2, ..., T. In this model, the objective (Eq. 10) is to maximize the distance (α) by which an uncovered point can be pushed away from the already placed cameras. Constraint 11 ensures that the uncovered point lies outside of the linearized edges used in approximating the arc (i.e., e_1^t ... e_s^t in Fig. 4). Constraint 12 ensures that the uncovered point lies outside of the right and left edges of the pies (i.e., e_{s+1}^t, e_{s+2}^t in Fig. 4). Constraint 13 requires that, for each camera, the uncovered point be outside of at least one edge, since this already implies that the point is outside of the "linearized" pie. Finally, Constraint 14 ensures that the uncovered point is within area A.

The approximation scheme explained above is set up so that the union of identical triangles is entirely contained in the pie. Therefore, if the actual distance between the vectors u and x_i is greater than d (i.e., the difference vector is contained neither in the pie nor in the union of triangles), at least one of the corresponding v_{ij}, j = 1, 2, ..., s, will be 1. However, if the vector u − x_i is contained in the pie but not in the union of triangles, then one v_{ij}, j = 1, 2, ..., s, will still be 1 even though the distance between the vectors is less than d. Therefore, with the approximation scheme used in the model, there is a possibility of error in the definition of the variables v_{ij}. Even though this shortcoming costs at most one extra iteration of the algorithm, it can easily be alleviated by increasing the number of identical triangles: as the number of triangle sides increases, the area between the circle and the union of triangles decreases.

V. EXPERIMENTAL EVALUATION

We have evaluated the performance of BLA using the Gurobi optimization tool. We initially deployed 1000 control points uniformly over the considered region and assigned them also as resolution points. Recall that these resolution points are used to calculate the average resolution as a grid approximation. We picked two different types of cameras: type1 (low-resolution: 2Mpx) and type2 (high-resolution: 10Mpx). Since high-resolution cameras are expensive and provide less coverage compared to low-resolution ones, we assumed the same FoV but different DoFs (i.e., d_{type1}^2 / d_{type2}^2 = 2) and costs for these cameras. The ratios of FoV area to the area of the region for type1 and type2 cameras (i.e., Area(FoV_{type1})/Area(A) and Area(FoV_{type2})/Area(A)) are selected as 0.2 and 0.1 respectively. In other words, a type2 camera provides half of the coverage provided by a type1 camera and is ten times as expensive (i.e., c_{type1} = $50 and c_{type2} = $500).

Since our algorithm always guarantees full ω-pC for area A, we used cost and resolution as performance metrics. We considered several different resolution requirements, from 2Mpx up to 10Mpx. Recall that ω-pC is 100% in all of the experiments, which is the motivation of our algorithm. As a baseline, we compared our approach with the existing linearization technique in [5], shown as BIP in the graphs. For this approach, we used 1000 control points, to be consistent with our BLA approach, and 1600 placement points as indicated in [5]. We considered the above type1 and type2 cameras for this case as well; however, they are not mixed. These are shown as BIP_{type1} and BIP_{type2} in the graphs.
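As noted in Section IV-C, the gap between the pie and its triangle approximation shrinks as s grows. The worst case occurs at the midpoint of a chord, at depth d^t (1 − cos(f^t/(2s))) below the arc; this closed form follows directly from the apothem, and the quick check below is ours:

```python
import math

def max_gap(d, f, s):
    """Worst-case radial gap between a pie of radius d (angle f) and its
    s-triangle linearization: d minus the apothem d*cos(f/(2s))."""
    return d * (1 - math.cos(f / (2 * s)))

# the gap falls off quickly as the number of triangles grows
gaps = [max_gap(1.0, math.pi / 2, s) for s in (1, 2, 4, 8, 16)]
```

For a 90° pie of unit radius, the gap drops from about 0.29 at s = 1 to about 0.001 at s = 16, which is why a modest s already makes the misclassification discussed above negligible.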
As shown in Fig. 5, the experimental results indicate that BLA incurs less cost for both the low (2Mpx) and high (10Mpx) resolution requirements compared to the BIP approach. In particular, when the resolution requirement is increased to 10Mpx, BLA provides around 15% cost savings. For the 2Mpx case, the cost is still lower (although not apparent in the figure due to its scale) owing to the use of fewer cameras. Note that while BLA guarantees full ω-pC for ω = 1 to 4, BIP does not guarantee 100% ω-pC, and thus the coverage percentage values for BIP are shown with arrows in Fig. 5. Finally, the BIP approach works for only one type of camera at a time (i.e., either 2Mpx or 10Mpx), which is not the case for BLA. We have also examined the cost-resolution trade-off for the BLA approach, shown in the same Fig. 5. The results show that, in order to provide a higher average resolution, the cost increases linearly for each particular number of perspectives. This is because we need to increase the number of high-resolution cameras, which eventually increases the total cost. With an increasing number of perspectives, the cost gap among different resolutions widens further due to the increasing number of cameras required to provide ω-pC.
Fig. 5. Comparison of BLA and BIP approach and trade-off between resolution & cost for BLA, where ω varies. Coverage values for BIP are shown with arrows.
VI. CONCLUSION

In this paper, we have introduced a camera deployment technique for heterogeneous VSNs to provide video panoramas with a trade-off between cost and resolution. The presented algorithm minimizes cost while providing the required average resolution for a given area A (or different required resolutions for sub-regions of A). The technique is based on solving a MIP with two phases in an iterative manner. Among these phases, the master-problem finds a solution for an initial set of points, while the sub-problem determines new points that need to be covered and passes them back to the master-problem. The solution provided is optimal based on the piecewise linear approximation of the FoV arcs. We have shown that the solution matches the real case (i.e., with regular FoV arcs) when the number of edges used in the optimization is above a certain threshold. Experimental results indicate the superiority of our algorithm with respect to ω-pC compared to existing approaches. In
addition, the algorithm can work with any type of cameras (i.e., any cost, resolution, FoV, DoF, etc.) and provides an optimal solution by minimizing the total cost.

REFERENCES
[1] A. Rowe, A. Goode, D. Goel, and I. Nourbakhsh, "CMUcam3: An open programmable embedded vision sensor," Technical Report RI-TR-07-13, Carnegie Mellon Robotics Institute, 2007.
[2] I. F. Akyildiz, T. Melodia, and K. R. Chowdhury, "A survey on wireless multimedia sensor networks," Computer Networks, vol. 51, no. 4, pp. 921–960, March 2007.
[3] S. Soro and W. Heinzelman, "A survey of visual sensor networks," Advances in Multimedia, vol. 2009, article ID 640386, 2009.
[4] J. O'Rourke, "Open problem from art gallery solved," International Journal of Computational Geometry and Applications, vol. 2, 1992, pp. 215–217.
[5] A. Newell, K. Akkaya, and E. Yildiz, "Camera placement techniques for providing multi-perspective event coverage in wireless multimedia sensor networks," in IEEE Local Computer Networks Conference (LCN), 2010.
[6] E. Yildiz, K. Akkaya, E. Sisikoglu, and M. Sir, "An exact algorithm for providing multi-perspective event coverage in wireless multimedia sensor networks," in IEEE International Wireless Communications and Mobile Computing Conference (IWCMC), 2011.
[7] K.-Y. Chow, K.-S. Lui, and E. Lam, "Achieving 360 angle coverage with minimum transmission cost in visual sensor networks," in IEEE Wireless Communications and Networking Conference (WCNC 2007), March 2007.
[8] S. Soro and W. Heinzelman, "Camera selection in visual sensor networks," in IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS 2007), 2007.
[9] Y. Wang and G. Cao, "On full-view coverage in camera sensor networks," in IEEE INFOCOM'11, Shanghai, China, April 2011.
[10] A. Ercan, A. Gamal, and L. Guibas, "Object tracking in the presence of occlusions via a camera network," in Proceedings of the International Conference on Information Processing in Sensor Networks, 2007, pp. 509–518.
[11] R. Church and C. ReVelle, "The maximal covering location problem," Papers in Regional Science, vol. 32, no. 1, pp. 101–118, 1974.