
Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network

Rui Xie 1, Xianrong Wan 1,*, Sheng Hong 2 and Jianxin Yi 1

1 Radio Detection Research Center, School of Electronic Information, Wuhan University, Wuhan 430072, China; [email protected] (R.X.); [email protected] (J.Y.)
2 Cognitive Radio Sensor Networks Laboratory, School of Information Engineering, Nanchang University, Nanchang 330038, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-27-6877-5997

Academic Editors: Francisco Javier Falcone Lanas, Ana Alejos and Leyre Azpilicueta Received: 10 March 2017; Accepted: 1 June 2017; Published: 14 June 2017

Abstract: The performance of a passive radar network can be greatly improved by an optimal network structure. Generally, radar network structure optimization consists of two aspects, namely placing the receivers in suitable positions and selecting appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key of the bisection algorithm lies in solving the partition set covering problem (PSCP), which is handled by a hybrid algorithm that couples convex optimization with a greedy dropping algorithm. Finally, the performance of the proposed algorithm is validated via numerical simulations.

Keywords: passive radar network; receiver placement; illuminator selection; partition p-center problem; set covering problem

1. Introduction

In recent years, passive radars that use broadcast, communication, radionavigation, or non-cooperative radar signals as illuminators of opportunity have become a research hotspot [1-6]. Radar netting is increasingly employed for passive radars to enhance their performance (such as coverage, positioning accuracy and tracking precision) [7-12]. Because the performance of a passive radar network (PRN) depends strongly on its geometric configuration [1,13-19], the topology optimization problem for a PRN has been widely researched.

There are a variety of evaluation indicators for the topology optimization of a PRN, the two most important being coverage and positioning accuracy. Previous literature has paid more attention to the positioning accuracy of a PRN [15-17,20]. In [15], the optimization model was converted into a knapsack problem and then solved by a greedy algorithm, and the effect of the initial value on algorithm performance was discussed. In [20], two transmitter subset selection strategies for identifying optimal sets in FM-based PRNs were developed; the purpose was either to minimize the number of selected transmitters for a predetermined positioning performance threshold, or to minimize the positioning error for a given subset size. However, little work has been done to address the coverage indicator. Coverage defines how well the target of interest is monitored, which is a precondition for radar operation.


In a previous study [21], we preliminarily discussed the coverage optimization problem and proposed a corresponding solving algorithm. This paper conducts further studies on this problem and considers possible improvements in receiver placement and illuminator selection based on the previous paper's findings. The importance of receiver placement has been discussed in [21], hence we focus on the importance of illuminator selection here.

The illuminator selection problem in a PRN should be considered from two perspectives: the uncontrollability of illuminator distributions and the advantages of multiband detection. Firstly, the illuminators of opportunity in a PRN are generally uncontrollable, and their distributions are not designed for radar detection. Therefore, different illuminators are selected for target detection in different regions. For example, two kinds of illuminators with different frequencies and coverage areas are presently used in Wuhan, China: the China Mobile Multimedia Broadcasting (CMMB) and the Digital Television Terrestrial Multimedia Broadcasting (DTMB), both in the ultrahigh frequency (UHF) band. The working frequency of the radar receiver is determined by the illuminators available in the different areas of interest. Secondly, multiband detection has significant advantages, given that multiband radars can improve detection sensitivity and extend the coverage area. A multiband detection network can be constructed to achieve the performance requirement, especially if greater surveillance ranges are required. Therefore, the joint optimization of receiver placement and illuminator selection can be treated as a multiband PRN construction problem, and three fundamental questions should be considered:

(1) How many receivers should be placed?
(2) Where should the receivers be placed?
(3) Which frequency should be selected for each receiver?

By considering questions (2) and (3) jointly, a joint optimization model for the multiband PRN is built with the required RCS for target detection as the criterion. In our optimization model, the total coverage is a combination of multiple subareas because the coverage areas of the individual illuminators differ. Our optimization model is therefore called a partition p-center problem (PPCP), which reduces to a p-center problem (PCP) when only one frequency and first-order coverage are considered. In the PPCP, question (1) is treated as a constraint. When this constraint is exchanged with the objective function of the PPCP, we obtain a partition set covering problem (PSCP), which likewise reduces to a set covering problem (SCP). Solving the PSCP is instrumental in solving the PPCP through the above conversion, so both the PPCP and the PSCP are addressed in this paper.

The PCP and SCP are combinatorial optimization problems and are NP-complete [22]. The PCP is commonly used in national defense and military applications because it optimizes the worst-case condition. The SCP minimizes the total number of receivers, or the construction cost, needed to satisfy the coverage requirements. These two problems are the basis for the study of the PPCP and PSCP, hence we review the related research on both problems below.

The PCP can be solved by heuristic and exact algorithms. In [23], a tabu search and a variable neighborhood search were used. In [24], the author summarized many heuristic algorithms, such as the greedy, interchange, alternate (Voronoi diagram) and neighborhood search algorithms. Based on these, a scatter search algorithm was proposed, which is a hybrid developed by combining several heuristic algorithms; simulations of this hybrid algorithm exhibited significant improvements in global search performance [24]. Heuristic algorithms solve the PCP quickly, but when used alone the results may be trapped in a local optimum. Therefore, an exact algorithm similar to the dichotomy method for finding the root of an equation was proposed in [25,26]. This algorithm relies on iteratively solving a series of SCPs: at each iteration, it sets a threshold value to judge the fulfillment of the PCP constraints and updates its lower and upper bounds accordingly. The key idea of this algorithm will be adopted in our PPCP solution.


The SCP can likewise be solved by heuristic and exact algorithms. The heuristic algorithms include the greedy algorithm [27], the ant colony algorithm [28] and local search [29]. A previous report discussed the exact approach as represented by branch and bound algorithms [30]. The SCP can also be solved by convex programming. Convex optimization is a recently developed heuristic optimization technique [31]. Through convex relaxation, this heuristic constructs a convex problem, or several sequential convex problems, to approximate the original problem. By solving the relaxed convex problem, an approximate optimal solution to the original problem is obtained. Convex optimization is attractive in terms of both computational complexity and convergence. Nevertheless, a crucial step for the convex optimization technique is to transform the non-convex model into a convex model [32-35]. Therefore, the convex relaxation method is the focus of this study in solving the SCP.

The PPCP and PSCP models differ from the conventional PCP and SCP models because of the different network pattern. In a conventional sensor network, each sensor node is capable of sensing on its own, so the well-known performance measures from experimental design are convex functions of the placement vector, as discussed in [31-33]. In a PRN, however, each receiver must combine with a transmitter to form a sensing unit. This leads to a different performance measure and a different placement problem. When the multiband case and higher-order coverage are considered, the resulting optimization model is difficult to solve directly with existing convex optimization methods.

The first contribution of our work is the establishment of the PPCP model. The PPCP model is more complex than the fundamental optimization problem, which includes only one frequency. Its complexity derives from two aspects, namely the bistatic configuration and the K-coverage requirement, which is defined as finding a receiver placement such that every target in the region is covered by at least K transceiver pairs. The second contribution is the solving algorithm for the PPCP model: a bisection algorithm based on the characteristics of the PPCP objective function. Since the bisection algorithm relies on the solutions of a series of PSCPs, the algorithm for solving the PSCP is the third contribution. This part consists of two aspects, namely the convex relaxation method for the PSCP, and a hybrid algorithm, developed by combining convex optimization with a greedy dropping algorithm, for solving the relaxed convex problem of the PSCP.

The rest of this paper is organized as follows: in Section 2, we introduce the joint optimization model for the placement of receivers and the selection of illuminators in a passive radar system. In Section 3, we develop a bisection algorithm to solve the PPCP. A hybrid algorithm for solving the PSCP is then proposed in Section 4. The simulation results are presented in Section 5. Finally, conclusions are drawn in Section 6.

2. Problem Description

2.1. Definitions

In a multiband radar, the transmitters may form a single frequency network (SFN) structure, wherein multiple illuminators simultaneously work at the same frequency and transmit the same signal.
The symbols of the scenario parameters are listed as follows:

N: Number of types of available illuminators;
Ω_F = {f_1, f_2, ..., f_N}: Frequency set;
C_n: Number of transmitter sites for each type of illuminator;
Ω_Tn = {t_{n,1}, t_{n,2}, ..., t_{n,C_n}}: Transmitter coordinate set for each type of illuminator;
Ω_T = {Ω_T1, Ω_T2, ..., Ω_TN}: Total coordinates of the illuminators;
M: Number of targets;
Ω_M = {s_1, s_2, ..., s_M}: Target set;
J: Number of optional receiver sites;
Ω_OR = {r_1, r_2, ..., r_J}: Optional receiver coordinate set.
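For readers who want to experiment with the model, the scenario can be captured in a small container such as the following Python sketch; the class name, field names and the two-frequency example values are ours, not the paper's.

```python
from dataclasses import dataclass
from typing import List, Tuple

Coord = Tuple[float, float]

@dataclass
class Scenario:
    frequencies: List[float]          # Omega_F: one carrier per illuminator type (Hz)
    transmitters: List[List[Coord]]   # Omega_T: transmitters[n] holds the C_n sites of type n
    targets: List[Coord]              # Omega_M: the M target points
    candidate_receivers: List[Coord]  # Omega_OR: the J optional receiver sites

# Two illuminator types (N = 2) with three and two transmitter sites, one target,
# and two candidate receiver sites; all coordinates are in metres and purely illustrative.
scene = Scenario(
    frequencies=[600e6, 650e6],
    transmitters=[[(0.0, 0.0), (10e3, 0.0), (0.0, 10e3)],
                  [(40e3, 40e3), (50e3, 30e3)]],
    targets=[(20e3, 60e3)],
    candidate_receivers=[(5e3, 5e3), (35e3, 35e3)],
)
```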


The bistatic radar range equation for the multiband radar can be expressed as:

SNR_{m,n,i,j} = P_{av,n,i} - L_{m,n,i} + σ_{rcs,m} - L_{m,n,j} + A_{r,n} + G_n - L_{s,n} - N_{0,n}    (1)

where m indicates that the target follows s_m ∈ Ω_M, n indicates that the working frequency follows f_n ∈ Ω_F, i indicates that the transmitter follows t_i ∈ Ω_Tn, and j indicates that the receiver follows r_j ∈ Ω_OR. The meanings of the other variables are as follows:

SNR_{m,n,i,j}: Echo signal-to-noise ratio (SNR) of the target s_m associated with the bistatic pair (t_i, r_j) and the working frequency f_n;
P_{av,n,i}: Effective isotropic radiated power of the transmitter t_i;
L_{m,n,i}: Propagation loss from the transmitter t_i to the target s_m at the working frequency f_n;
σ_{rcs,m}: RCS of the target s_m;
L_{m,n,j}: Propagation loss from the target s_m to the receiver r_j at the working frequency f_n;
A_{r,n}: Receiving antenna effective area at the working frequency f_n;
G_n: Processing gain at the working frequency f_n;
L_{s,n}: Hardware system loss at the working frequency f_n;
N_{0,n}: Noise power at the working frequency f_n.

According to (1), the required RCS for target detection can be expressed as:

σ_{m,n,i,j} = L_{m,n,i} + L_{m,n,j} - G_{sys,n,i}    (2)

where G_{sys,n,i} = P_{av,n,i} + A_{r,n} + G_n - L_{s,n} - N_{0,n} - SNR_{min,n}, and SNR_{min,n} is the minimum detectable SNR, which is related to the working frequency. We therefore define the meaning of coverage in a PRN: if the actual RCS is not less than the required RCS, then the target echo SNR is not less than SNR_{min,n}, and the target is said to be covered. In addition, the actual RCS fluctuates and obeys some statistical probability distribution f(σ). Thus, target detection is probabilistic, and the detection probability can be expressed as ∫_{σ_{m,n,i,j}}^{∞} f(σ) dσ. It can be deduced that the required RCS σ_{m,n,i,j} is the upper quantile of the probability distribution and reflects the confidence level of the target being covered.
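As a numerical illustration of (1) and (2), the following Python sketch evaluates the required RCS for one bistatic pair under a free-space propagation assumption; the function names and all parameter values are illustrative only and are not taken from the paper.

```python
import numpy as np

def free_space_loss_db(distance_m, freq_hz):
    """One-way free-space propagation loss 20*log10(4*pi*d/lambda), in dB."""
    wavelength = 3e8 / freq_hz
    return 20.0 * np.log10(4.0 * np.pi * distance_m / wavelength)

def required_rcs_dbsm(tx, rx, target, freq_hz, eirp_dbw, ar_db, gain_db,
                      loss_db, noise_dbw, snr_min_db):
    """Required RCS sigma_{m,n,i,j} = L_tx + L_rx - G_sys (all terms in dB), as in (2)."""
    l_tx = free_space_loss_db(np.linalg.norm(target - tx), freq_hz)
    l_rx = free_space_loss_db(np.linalg.norm(target - rx), freq_hz)
    g_sys = eirp_dbw + ar_db + gain_db - loss_db - noise_dbw - snr_min_db
    return l_tx + l_rx - g_sys

# Illustrative numbers only (not the values of Tables 1 and 2).
tx = np.array([10e3, 0.0]); rx = np.array([0.0, 0.0]); target = np.array([20e3, 30e3])
print(required_rcs_dbsm(tx, rx, target, freq_hz=600e6, eirp_dbw=30.0, ar_db=-10.0,
                        gain_db=50.0, loss_db=6.0, noise_dbw=-130.0, snr_min_db=12.0))
```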

2.2. Assumptions

Before introducing the optimization model, two basic assumptions that simplify the model are described as follows:

(1) We assume the receiving antenna to be omnidirectional. In fact, the optimization model can be extended to a directional receiving antenna; the extension can follow the treatment of the multiband network in the following section.
(2) Similar statistical target characteristics are observed at the different UHF-band frequencies, such that the required RCS for target detection can be used as the unified criterion for all frequencies. Otherwise, the evaluation criterion of the K-coverage probability should be calculated according to the RCS model and the required RCS.

2.3. Optimization Model

Undoubtedly, smaller required RCS values result in higher probabilities of detecting the target. Therefore, the required RCS is selected as the objective function in this study. The joint optimization problem is established in this subsection so as to minimize the maximum value of the required RCS over all targets, namely optimizing the worst condition based on the fundamental PCP theory.


The required RCS for each target can be given as:

σ_{m,K} = \min_{f_n ∈ Ω_{Fw}} \left( \min_{t_i ∈ Ω_{Tn}, r_j ∈ Ω_R} (Θ, K_n) \right)    (3)

where min(Θ, K) is the Kth minimum value in matrix Θ; when the number of elements in Θ is less than K, the value of min(Θ, K) is set to infinity. The structure of matrix Θ is given in (4). It is a matrix with a size of M × (J \sum_{n=1}^{N} C_n); each row of Θ corresponds to a target and each column corresponds to a bistatic pair. In (3), \min_{t_i ∈ Ω_{Tn}, r_j ∈ Ω_R}(Θ, K_n) represents the required RCS for each frequency network, and \min_{f_n ∈ Ω_{Fw}} indicates that at least one frequency network is required to satisfy the K-coverage condition in the multiband PRN:

Θ = \begin{bmatrix}
σ_{1,1,1:C_1,1} & \cdots & σ_{1,1,1:C_1,J} & \cdots & σ_{1,n,i,j} & \cdots & σ_{1,N,1:C_N,1} & \cdots & σ_{1,N,1:C_N,J} \\
σ_{2,1,1:C_1,1} & \cdots & σ_{2,1,1:C_1,J} & \cdots & σ_{2,n,i,j} & \cdots & σ_{2,N,1:C_N,1} & \cdots & σ_{2,N,1:C_N,J} \\
\vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots \\
σ_{M,1,1:C_1,1} & \cdots & σ_{M,1,1:C_1,J} & \cdots & σ_{M,n,i,j} & \cdots & σ_{M,N,1:C_N,1} & \cdots & σ_{M,N,1:C_N,J}
\end{bmatrix}    (4)
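A minimal sketch of how one row of Θ could be evaluated according to (3) and (4) is given below; the block layout (frequency-major, then receiver, then transmitter) follows (4), while the helper names and the toy numbers are ours.

```python
import numpy as np

def kth_smallest_or_inf(values, k):
    """k-th smallest entry of `values`; +inf when fewer than k entries exist, as in (3)."""
    values = np.sort(np.asarray(values))
    return values[k - 1] if values.size >= k else np.inf

def sigma_mk(theta_row, pairs_per_freq, coverage_order):
    """Evaluate sigma_{m,K} for one target: take the K_n-th smallest required RCS inside
    each frequency block of the row of Theta, then the minimum over the frequencies."""
    per_freq, start = [], 0
    for n, n_pairs in enumerate(pairs_per_freq):
        block = theta_row[start:start + n_pairs]        # bistatic pairs of frequency n
        per_freq.append(kth_smallest_or_inf(block, coverage_order[n]))
        start += n_pairs
    return min(per_freq)

# Toy row of Theta: two frequencies with 4 and 2 available bistatic pairs, K = [2, 1].
row = np.array([7.1, 9.3, 6.4, 8.0, 5.2, 10.5])
print(sigma_mk(row, pairs_per_freq=[4, 2], coverage_order=[2, 1]))  # min(7.1, 5.2) = 5.2
```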

In the area of interest, the required RCS for all targets, which represents the worst condition, can be formulated as:

f(Ω_R, Ω_{Fw}) = \max_{s_m}(σ_{m,K})    (5)

where Ω_R is a subset of Ω_OR, and Ω_Fw is the corresponding set of operating frequencies. Based on the PCP theory, the optimization objective is to minimize this maximum value with P receivers configured. The optimization model in a multiband PRN can then be formulated as:

(Ω_R^*, Ω_{Fw}^*) = \underset{Ω_R, Ω_{Fw}}{\operatorname{argmin}} f(Ω_R, Ω_{Fw}), \quad \text{s.t. } ‖Ω_R‖_0 = P    (6)

where ‖Ω_R‖_0 denotes the number of elements in Ω_R.

2.4. Standard Form of Optimization Model

The standard PCP model consists of the standard inputs, the standard outputs and five kinds of constraints [24]. Its inputs include the cost matrix and the number of facilities to locate. The outputs consist of the placement matrix, the assignment matrix and the maximum distance between a demand node and the nearest facility. Its constraints are the maximum distance constraint, the assignment variables constraint, the placement variables constraint, the variables relationship constraint and the integrality constraint. The standard form of the model (6) is derived in this subsection according to the standard PCP model.

Firstly, the cost matrix, which is one of the model inputs, has been set up as (4). Subsequently, we define the outputs of the model. A joint placement matrix U is defined as:

U = [u_{1,1}, u_{1,2}, ..., u_{1,J}, u_{2,1}, u_{2,2}, ..., u_{2,J}, ..., u_{n,j}, ..., u_{N,1}, u_{N,2}, ..., u_{N,J}]^T    (7)

where the superscript T denotes the transpose of a matrix. The placement variables u_{n,j} used in our problem are defined as:

u_{n,j} = \begin{cases} 1 & \text{if a receiver is located at } r_j \text{ and works at frequency } f_n, \\ 0 & \text{otherwise.} \end{cases}


The placement variables u_q are the variables of the optimization problem, while v_{m,q'} denote the assignment variables and are defined as:

v_{m,q'} = \begin{cases} 1 & \text{if the target } s_m \text{ is assigned to the } q'\text{th transceiver pair,} \\ 0 & \text{otherwise.} \end{cases}

The ranges of the subscripts are m = 1, 2, ..., M; q = 1, 2, ..., NJ; and q' = 1, 2, ..., J \sum_{n=1}^{N} C_n.

The relationship between n and q can be formulated as n = floor((q - 1)/J) + 1, where floor(·) is the rounding-down operation. For the variables relationship constraint, we must pay attention to the conversion between the placement matrix and the feasible bistatic pairs. Thus a transform matrix is defined as:

T = \begin{bmatrix}
E_{C_1,1} & & & & & \\
& \ddots & & & & \\
& & E_{C_1,1} & & & \\
& & & E_{C_N,1} & & \\
& & & & \ddots & \\
& & & & & E_{C_N,1}
\end{bmatrix}_{\left(J \sum_{n=1}^{N} C_n\right) \times (NJ)}    (8)

where E_{C_n,1} is a matrix of ones with a size of C_n × 1, and T is a matrix with a size of (J \sum_{n=1}^{N} C_n) × (NJ).

When considering the assignment variables constraint, we must count the number of bistatic pairs associated with the assessment of the K-coverage condition in each frequency network. The statistical matrix can be expressed as:

S = \begin{bmatrix}
E_{J,1} & & & \\
& E_{J,1} & & \\
& & \ddots & \\
& & & E_{J,1}
\end{bmatrix}_{(NJ) \times N}    (9)

where E_{J,1} is a matrix of ones with a size of J × 1, and S is a matrix with a size of (NJ) × N.
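The block structures of (8) and (9) are easy to assemble with Kronecker products; the following sketch (our own helper, not the authors' code) builds T, S and Q = TS for a toy configuration.

```python
import numpy as np
from scipy.linalg import block_diag

def build_t_s_q(c, j):
    """Build the transform matrix T of (8), the statistical matrix S of (9) and Q = T S.
    c[n] is the number of transmitters of frequency n; j is the number of candidate sites."""
    # T: block diagonal with one (J*C_n) x J block per frequency; each column stacks C_n ones.
    t_blocks = [np.kron(np.eye(j), np.ones((cn, 1))) for cn in c]
    t = block_diag(*t_blocks)                      # size (J*sum(C_n)) x (N*J)
    s = np.kron(np.eye(len(c)), np.ones((j, 1)))   # size (N*J) x N, per (9)
    return t, s, t @ s                             # Q marks which frequency each pair belongs to

t, s, q = build_t_s_q(c=[3, 2], j=4)
print(t.shape, s.shape, q.shape)                   # (20, 8) (8, 2) (20, 2)
```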

Let Q = TS, such that the standard p-center form of the model (6) can be expressed as:

min w    (10a)
s.t. Θ_{m,q'} v_{m,q'} ≤ w v_{m,q'}    (10b)
v_{m,q'} ≤ \sum_{q} T_{q',q} u_q    (10c)
\max_{n} \left( \sum_{q'} v_{m,q'} Q_{q',n} - K_n \right) = 0    (10d)
\sum_{q} u_q = P    (10e)
v_{m,q'}, u_q ∈ {0, 1}    (10f)

In (10a), w is the objective function value. The meaning of each constraint in (10) can be described as follows: constraint (10c) prevents the assignment of a target to a bistatic pair that does not actually operate; constraint (10d) requires all targets to satisfy the K-coverage condition in at least one frequency network; and a total of P receivers must be configured, which is modeled by constraint (10e).

The model (10a)-(10f) boils down to the PPCP, because the max function in constraint (10d) is piecewise and thereby partitions the coverage area of each frequency network. The PPCP can be simplified into the PCP. When K_n = 1, for v_{m,q'} ∈ {0, 1}, constraint (10d) can be expressed as \max_n { \sum_{q'} v_{m,q'} Q_{q',n} } = 1. If N = 1, then \sum_{q'} v_{m,q'} = 1 can be derived from constraint (10d), and this is the assignment variables constraint in the traditional PCP. On the other hand, the structure of Θ can be further simplified when K_n = 1: for each optional receiver and each working frequency, only the minimum required RCS of the corresponding feasible bistatic pairs needs to be retained, so that q and q' take the same number of values and the transform matrix T becomes an identity matrix, which allows constraint (10c) to be converted into v_{m,q} ≤ u_q. These constraints describe the relationship between the assignment variables and the placement variables in the traditional PCP. The above statements verify that the PPCP is an upgraded version of the PCP. In this paper, we mainly focus on solving the PPCP to obtain a joint optimization allocation scheme for receiver placement and illuminator selection in a multiband PRN.

3. The Proposed Algorithm for Solving the PPCP

In this section, a bisection algorithm is proposed to solve the PPCP, which first requires the characteristics of the objective function to be analyzed.

Proposition 1. Assume that the optimal objective function value in (10) is w*(p) with p receivers configured; then w*(p) decreases with the increase of p.

Proof. Assume the placement matrix corresponding to w*(p) is U*(p). The set of receivers to be configured is S_o*(p) = {p | U*(p) = 1}, and S_uo*(p) = {p | U*(p) = 0} represents the set of receivers that are not opened. Using S_o*(p), we construct the solution S_o(p + 1 | p) and the corresponding objective function value w(p + 1 | p). The construction method of S_o(p + 1 | p) is S_o(p + 1 | p) = S_o*(p) ∪ {u_nq}, where u_nq ∈ S_uo*(p). Thus, w(p + 1 | p) ≥ w*(p + 1) can be inferred according to the principle of optimization. On the other hand, ∀m, w_m(p + 1 | p) = min(w_m*(p), Θ_{m,u_{q'}}) ≤ w_m*(p), where Θ_{m,u_{q'}} is the corresponding value of the new receiver in the matrix. Thus w(p + 1 | p) = \max_m(w_m(p + 1 | p)) ≤ \max_m(w_m*(p)) = w*(p). Combining the two results, we can be sure that w*(p) ≥ w*(p + 1). It is difficult for equality to occur, although a very special network topology in which it does is described in Appendix A. Thus, w*(p) > w*(p + 1) holds under ordinary circumstances, wherein w*(p) decreases with an increase in p. □

According to Proposition 1, the dichotomy used to find the root of an equation is adopted to obtain the optimal objective function value, which requires reiterative guessing. At each iterative search, the guessed value must be adjusted to satisfy the constraints. In other words, the bisection algorithm is based on solving a series of PSCPs [26]. The PSCP can be expressed as:

min ‖U‖_0    (11a)
s.t. Θ_{m,q'} v_{m,q'} ≤ w_mid v_{m,q'}    (11b)
v_{m,q'} ≤ \sum_{q} T_{q',q} u_q    (11c)
\max_{n} \left( \sum_{q'} v_{m,q'} Q_{q',n} - K_n \right) = 0    (11d)
v_{m,q'}, u_q ∈ {0, 1}    (11e)

where ‖U‖_0 denotes the number of nonzero entries in U. The bisection algorithm for solving the PPCP is given in Algorithm 1.

Algorithm 1: The bisection algorithm for solving the PPCP
Input: The matrix Θ and the number of receivers to be configured, P.
Output: The optimal placement matrix U*_opt and the optimal objective function value w*_opt.
Step 1: Initialize the upper bound u = w*(1), the lower bound l = w*(NJ), and the iterative termination value ε.
Step 2: If u - l ≥ ε, proceed to Step 3. Else, go to Step 4.
Step 3: Set w_mid = (u + l)/2. Solve (11) and obtain the optimal solution U_PSCP. If ‖U_PSCP‖_0 ≤ P, set u = w_mid. Else, set l = w_mid. Return to Step 2.
Step 4: Construct a set W_opt that contains the possible optimal values according to u and l, such that W_opt = {Θ_{m,q'} | l ≤ Θ_{m,q'} ≤ u, ∀m, q'}. Scan w_mid ∈ W_opt, solve (11) and obtain the optimal solution U_k and the optimal objective function value w_k. The output is then (U*_opt, w*_opt) = {(U_k, w_k) | ‖U_k‖_0 = P, w_k = min_k(w_k)}.
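A compact sketch of Algorithm 1 in Python is shown below. It assumes a routine solve_pscp(w_mid) that returns a 0/1 placement vector solving (11) for the given threshold; initializing the bounds with the extreme entries of Θ is a simplification of Step 1, since w*(1) and w*(NJ) are themselves outputs of small optimization problems.

```python
import numpy as np

def bisection_ppcp(theta, p, solve_pscp, eps=0.05):
    """Sketch of Algorithm 1; `solve_pscp` is an assumed external PSCP solver."""
    upper, lower = float(theta.max()), float(theta.min())   # stand-ins for w*(1), w*(NJ)
    while upper - lower >= eps:                              # Steps 2 and 3
        w_mid = 0.5 * (upper + lower)
        if np.count_nonzero(solve_pscp(w_mid)) <= p:
            upper = w_mid                                    # feasible with at most P receivers
        else:
            lower = w_mid
    # Step 4: enumerate the entries of Theta that fall inside the final bracket.
    best = None
    for w in np.unique(theta[(theta >= lower) & (theta <= upper)]):
        u = solve_pscp(w)
        if np.count_nonzero(u) == p and (best is None or w < best[0]):
            best = (w, u)
    return best                                              # (w_opt, U_opt) or None
```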

When the upper and lower bounds are quite close, the convergence rate of the dichotomy becomes extremely slow. Because the optimal objective function value must exist in the matrix Θ, only a few elements are present in W_opt when u and l are close, and the optimal solution can then be obtained by enumeration; this idea is embodied in Step 4.

4. The Proposed Algorithm for Solving the PSCP

According to Algorithm 1, the key to solving (10) is to first solve (11). In this section, a hybrid algorithm combining convex optimization with a greedy dropping algorithm is proposed to solve the PSCP.

4.1. Model Conversion

Given the difficulty of solving the model directly, we propose two conversion methods to simplify it; they are introduced in this subsection.

Firstly, a decision matrix Ψ is constructed to meet the constraints presented in (11b). Ψ has the same size as Θ: if Θ_{m,q'} ≤ w_mid, then Ψ_{m,q'} = 1; else, Ψ_{m,q'} = 0. Thus, the constraints in (11c) and (11d) can be expressed as:

\max_{n}(Ψ T \operatorname{diag}(U) S - K) ≥ O_{M×1}    (12)

where O_{M×1} is an all-zero matrix with a size of M × 1, diag(·) denotes the diagonal matrix formed from a vector, and K is the required coverage order matrix, expressed as K = E_{M×1} [K_1 \; K_2 \; \cdots \; K_N], where E_{M×1} is an all-one matrix with a size of M × 1. The model in (11) becomes:

min ‖U‖_0    (13a)
s.t. \max_{n}(Ψ T \operatorname{diag}(U) S - K) ≥ O_{M×1}    (13b)
U ∈ {0, 1}^{NJ}    (13c)


The constraint in (13b) can be expressed in another form:

\max_{n}(Ψ T \operatorname{diag}(U) S \operatorname{diag}(K')) ≥ E_{M×1}    (14)

where K' = [1/K_1, 1/K_2, ..., 1/K_N]^T.

The model in (13) is the result of the first conversion method, which iteratively solves (13) based on the reweighted process introduced in [35]. However, the solution of this model is not ideal for quite a few scenes. Thus, (13) must be further converted by the second conversion method.

Let Ψ' = ΨT, where Ψ'_{m,q} > K_n indicates that the target can be covered by only one receiver; its meaning is similar to that of Ψ'_{m,q} = K_n. We construct a K-coverage rate matrix Φ in view of this case. The elements of the matrix Φ can be expressed as Φ_{m,q} = min(Ψ'_{m,q}/K_n, 1). The K-coverage rate represents the probability of satisfying the K-coverage condition. The curve of the K-coverage rate is shown in Figure 1.

Figure 1. The curve of the K-coverage rate.

Based on the K-coverage rate matrix, (13) becomes:

min ‖U‖_0    (15a)
s.t. \max_{n}(Φ \operatorname{diag}(U) S) ≥ E_{M×1}    (15b)
U ∈ {0, 1}^{NJ}    (15c)
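The conversion from Θ to Ψ, Ψ' = ΨT and the K-coverage rate matrix Φ, together with the feasibility check behind (15b), can be sketched as follows; the helper names are ours, and the column-to-frequency bookkeeping assumes the frequency-major ordering of (7).

```python
import numpy as np

def coverage_matrices(theta, t, s, w_mid, k):
    """Build Psi (decision matrix), Psi' = Psi T and the K-coverage rate matrix Phi
    with entries min(Psi'_{m,q} / K_n, 1); k is the list [K_1, ..., K_N]."""
    psi = (theta <= w_mid).astype(float)        # 1 where a bistatic pair can cover the target
    psi_p = psi @ t                             # feasible-pair counts per (frequency, receiver)
    j = t.shape[1] // len(k)                    # columns form frequency-major blocks of size J
    k_per_col = np.repeat(np.asarray(k, dtype=float), j)
    phi = np.minimum(psi_p / k_per_col, 1.0)
    return psi, psi_p, phi

def k_coverage_satisfied(phi, s, u):
    """Check constraint (15b): every target reaches rate 1 in at least one frequency."""
    per_freq = (phi * u) @ s                    # equivalent to Phi diag(U) S
    return bool(np.all(per_freq.max(axis=1) >= 1.0))
```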

In the following subsection, we focus on solving (15); (13) can be solved by the same method.

4.2. Sparsity-Enhancing Iterative Algorithm

In this subsection, we obtain an equivalent model of (15) using the convex relaxation method. The equivalent model is then solved by a sparsity-enhancing iterative algorithm. The domain of the optimization variables and all the constraints in (15) are nonconvex. The convex relaxation method is described as follows. First, U ∈ {0, 1}^{NJ} is replaced by U ∈ [0, 1]^{NJ}, where U ∈ [0, 1]^{NJ} means U is an (NJ) × 1 vector with each element in the interval [0, 1]. Next, the nonconvex constraint in (15b) is relaxed: the max function is replaced by the sum function, such that the constraint in (15b) becomes:

ΦU ≥ E_{M×1}    (16)


Therefore (15) can be relaxed as:

min ‖U‖_0, s.t. ΦU ≥ E_{M×1}, U ∈ [0, 1]^{NJ}    (17)

The sparsity-enhancing reweighted l1 minimization in [35] is an effective method for solving (17). This method is a recursive realization of the surrogate based on the sum of logarithms. The relaxation model of (17) can be expressed as:

min V_k^T U, s.t. ΦU ≥ E_{M×1}, U ∈ [0, 1]^{NJ}    (18)

where V_k is a weight matrix with a size of 1 × (NJ), and k is the iteration counter of the reweighted process. When k = 0, the weight matrix V_k is initialized as an all-one matrix E_{1×(NJ)}. When k ≥ 1, V_k is updated according to:

V_k = 1 / (δ + U_{k-1}^T)    (19)

where U_{k-1} is the optimal solution obtained at the (k-1)th iteration, and δ is a small positive value that prevents the denominator from becoming zero and enhances numerical stability. The model (18) is a convex optimization model that can be solved in polynomial time using interior-point methods. The iterative algorithm for solving (17) is summarized in Algorithm 2.

Algorithm 2: The sparsity-enhancing iterative algorithm for solving (17)
Input: The matrix Θ, w_mid, all K_n, the maximum number of iterations k_max, δ and a small positive value γ.
Output: The optimal placement matrix U_opt_cvx.
Step 1: Construct the K-coverage rate matrix Φ.
Step 2: Initialize k = 0 and V_0 = E_{1×(NJ)}.
Step 3: Solve the model presented in (18).
Step 4: Increase k. When k reaches k_max or the maximum change in the entries of U is less than γ, terminate the iteration and generate the output. Otherwise, update the weight according to (19) and return to Step 3.
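A sketch of Algorithm 2 using CVXPY is given below; it solves the relaxed problem (17) with the sum constraint ΦU ≥ E and the reweighting rule (19). The solver defaults and tolerances are illustrative choices of ours.

```python
import numpy as np
import cvxpy as cp

def reweighted_l1(phi, k_max=10, delta=1e-3, gamma=1e-4):
    """Sketch of Algorithm 2: reweighted l1 relaxation (18)-(19) of problem (17)."""
    m, nj = phi.shape
    weights = np.ones(nj)                       # V_0 = all ones
    u_prev = np.zeros(nj)
    u_val = u_prev
    for _ in range(k_max):
        u = cp.Variable(nj)
        prob = cp.Problem(cp.Minimize(weights @ u),
                          [phi @ u >= np.ones(m), u >= 0, u <= 1])
        prob.solve()                            # an LP; default solver is sufficient
        u_val = u.value
        if np.max(np.abs(u_val - u_prev)) < gamma:
            break                               # entries have stopped changing
        weights = 1.0 / (delta + u_val)         # update (19)
        u_prev = u_val
    return u_val
```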

4.3. Iterative Varying Constraint Algorithm

The optimal solution of (17) can be obtained using Algorithm 2. However, this solution does not always satisfy the constraint in (15b), because that constraint was directly replaced by the sum constraint. We further investigate the constraint in (15b), which can be expressed as:

ΦU ≥ E_{M×1} + [ΦU - \max_{n}(Φ \operatorname{diag}(U) S)]    (20)

If (20) is simplified to ΦU ≥ E_{M×1}, the constraint in (15b) may not be satisfied at some target points. An effective solution is to adjust the constraint and continue solving the new model until (15b) is satisfied. This method is called the iterative varying constraint algorithm and is described as follows.

The iterative varying constraint algorithm is a fixed-point iteration method. Assume that the probable optimal solution is U_p_opt, such that the new constraint, which has a reinforcing effect on the original constraint, is expressed as ΦU ≥ E_{M×1} + [ΦU_p_opt - \max_{n}(Φ diag(U_p_opt) S)]. This iteration is expected to reach equilibrium. At equilibrium, the solutions no longer change, which allows the derivation of ΦU_p_opt ≥ E_{M×1} + [ΦU_p_opt - \max_{n}(Φ diag(U_p_opt) S)]. The probable optimal solution at equilibrium therefore satisfies the constraint in (15b).


Due to the complexity of the optimization function, the iterative varying constraint algorithm is not applicable in all cases. A special case is illustrated as follows. Assume that Φ = [1 - α, 1 - α, α, 0], where 0 < α ≤ 1/3, and S = [1 1 0 0; 0 0 1 1]^T. The optimal solution U = [0 1 1 0]^T is obtained when the relaxed constraint is ΦU ≥ 1. In the next iteration, the new relaxed constraint is ΦU ≥ 1 + α, and the corresponding optimal solution is U = [1 1 0 0]^T. Thereafter, the two constraints appear alternately. The optimal solution satisfies (13b) when the relaxed constraint is ΦU ≥ 1 + α. Although the above example shows that the iterative varying constraint algorithm may fail to converge to the fixed point, the algorithm is generally suitable for most actual situations. Monte Carlo (MC) simulation is used to obtain the probability of failure of the algorithm. The simulation results indicate a failure probability of less than 5% and show the applicability of the algorithm in most scenarios. The iterative varying constraint algorithm is summarized in Algorithm 3.

Algorithm 3: The iterative varying constraint algorithm for solving (15)
Input: The matrix Θ, w_mid, all K_n, the maximum number of iterations i_max, and a small positive value ε.
Output: The optimal placement matrix U_opt_vc.
Step 1: Construct the K-coverage rate matrix Φ.
Step 2: Initialize i_vc = 0 and E_0 = E_{M×1}.
Step 3: Solve the following model using Algorithm 2: min ‖U‖_0, s.t. ΦU ≥ E_{i_vc}, U ∈ [0, 1]^{NJ}.
Step 4: Increase i_vc. When i_vc reaches i_max or the maximum change in the entries of E_{i_vc} is less than ε, terminate the iteration and output the solution. Otherwise, update E_{i_vc} = E_{M×1} + [ΦU_opt^{(i_vc-1)} - \max_{n}(Φ diag(U_opt^{(i_vc-1)}) S)], where U_opt^{(i_vc-1)} is the optimal solution obtained in Step 3, and return to Step 3.

4.4. Greedy Dropping Algorithm

In the previous subsection, the probable optimal solution was obtained using Algorithm 3. Although the solution satisfies (15b), it does not invariably satisfy (15c), because the solution may have some fractional values. This problem has been solved in previous literature: in [35], a randomized rounding algorithm for optimizing the solution was proposed, and a previous report [16] adopted an ordered rounding algorithm. The present study proposes a greedy dropping method to optimize the solution.

We first construct a solution U_0 = ⌈U_opt_vc⌉, where ⌈·⌉ is the rounding-up operation. Given that the objective of (15) is the least number of receivers, we continuously reduce the number of receivers based on the solution U_0 until it reaches the minimum; the remaining nonzero placement variables constitute the final solution. There are few nonzero placement variables in the solution U_0, thus the iterative process terminates in a finite number of steps. The greedy dropping algorithm is summarized in Algorithm 4.

Algorithm 4: The greedy dropping algorithm
Input: The solution U_opt_vc, which is obtained using Algorithm 3.
Output: The final optimal placement matrix U_opt.
Step 1: Construct the initial undetermined solution U_uc^(0) = ⌈U_opt_vc⌉. Set flag = 1. Initialize i_c = 0.
Step 2: If flag = 1, go to Step 3. Else, go to Step 4.
Step 3: Increase i_c and reset flag = 0. Scan the entire solution of the last iteration. Set a nonzero placement variable in the solution to zero and construct a new solution U_uc_temp. Test whether \max_{n}(Φ diag(U) S) ≥ E_{M×1} is satisfied. If satisfied, set flag = 1 and assign U_uc_temp to the solution U_uc^(i_c). When the traversal is complete, return to Step 2.
Step 4: Terminate the iteration. The final optimal solution is U_opt = U_uc^(i_c).
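A sketch of Algorithm 4 is shown below: the fractional solution is rounded up and receivers are then dropped greedily while the K-coverage check of (15b) remains satisfied. The names and the scanning order are ours.

```python
import numpy as np

def greedy_dropping(u_frac, phi, s):
    """Sketch of Algorithm 4: round up, then drop receivers while feasibility holds."""
    u = np.ceil(u_frac).clip(0, 1)

    def feasible(vec):
        # K-coverage check: every target reaches rate 1 in at least one frequency network.
        return bool(np.all(((phi * vec) @ s).max(axis=1) >= 1.0))

    changed = True
    while changed:                                   # keep scanning until no drop survives
        changed = False
        for idx in np.flatnonzero(u):
            trial = u.copy()
            trial[idx] = 0.0
            if feasible(trial):
                u = trial
                changed = True
    return u
```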


4.5. Analysis of Computation

In this subsection, we analyze the computation of the proposed algorithms. Assume that the numbers of iterations of Algorithms 2 and 3 are K_2i and K_3i, respectively. The complexity of the interior-point methods is O(NJ log(1/ε_cvx)), where ε_cvx is the required accuracy parameter. Thus the complexity of Algorithm 3 is O(K_2i K_3i NJ log(1/ε)). The computational complexity of Algorithm 4 can be neglected because it only requires a finite number of matrix multiplications and numerical comparisons. In Step 3 of Algorithm 1, the number of PSCP solutions is assumed to be K_1i. In Step 4 of Algorithm 1, the number of elements in W_opt is K_1e, and it requires O(M(α + N)) time to compute the objective function value, where α = \sum_{n=1}^{N} C_n P \log_2(C_n P). The total time complexity of Algorithm 1 can therefore be deduced as O((K_1i + K_1e) K_2i K_3i NJ log(1/ε) + K_1e M(α + N)).

In contrast, the complexity of the substitution algorithm (SA) introduced in [24] is analyzed. In each iteration, it requires P(NJ - P) evaluations of the objective function. Assume that the number of iterations of the SA is K_SA; then the total time complexity of the SA can be expressed as O(K_SA P(NJ - P) M(α + N)). Normally, K_SA P(NJ - P) M(α + N) < (K_1i + K_1e) K_2i K_3i NJ log(1/ε) + K_1e M(α + N), thus Algorithm 1 is more time consuming than the SA. However, the SA usually requires a host of simulations to achieve the global optimal solution.

5. Simulation Results

Two-dimensional simulations are conducted, as shown in Figure 2. The targets of interest are located in an area of 80 km by 80 km and are marked by black circles. In general, the target points should be obtained by meshing the area of interest; however, the calculation amount then increases rapidly. Therefore, we select targets of interest dispersed on the boundary of the area of interest to reduce the computational load. Normally, if all the boundaries are covered, the entire area of interest is also covered. The optional receivers, marked by pink squares in Figure 2, are dispersed within this area.

Figure 2. Simulation scenario.

Two types of illuminators are present in the simulation scenario. Their different frequencies, effective isotropic radiated powers, locations, and basic parameters are given in Table 1. The red pentagrams represent the first kind of illuminator and the blue pentagram represents the second. In the simulation, the structure of the red frequency network is an SFN, and we assume that K_1 = 4; namely, the target should be covered by at least four bistatic pairs at the first frequency. In the second frequency network, only one bistatic pair is required to detect the target, such that K_2 = 1. Though the abovementioned basic parameters are different, the other system parameters of the two frequency networks are the same (Table 2).

Table 1. The Basic Parameters of Two Frequency Networks.

Parameters | The Red Illuminator | The Blue Illuminator
Frequency | 600 MHz | 650 MHz
Bandwidth | 8 MHz | 10 MHz
Power | 1 kW | 2 kW

Table 2. The Other System Parameters of Two Frequency Networks.

Parameters | Values
Coherent integration time | 0.1 s
Minimum detection SNR | 12 dB
The number of antenna arrays | 8
Hardware system loss | 6 dB
Noise factor | 5 dB
Reference temperature | 290 K

5.1. The Solution of PSCP

In the simulation, free-space propagation loss is treated as an example, and the matrix Θ can be calculated according to (4). In the first simulation, we compare the solving algorithms of the PSCP. There are six methods for solving the PSCP: (a) solve (13) using the l1 norm surrogate method; (b) solve (13) using Algorithm 3; (c) solve (13) using Algorithm 3 and Algorithm 4; (d) solve (15) using the l1 norm surrogate method; (e) solve (15) using Algorithm 3; (f) solve (15) using Algorithm 3 and Algorithm 4. In the simulation, the threshold is set as w_mid = [w*(1) + w*(NJ)]/2.

The solutions of the former three methods are shown in Figure 3, and the solutions of the latter three methods are shown in Figure 4. In the two figures, the first 81 indices represent the candidate receivers working with frequency 1, and the latter 81 indices represent those working with frequency 2. As expected, the solution of the reweighted l1 minimization method has fewer nonzero placement variables than the solution of the l1 norm surrogate method; the nonzero placement variables of the reweighted l1 minimization method are sparse. When Algorithm 4 is used, the nonzero placement variables of the solution are further reduced, yielding a feasible solution of the PSCP.

Figure 3. The solutions of (13) using three methods.


Figure 4. The solutions of (15) using three methods.

In addition, the solution for (15) has fewer non-zero placement variables than that for (13), which indicates that the solution for (15) is more likely to be the optimal solution of the PSCP. In this simulation, w_mid = 7.97 dBm². The objective function values can be obtained when these two solutions are substituted back into (10). The objective function value of (13) is 7.94 dBm², while the other is 7.29 dBm². The model in (13) requires seven receivers, though its objective function value is larger than that of (15), thereby validating the suitability of (15) for solving the PSCP as compared to (13).

MC simulations are conducted to test the adaptability of the model for solving the PSCP, wherein the candidate transmitters are randomly generated for each MC simulation. The other system parameters are shown in Tables 1 and 2. A thousand MC simulations are conducted, and w_mid is set as the average of the upper and lower bounds. The MC simulation results are shown in Figure 5. Figure 5a shows the solution distribution for both (13) and (15). 87.5% of the solutions have five or six placed nodes using (15), and the probability of acquiring five placed nodes using (15) is higher than for (13). A comparison between these two models is presented in Figure 5b. Of the solutions from (15), 57.6% are equal to those from (13), and 32.6% are better than those from (13); the remaining 9.8% of the solutions exhibit the opposite result. Thus (15) is statistically more suitable for solving the PSCP. Since (15) is only statistically better, both models should be used to solve the PSCP, of which the better solution will be deemed acceptable.

Figure 5. The MC simulation results with randomly generated candidate transmitters. (a) Distribution of the solutions of (13) and (15); (b) Distribution of the difference of the placed node number between the two models.

5.2. The Solution of PPCP

The performance of the algorithm for solving the PPCP is further investigated based on the solution of the PSCP. The simulation scenario is shown in Figure 2. Assuming that four receivers are placed, the implementation process of Algorithm 1 is demonstrated in Figure 6.

Figure 6. The curves of the upper and lower bounds varying with the number of iterations.

The upper and lower bounds approach each other to obtain the final optimal objective function value. In this simulation, w*(4) = 9.56 dBm². The optimal configuration, marked by color-filled squares, is illustrated in Figure 7. The red-filled squares represent the receivers that work at the first frequency and the blue-filled squares indicate the receivers using the second frequency.

The contrast between the existing algorithms and the proposed algorithm is analyzed in this paper. The comparative indicator is the curve of the objective function value changing with the number of iterations. The comparison algorithms in the simulation include the genetic algorithm (GA) [36] and the SA. The comparison can be divided into four types: Type 1, GA with initial population 1; Type 2, GA with initial population 2; Type 3, SA with initial value 1; Type 4, SA with initial value 2. The initial values and the initial populations are randomly generated.

Figure 7. The optimal solution of the PPCP.

Some representative results are selected and demonstrated in Figure 8. Figure 8a demonstrates that the GA is able to achieve the optimal solution after several iterations. However, it is time consuming, and the probability of achieving the optimal solution is less than 10%. Figure 8b demonstrates some near-optimal solutions of the SA; namely, the SA fails to obtain the global optimal solution in 500 simulations. The reason for this result is the premature convergence of the SA.


Figure 8. The curve of the objective function value changing with the number of iterations. (a) The result of GA with different initial populations; (b) The result of SA with different initial values.

Average Time Consumption Average Time Consumption 165.1 s s 165.1 192.8 s s 192.8 8.4 8.4 s s

AAfinal simulation is performed to testto thetest capability for finding global optimum of Algorithm final simulation is performed the capability forthefinding the global optimum 1. of Four simulation scenarios similar to Figure 2 are present in the simulation. In each simulation scenario, Algorithm 1. Four simulation scenarios similar to Figure 2 are present in the simulation. In each different P and K values are considered. comparison algorithm in the simulation includedinthe P and KThe values are considered. The comparison algorithm the simulation scenario, different simulation included the GA and the SA. Both the GA and SA perform a hundred MC simulations. In each MC simulation, the maximum iteration is set to 300 times. The simulation results are shown in Table 4. In Table 4, all the simulations achieve the global optimal solution using Algorithm 1, thereby deeming Algorithm 1 suitable for achieving the optimal solution with Probability 1. The

Table 4. Simulation Results Obtained with Algorithm 1, GA, and SA for PPCP.

Scenario  P  K      Algorithm 1      GA                            SA
                    Optimal Value    Optimal Value   Probability   Optimal Value   Probability
1         4  [4 1]  8.44             8.44            0.10          8.65            0.02
          4  [1 1]  7.42             7.42            0.13          7.42            0.06
          5  [4 1]  7.86             8.18            0.47          7.86            0.05
          5  [1 1]  6.77             6.77            0.13          6.77            0.27
          6  [4 1]  6.72             6.72            0.02          6.72            0.15
          6  [1 1]  5.92             5.92            0.10          5.92            0.36
2         4  [4 1]  8.40             8.40            0.22          8.78            0.68
          4  [1 1]  7.61             7.61            0.22          7.61            0.80
          5  [4 1]  7.97             7.97            0.41          7.97            0.14
          5  [1 1]  7.10             7.10            0.27          7.10            0.80
          6  [4 1]  6.94             6.94            0.01          6.94            0.13
          6  [1 1]  5.54             5.54            0.06          5.54            0.55
3         4  [4 1]  8.88             8.88            0.63          8.88            0.10
          4  [1 1]  7.64             7.64            0.17          7.64            0.08
          5  [4 1]  8.82             8.82            0.68          8.82            0.35
          5  [1 1]  7.43             7.43            0.41          7.43            0.49
          6  [4 1]  5.85             6.74            0.01          6.74            0.06
          6  [1 1]  5.84             5.84            0.01          5.84            0.13
4         4  [4 1]  9.12             9.12            0.88          9.12            0.68
          4  [1 1]  6.91             6.91            0.17          6.91            0.07
          5  [4 1]  7.67             7.67            0.03          7.67            0.08
          5  [1 1]  5.51             5.68            0.04          5.61            0.24
          6  [4 1]  6.91             6.91            0.05          6.91            0.08
          6  [1 1]  4.89             4.89            0.01          4.89            0.15
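The probabilities reported in Table 4 are empirical frequencies taken over the Monte Carlo runs. As a minimal, hedged sketch of how such a frequency could be computed (not the authors' evaluation code), the snippet below assumes a hypothetical callable run_heuristic that wraps one GA or SA run with a random initialization; the success tolerance is likewise an assumption.

```python
import numpy as np

def success_probability(run_heuristic, optimal_value, n_runs=100, tol=1e-2):
    """Empirical probability that a stochastic heuristic reaches the known optimum.

    run_heuristic : callable returning the best objective value of one run
                    (e.g., one 300-iteration GA or SA run)
    optimal_value : global optimum of the PPCP objective (e.g., from Algorithm 1)
    n_runs        : number of independent Monte Carlo repetitions
    tol           : tolerance for declaring a run successful (assumed value)
    """
    results = np.array([run_heuristic() for _ in range(n_runs)])
    best_found = results.min()                               # best value over all runs
    prob = np.mean(np.abs(results - optimal_value) <= tol)   # fraction of successful runs
    return best_found, prob
```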

5.3. Discussion

The following conclusions can be obtained from the numerical simulations.

(1) Algorithm 1 offers a fast, polynomial-approximation iteration strategy for solving the PPCP. The range of the objective function value of the PPCP can be determined with only a few iterations. The number of iterations depends on the required accuracy ε and the characteristics of the objective function. As a result, Algorithm 1 can not only provide the exact solution of the PPCP, but also produce quite a few suboptimal solutions in a rapid decision-making process (a generic sketch of this bisection structure is given after this list).

(2) The robustness of Algorithm 3 needs to be enhanced, because the reliability of the algorithm is closely related to the quality of the convex relaxation. In the process of model conversion, the approximation degree of the convex relaxation model may be affected by the multiband network topology, the transmit power distribution of the multiband PRN, the propagation prediction accuracy, and many other aspects. Fortunately, Algorithm 3 exhibits sufficient robustness in this simulation while solving both conversion models. However, the theoretical performance analysis of models (13) and (15) requires further research. In addition, the existence of a more suitable convex relaxation method for the PSCP deserves further attention.

(3) It is worth pointing out that the iterative reliability of Algorithm 1 relies on the accuracy of the PSCP solution, such that we have to solve both model (11) and model (13). Under this operation, Algorithm 1 almost invariably obtains the optimal solution in the present finite set of simulations. In terms of time consumption and the ability to find the optimal solution, Algorithm 1 has a strong advantage over the GA and the SA.
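The following sketch illustrates, under stated assumptions, the generic structure behind point (1): a bisection search on the objective value in which every step calls a feasibility check. It is not a reproduction of Algorithm 1; pscp_feasible is a hypothetical placeholder for the PSCP solution step, and the iteration count shown is the textbook bisection bound, so it should be read as an upper bound under these assumptions rather than a statement about Algorithm 1 itself.

```python
import math

def bisection_on_objective(pscp_feasible, w_low, w_high, eps):
    """Generic bisection on a scalar objective value (a sketch, not Algorithm 1).

    pscp_feasible : callable; pscp_feasible(w) is True if a configuration meeting
                    the threshold w exists (placeholder for the PSCP solution step)
    w_low, w_high : initial lower and upper bounds on the optimal objective value
    eps           : required accuracy
    """
    # Textbook bound on the number of halvings needed to reach accuracy eps.
    max_iters = math.ceil(math.log2((w_high - w_low) / eps))
    for _ in range(max_iters):
        w_mid = 0.5 * (w_low + w_high)
        if pscp_feasible(w_mid):
            w_high = w_mid   # a feasible configuration exists; tighten the upper bound
        else:
            w_low = w_mid    # infeasible; raise the lower bound
    return w_high            # within eps of the optimal objective value
```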

6. Conclusions

Our research is driven by the practical application of the PRN. Two basic problems in constructing a PRN are discussed in this paper: the placement of receivers and the selection of illuminators, from which a joint optimization model is established. To the best of the authors' knowledge, the established model has not been reported in the literature. In the model, a bisection algorithm is proposed to solve the PPCP. Simulation results indicate the effectiveness of the proposed algorithm. Future work must consider receivers with different costs. Moreover, improvement of the PPCP model based on actual situations is also a valuable research direction.

Acknowledgments: This work was supported by the National Key Research and Development Plan of China (2016YFB0502403), the National Natural Science Foundation of China (61331012, 61371197, U1333106, 61271400, 61661032), and the Fundamental Research Funds for the Central Universities (2015212020202).

Author Contributions: The idea was proposed by Xianrong Wan; Rui Xie simulated the algorithm; Rui Xie and Jianxin Yi designed the experiments; Sheng Hong and Jianxin Yi polished the English; Rui Xie wrote the paper. Thanks are given to Sheng Hong for the suggested corrections.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. A Special Network Topology

We describe a very special network topology for which the equality holds, as follows. If w∗(p) = w∗(p + 1), then w∗(p) = w(p + 1|p) = w∗(p + 1). This equation indicates that the optimal objective function value remains unchanged after the addition of a receiver, such that the new configuration will also produce an optimal configuration.

In Figure A1, the target s1 and the receiver r3 lie on the perpendicular bisector of the two transmitters, and the perpendicular-bisector symmetric points of r1 and r2 are r5 and r4, respectively. In this special topology, r3 was determined as the optimal placement. Therefore, it can be concluded that w∗(1) = w∗(2) = · · · = w∗(5). We further assume the presence of two symmetric targets to deduce w∗(2) = w∗(3) = · · · = w∗(5). The possibility of the inequality holding with equality diminishes as the number of targets rises. Since symmetry in the network topology is necessary, it is difficult to produce this special network topology in practice.


Figure A1. An example of a special network topology diagram.
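To make the symmetry argument above concrete, the following sketch reflects receiver positions across the perpendicular bisector of the two transmitters. It is an illustration only: the coordinates are hypothetical and are not taken from the simulation; they merely reproduce the r1↔r5 and r2↔r4 pairing and the invariance of r3.

```python
import numpy as np

def reflect_across_bisector(p, t1, t2):
    """Reflect point p across the perpendicular bisector of the segment t1-t2.

    The bisector passes through the midpoint of t1 and t2 and is orthogonal to
    the line joining them, so this reflection maps t1 onto t2 and vice versa.
    """
    p, t1, t2 = (np.asarray(v, dtype=float) for v in (p, t1, t2))
    m = 0.5 * (t1 + t2)                          # a point on the bisector
    n = (t2 - t1) / np.linalg.norm(t2 - t1)      # unit normal of the bisector
    return p - 2.0 * np.dot(p - m, n) * n

# Hypothetical coordinates: transmitters on the x-axis, so the bisector is the y-axis.
t1, t2 = (-3.0, 0.0), (3.0, 0.0)
r1, r2, r3 = (-2.0, 1.0), (-1.0, 2.0), (0.0, 2.0)
r5 = reflect_across_bisector(r1, t1, t2)         # -> (2.0, 1.0), the mirror of r1
r4 = reflect_across_bisector(r2, t1, t2)         # -> (1.0, 2.0), the mirror of r2
r3_check = reflect_across_bisector(r3, t1, t2)   # -> (0.0, 2.0), r3 maps onto itself
```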

References

1. Kuschel, H.; O'Hagan, D. Passive radar from history to future. In Proceedings of the 11th International Radar Symposium, Vilnius, Lithuania, 16–18 June 2010; pp. 1–4.
2. Griffiths, H.D.; Baker, C.J. Passive coherent location radar systems. Part 1: Performance prediction. IEE Proc. Radar Sonar Navig. 2005, 152, 153–159. [CrossRef]
3. Baker, C.J.; Griffiths, H.D.; Papoutsis, I. Passive coherent location radar systems. Part 2: Waveform properties. IEE Proc. Radar Sonar Navig. 2005, 152, 160–168. [CrossRef]
4. Malanowski, M.; Kulpa, K.; Misiurewicz, J. PaRaDe—PAssive RAdar DEmonstrator family development at Warsaw University of Technology. In Proceedings of the Microwaves, Radar and Remote Sensing Symposium, Kiev, Ukraine, 22–24 September 2008; pp. 75–78.
5. Zhao, Z.; Wan, X.; Zhang, D.; Cheng, F. An Experimental Study of HF Passive Bistatic Radar via Hybrid Sky-Surface Wave Mode. IEEE Trans. Antennas Propag. 2013, 61, 415–424. [CrossRef]
6. Wan, X.; Yi, J.; Zhao, Z.; Ke, H. Experimental Research for CMMB-Based Passive Radar under a Multipath Environment. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 70–85.
7. Griffiths, H. Multistatic, MIMO and networked radar: The future of radar sensors? In Proceedings of the 7th European Radar Conference, Paris, France, 30 September–1 October 2010; pp. 81–84.
8. Hack, D.E.; Patton, L.K.; Himed, B.; Saville, M.A. Detection in Passive MIMO Radar Networks. IEEE Trans. Signal Process. 2014, 62, 2999–3012. [CrossRef]
9. Edrich, M.; Meyer, F.; Schroeder, A. Design and performance evaluation of a mature FM/DAB/DVB-T multi-illuminator passive radar system. IET Radar Sonar Navig. 2014, 8, 114–122. [CrossRef]
10. Godrich, H.; Haimovich, A.M.; Blum, R.S. Target Localization Accuracy Gain in MIMO Radar-Based Systems. IEEE Trans. Inf. Theory 2010, 56, 2783–2803. [CrossRef]
11. Zhou, J.; Wang, F.; Shi, C. Cramér-Rao bound analysis for joint target location and velocity estimation in frequency modulation based passive radar networks. IET Signal Process. 2016, 10, 780–790.
12. Shi, C.G.; Salous, S.; Wang, F.; Zhou, J.J. Modified Cramér-Rao lower bounds for joint position and velocity estimation of a Rician target in OFDM-based passive radar networks. Radio Sci. 2017, 52, 15–33. [CrossRef]
13. Majd, M.N.; Chitgarha, M.M.; Radmard, M.; Nayebi, M.M. Probability of missed detection as a criterion for receiver placement in MIMO PCL. In Proceedings of the Radar Conference, Atlanta, GA, USA, 7–11 May 2012; pp. 0924–0927.
14. Radmard, M.; Nayebi, M.M.; Karbasi, S.M. Diversity-Based Geometry Optimization in MIMO Passive Coherent Location. Radioengineering 2014, 23, 41–49.
15. Godrich, H.; Petropulu, A.P.; Poor, H.V. Sensor Selection in Distributed Multiple-Radar Architectures for Localization: A Knapsack Problem Formulation. IEEE Trans. Signal Process. 2012, 60, 247–260. [CrossRef]
16. Yi, J.; Wan, X.; Leung, H.; Lu, M. Joint Placement of Transmitters and Receivers for Distributed MIMO Radars. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 122–134. [CrossRef]
17. Radmard, M.; Nayebi, M.M. Target tracking and receiver placement in MIMO DVB-T based PCL. Iran. J. Sci. Technol. Trans. Electr. Eng. 2015, 39, 23–37.
18. Gong, X.; Zhang, J.; Cochran, D.; Xing, K. Optimal Placement for Barrier Coverage in Bistatic Radar Sensor Networks. IEEE/ACM Trans. Netw. 2016, 24, 259–271. [CrossRef]
19. Tang, L.; Gong, X.; Wu, J.; Zhang, J. Target Detection in Bistatic Radar Networks: Node Placement and Repeated Security Game. IEEE Trans. Wirel. Commun. 2013, 12, 1279–1289. [CrossRef]
20. Shi, C.; Wang, F.; Sellathurai, M.; Zhou, J. Transmitter Subset Selection in FM-Based Passive Radar Networks for Joint Target Parameter Estimation. IEEE Sens. J. 2016, 16, 6043–6052. [CrossRef]
21. Xie, R.; Wan, X.; Hong, S.; Yi, J. Optimal placement for K-coverage in passive radar network. Signal Process. 2017, submitted for publication.
22. Kariv, O.; Hakimi, S.L. An Algorithmic Approach to Network Location Problems. I: The p-Centers. SIAM J. Appl. Math. 1979, 37, 513–538. [CrossRef]
23. Mladenović, N.; Labbé, M.; Hansen, P. Solving the p-Center problem with Tabu Search and Variable Neighborhood Search. Networks 2003, 42, 48–64. [CrossRef]
24. Pacheco, J.A.; Casado, S. Solving two location models with few facilities by using a hybrid heuristic: A real health resources case. Comput. Oper. Res. 2005, 32, 3075–3091. [CrossRef]
25. Albareda-Sambola, M.; Díaz, J.A.; Fernández, E. Lagrangean duals and exact solution to the capacitated p-center problem. Eur. J. Oper. Res. 2010, 201, 71–81. [CrossRef]
26. Aykut Özsoy, F.; Pınar, M.Ç. An exact algorithm for the capacitated vertex p-center problem. Comput. Oper. Res. 2006, 33, 1420–1436. [CrossRef]
27. Vasko, F.J.; Lu, Y.; Zyma, K. What is the best greedy-like heuristic for the weighted set covering problem? Oper. Res. Lett. 2016, 44, 366–369. [CrossRef]
28. Ren, Z.-G.; Feng, Z.-R.; Ke, L.-J.; Zhang, Z.-J. New ideas for applying ant colony optimization to the set covering problem. Comput. Ind. Eng. 2010, 58, 774–784. [CrossRef]
29. Gao, C.; Yao, X.; Weise, T.; Li, J. An efficient local search heuristic with row weighting for the unicost set covering problem. Eur. J. Oper. Res. 2015, 246, 750–761. [CrossRef]
30. Balas, E.; Carrera, M.C. A Dynamic Subgradient-Based Branch-and-Bound Procedure for Set Covering. Oper. Res. 1996, 44, 875–890. [CrossRef]
31. Joshi, S.; Boyd, S. Sensor Selection via Convex Optimization. IEEE Trans. Signal Process. 2009, 57, 451–462. [CrossRef]
32. Chepuri, S.P.; Leus, G. Sparsity-Promoting Sensor Selection for Non-Linear Measurement Models. IEEE Trans. Signal Process. 2015, 63, 684–698. [CrossRef]
33. Chepuri, S.P.; Leus, G. Continuous Sensor Placement. IEEE Signal Process. Lett. 2015, 22, 544–548. [CrossRef]
34. Ma, B.; Chen, H.; Sun, B.; Xiao, H. A Joint Scheme of Antenna Selection and Power Allocation for Localization in MIMO Radar Sensor Networks. IEEE Commun. Lett. 2014, 18, 2225–2228. [CrossRef]
35. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing Sparsity by Reweighted ℓ1 Minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [CrossRef]
36. Pasandideh, S.H.R.; Niaki, S.T.A. Genetic application in a facility location problem with random demand within queuing framework. J. Intell. Manuf. 2012, 23, 651–659. [CrossRef]

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
