Increasing Sensor Measurements to Reduce Detection Complexity in Large-Scale Detection Applications

Yaron Rachlin∗, Narayanaswamy Balakrishnan∗, Rohit Negi∗, John Dolan†, and Pradeep Khosla∗†
∗ Department of Electrical and Computer Engineering
† The Robotics Institute
Carnegie Mellon University, Pittsburgh, PA 15213
Email: {rachlin@ece, muralib@cs, negi@ece, jmd@cs, pkk@ece}.cmu.edu

Abstract— Large-scale detection problems, where the number of hypotheses is exponentially large, characterize many important sensor network applications. In such applications, sensors whose output is simultaneously affected by multiple target locations in the environment pose a significant computational challenge. Conditioned on such sensor measurements, separate target locations become dependent, requiring computationally expensive joint detection. Therefore there exists a tradeoff between the computational complexity and accuracy of detection. In this paper we demonstrate that this tradeoff can be altered by collecting additional sensor measurements, enabling algorithms that are both accurate and computationally efficient. We draw the insight for this tradeoff from our work on the sensing capacity of sensor networks, a quantity analogous to the channel capacity in communications. To demonstrate this tradeoff, we apply sequential decoding algorithms to a large-scale detection problem using a realistic infrared temperature sensor model and real experimental data. We explore the tradeoff between the number of sensor measurements, accuracy, and computational complexity. For a sufficient number of sensor measurements, we demonstrate that sequential decoding algorithms have sharp empirical performance transitions, becoming both computationally efficient and accurate. We provide extensive comparisons with belief propagation and a simple heuristic algorithm. For a temperature sensing application, we empirically demonstrate that given sufficient sensor measurements, belief propagation has exponential complexity and sequential decoding has linear complexity in sensor field of view. Despite this disparity in complexity, sequential decoding was significantly more accurate.

I. INTRODUCTION

Large-scale detection problems characterize sensor network applications where the number of hypotheses is exponentially large. One example of such an application is vehicle detection and classification using seismic sensors, as described in [1]. In such an application, we can model the area being monitored as a discrete grid. Each grid location can contain one of several types of vehicles. The number of possible vehicle configurations is therefore exponentially large. Seismic sensors can sense vibrations over a large area, and are thus affected by multiple vehicles simultaneously. As a result, measurements from a single sensor cannot distinguish among multiple vehicle configurations. Therefore, one must fuse multiple sensor measurements to detect the state of the environment. The fact that sensors are affected by multiple parts of the environment

at the same time is a common thread in many large-scale detection applications. In another example, [2] describes the use of wide-angle sonars to map an indoor environment in a robotic mapping application. Such sonar sensors emit a wide pulse and experience multi-path effects. Thus, their time-of-flight measurements are affected by multiple obstacles in the environment. Finally, [3] uses an array of infrared (IR) temperature sensors to detect pedestrians. An IR temperature sensor absorbs IR radiation from multiple objects in its field of view simultaneously. In all of the preceding examples, sensors are affected by multiple parts of the environment at the same time. Detecting the state of the environment based on the measurements generated by such sensors poses a significant computational challenge. The reason for this difficulty is that, conditioned on such sensor measurements, multiple parts of the environment become dependent and thus must be detected jointly. Given the computational complexity of jointly detecting multiple parts of the environment, there is a tradeoff between accurate algorithms with high computational complexity and inaccurate algorithms that are less computationally expensive. For large-scale detection applications, this dilemma can be avoided by choosing sensors with a narrow field of view and low noise. However, such sensors can be more expensive and necessitate a large number of sensor measurements (due to their narrow field of view). For example, consider this tradeoff in the application of robot mapping. In practice, inaccurate but computationally efficient algorithms (e.g. occupancy grids [4]) are combined with high-quality, expensive sensors (e.g. SICK laser range finders¹) to achieve good mapping results. Accurate but computationally expensive algorithms can enable accurate sensing with cheap, large field of view sensors such as Senscomp sonars², but can be too computationally intensive for real-time implementation (e.g. [5],[6]).
In [7], we demonstrate a novel relationship between the number of sensor measurements and the computational complexity of detection in a sensor network. In preliminary simulations, we present a detection algorithm, sequential decoding, whose complexity drops sharply above a certain number

¹ SICK website: http://www.sick.com/ecatalog/en.html
² Senscomp 600 Series Instrument Transducer: www.senscomp.com


of sensor measurements. In this paper, we build on previous results to demonstrate the promise of sequential decoding for a more realistic sensor model and real sensor data. We focus on the problem of using a cheap IR temperature sensor to accurately detect multiple hot targets in a discrete field. This problem is important in applications such as pedestrian detection, autonomous search and rescue operations, and home monitoring. In contrast to a point temperature sensor, an IR temperature sensor absorbs incoming IR radiation from a wide area. Thus, when sensing the temperature of an object that partially occupies its field of view, the sensor’s output is also affected by the temperature of other nearby objects. How can we detect efficiently and accurately using such sensors? We approach this problem using a connection between error correcting codes and sensor networks, developed in prior work ([8],[9],[10]). Drawing on a connection between large-scale detection problems and the problem of communicating over a noisy channel, we defined and bounded the ‘sensing capacity’ of a sensor network, a quantity that is analogous to the channel capacity. While the channel capacity bounds the number of bits required to transmit a set of messages with arbitrarily low error over a communication channel, the sensing capacity of a sensor network bounds the number of sensor measurements required to detect the state of an environment in a large-scale detection application to within a desired accuracy. In [7] we describe the similarity between the finite memory of convolutional codes [11] and the range of a sensor in a sensor network. For convolutional codes with large memory, accurate maximum likelihood (ML) decoding is computationally intractable. However, at communication rates sufficiently below the channel capacity, large memory convolutional codes can be decoded efficiently using sequential decoding algorithms (first introduced in [12]).
The communication rate below which sequential decoding is both accurate and efficient is called the computational cutoff rate. In [7] we apply the insight of a computational cutoff rate to the context of detection in sensor networks. By extending the idea of sequential decoding to sensor networks, we demonstrated that for a sufficiently large number of sensor measurements, that is, at rates sufficiently below the sensing capacity, efficient and accurate detection is feasible even for sensors with larger range. The claims in [7] were supported by preliminary simulation results using a simple range sensor model. In this paper, we test our sequential decoding algorithm using a realistic IR temperature sensor model and real experimental data. We explore the tradeoff between the number of sensor measurements, accuracy, and computational complexity of detection using sequential decoding in sensor networks. For our realistic sensor model, we show that for a sufficient number of sensor measurements, sequential decoding exhibits a sharp empirical performance transition, becoming both computationally efficient and accurate. To better accommodate the large fields of view of the sensors used in this paper, we introduce an approximation step in the sequential decoding algorithm that speeds up its execution. We provide extensive comparisons with other algorithms, such as belief propagation (BP) [13], a well-known algorithm used

for inference in graphical models. For a temperature sensing application, we empirically demonstrate that while BP has exponential complexity in sensor field of view, sequential decoding has linear complexity in sensor field of view (for a sufficient number of sensor measurements). Despite this disparity in complexity, sequential decoding was found to be much more accurate for these large field of view sensors. We also compare with a fast and simple greedy bit-flipping (BF) algorithm, inspired by simple decoding algorithms used to decode LDPC codes [14]. Finally, we show that the behavior of sequential decoding across different simulations provides interesting insights regarding sensor selection.

We begin with an overview of related problems and approaches in Section II. In Section III we discuss the idea of sequential decoding as applied to distributed sensing. In Section IV we show simulation results that characterize sequential decoding performance and computational costs, compare sequential decoding to other algorithms, and empirically demonstrate a computational cutoff rate. In Section V we demonstrate detection performance using data collected in an experiment using large field of view temperature sensors. We conclude and discuss future work in Section VI.

II. RELATED WORK

ML or maximum a posteriori (MAP) detection using multiple measurements can be very computationally expensive, due to the large-scale dependencies that many types of sensors induce in the environment. Researchers have proposed capturing the structure of these statistical dependencies using the formalism of graphical models. In a graphical model, the sensors and the environment are modeled as a graph, e.g. with one node for every sensor and every discrete sensed region. Edges connect sensors to the parts of the environment which they observe. [15],[16],[17] propose message passing algorithms in such models as a means for fusing sensor measurements.
Such algorithms are promising for sensors that sense a small number of random quantities, such as temperature and bias in a point temperature sensor [15]. These algorithms suffer from exponential complexity in the largest degree of the graph or the largest clique in an induced graph. However, many sensors induce precisely such large degrees and cliques. For example, the IR temperature sensors used in this paper have a large field of view and are therefore simultaneously connected to a large number of nodes in the graphical model. We review a number of related approaches to using sensor measurements for detecting the state of an environment modeled as a set of random variables. Since such sensor networks can be modeled as graphical models, we examine related work according to the type of inference it represents in a graphical model. Inference in graphical models can be broken into three large categories: exact, sampling, and variational [18]. Exact algorithms compute the exact probabilities of variables in a graphical model by exploiting graph structure. An example of such a method, the junction tree algorithm, is applied by [15] to various problems in sensor networks. Algorithms such as the junction tree algorithm have a computational complexity that is exponential in the largest clique size induced while


Fig. 1. Illustration of the operation of sequential decoding.

constructing a junction tree from the original graph. Sampling approaches use ideas such as importance sampling and Markov Chain Monte Carlo (MCMC) methods to conduct inference in graphical models [18]. [19] applies an MCMC method to the problem of answering queries in a sensor network modeled as a Bayesian network. Finally, the variational approach recasts inference as an optimization problem for a simplified model [18]. One example of this approach is loopy belief propagation, which [17] applied to inference in sensor networks. BP has complexity that is exponential in the largest degree in the graph. [20] uses sampling-based approximations of belief propagation messages to apply belief propagation to the problem of sensor self-calibration in sensor networks. In addition to these graphical model approaches, occupancy grids [4] are commonly used in robotics for detecting the state of environments modeled as discrete fields. Occupancy grids solve the detection problem by assuming a simple model that eliminates many of the true dependencies in the original graphical model. While this results in a loss of accuracy, it also yields low computational complexity.

III. SEQUENTIAL DECODING FOR SENSOR NETWORKS

We briefly review a sequential decoding algorithm adapted for use in sensor networks. A detailed description and derivation can be found in [7]. We represent the environment by a set of binary random variables F1, F2, ..., Fk, e.g. one variable for each discrete location in a sensed field. Extensions to non-binary, discrete random variables require a simple modification of the algorithm. Sequential decoding can be viewed as a search on a tree, as shown in Fig. 1. Each path of length i along this tree corresponds to a hypothesis which assigns either a 1 or a 0 to each of the field variables F1, F2, ..., Fi. From any node in the tree (except leaf nodes), there are two possible paths, corresponding to assigning a 1 or a 0 to the variable associated with that node. For example, in Fig. 1 the path h1 corresponds to assigning F1 = 0, F2 = 0, F3 = 0 and the path h2 corresponds to F1 = 0, F2 = 0, F3 = 1. Sequential decoding maintains a list of active paths, and at every step chooses to extend the path with the highest path metric. The path metric used in sequential decoding algorithms for convolutional codes is the MAP probability of the bits received thus far, and is known as the Fano metric [11]. [7] derives an approximation to the MAP probability of a path for the graph associated with a sensor network. Based on this metric, the algorithm removes the path with the highest path metric from the list, extends that path to form two new paths, and inserts the new paths into the list. A path is extended by setting the value of the next undetermined variable to 0 and 1, thus generating two new paths. For example, path h4 in Fig. 1 would be extended to F1 = 1, F2 = 0 and F1 = 1, F2 = 1. The new list is sorted according to the value of the path metric associated with each path in the list. The algorithm extends the path with the highest path metric in the new list. This process terminates when a path of length k has the highest path metric in the list. We note that although the algorithm greedily chooses the path with the highest path metric, sorting the list at each turn allows for backtracking while searching the tree.

The path metric is derived by evaluating the posterior probability of a path hp through the tree, conditioned on the sensor deployment D and the noisy sensor observations y. We denote the path metric as μ, and we focus on environments in which the variables Fi are i.i.d. with PF(Fi = 1) = t and PF(Fi = 0) = 1 − t. Using Bayes’ rule, the posterior probability that the first kp variables, denoted F^{kp}, equal hp given D and y is written as

P(F^{k_p} = h_p \mid D, y) = \frac{P(y \mid D, h_p) P(h_p \mid D)}{P(y \mid D)}    (1)

For any path hp, the set of sensors is split into the sets {Y1, Y2, Y3}. A sensor’s scope is defined as the set of discrete variables in the environment which it observes. Y1 represents the set of sensors whose scope is completely specified by the path, Y2 represents the set of sensors whose scope is partially specified by the path, and Y3 corresponds to the set of sensors whose scope is completely unspecified by the current path. Using these definitions, [7] approximates (1) and uses it to define the path metric as

\mu(h_p, D, y) = \sum_{i_1 \in Y_1} \log \frac{P_{Y|X}(y_{i_1} \mid x_{i_1})}{P_Y(y_{i_1} \mid D)} + \sum_{i_2 \in Y_2} \log \frac{P(y_{i_2} \mid h_p, D)}{P_Y(y_{i_2} \mid D)} + k_p^0 \log(1 - t) + k_p^1 \log t    (2)

The second term in (2), \sum_{i_2 \in Y_2} \log \frac{P(y_{i_2} \mid h_p, D)}{P_Y(y_{i_2} \mid D)}, is a sum over the set of sensors with partially specified inputs. P(y_{i_2} \mid h_p, D) in this sum is computed by marginalizing over the sensor inputs not specified by the current path but in the scope of sensor i2. In the worst case, computing this term is exponential in the largest degree of the graph. This worst case does not occur for some sensor types (e.g. range sensors) such as those used in [7]. However, this marginalization becomes expensive for other sensor types, such as the temperature sensors used in this paper. To overcome this, rather than carrying out the exact marginalization, we compute the term using a simple approximation. Using the fast bit-flipping algorithm (described in Section IV-C), we generate a small number of assignments to the unspecified field locations, compute the probabilities of sensor observations under these assignments, and average these probabilities to obtain an estimate of P(y_{i_2} \mid h_p, D).
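The list-based search described above can be sketched in a few lines of Python. This is a minimal illustration, not the implementation evaluated in this paper: the `path_metric` callable is a placeholder standing in for an evaluation of the path metric of equation (2), and the step cap mirrors the convergence budget used in Section IV-B.

```python
import heapq

def sequential_decode(k, path_metric, max_steps=2000):
    """Best-first search over the binary hypothesis tree of Fig. 1.

    k           -- number of field variables F_1..F_k
    path_metric -- callable scoring a partial assignment (tuple of 0/1),
                   approximating its posterior probability; supplied by
                   the caller (a placeholder for equation (2))
    Returns the first full-length path to reach the top of the list, or
    None if the step budget is exhausted without convergence.
    """
    # heapq is a min-heap, so metrics are negated to pop the best path.
    heap = [(-path_metric(()), ())]
    for _ in range(max_steps):
        neg_metric, path = heapq.heappop(heap)
        if len(path) == k:          # a complete path won: terminate
            return path
        # Extend the best partial path by assigning 0 and 1 to the next
        # undetermined variable, and reinsert both extensions.  Keeping
        # all other paths in the heap is what permits backtracking.
        for bit in (0, 1):
            new_path = path + (bit,)
            heapq.heappush(heap, (-path_metric(new_path), new_path))
    return None
```

With a toy metric that counts agreements with a fixed target field, the search recovers the target without backtracking, since every correct extension strictly increases the metric.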

Fig. 2. An illustration of the sensor application. An IR temperature sensor senses a discrete temperature field. The sensor field of view is limited using an aperture. White grid blocks are at ambient temperature and black grid blocks are at a higher target temperature.

IV. SIMULATIONS

In this section we empirically demonstrate the concept of a computational cutoff rate and evaluate our sequential decoding algorithm in a simulated large-scale detection application. We also compare the performance of sequential decoding to other algorithms. In our simulations, we use an IR temperature sensor to sense a binary field, as shown in Fig. 2. The field consists of grid blocks that are either at ambient temperature or at some target temperature. We set the ambient temperature at 27◦C and targets at 127◦C. Using our simulated IR sensor, we take measurements of the field from different locations and orientations. Detecting the state of the grid using these sensor measurements poses a significant computational challenge due to the large number of grid blocks that affect each sensor measurement. Since the number of grid states is exponentially large, this is a large-scale detection problem. For example, consider a discrete 250mm × 250mm field composed of 25mm × 25mm blocks. A sensor with a 26◦ field of view at a distance of 250mm from the plane is affected by as many as 50 grid blocks at a time. Further, the number of possible grid states is 2^100. As a result, detecting the state of the binary field is a computationally difficult problem.

A. Sensor Model

The IR temperature sensor modeled in our simulation and used in our experiments is the Melexis MLX90601KZA-BKA³. To motivate our sensor model we briefly discuss the physics associated with this type of sensor [21]. A common way to understand the radiation characteristics of an object is to model it as a black body. A black body is an object that absorbs and emits radiation at all wavelengths, and whose emission curves are completely determined by its temperature. The Stefan-Boltzmann law governs the total power per unit area radiated by a black body, J, which is given by

J = \sigma T^4    (3)

where σ is the Stefan-Boltzmann constant and T is the object temperature. The distribution of radiated power as a function of wavelength depends on the object’s temperature. Objects at 400◦C emit visible radiation. Lower temperature objects emit more power at longer wavelengths. Hence, to measure the temperature of objects at lower temperatures, we have to use sensors that are sensitive to much longer wavelengths. IR sensors respond to incident radiation in the IR spectrum, and hence can sense radiation from objects with lower temperatures. For example, the MLX90601KZA-BKA sensor is sensitive to temperatures from −40◦C to +125◦C.

³ http://www.melexis.com

The amount of energy that arrives at the sensor is a function of its field of view. In our simulation, we vary the sensor’s field of view. The sensor receives different fractions of energy based on the angle between the target and the centerline of the sensor’s field of view. An example of the amount of energy sensed as a function of incidence angle is shown in Fig. 3 for a sensor with a field of view of 26◦. A target that is directly in front of the sensor generates a higher reading than a target at the same temperature not centered in the field of view. We model the sensor as a set of weights, one for each grid block in its field of view. The weights depend on the orientation of the sensor and the angle between its center and each grid block. The sensor weights are normalized, and are used to average the temperatures of the objects in the sensor’s field of view to produce a weighted average temperature Tav. Thus, the power reaching the IR sensor is proportional to Tav^4. However, the sensor itself generates radiation. Assuming the sensor is at the ambient temperature of the environment Tam, the sensor emits radiation with power proportional to Tam^4. Therefore, we model the output signal of the sensor as proportional to Tav^4 − Tam^4. The sensor’s output is noisy; to model this, we corrupt the sensor model’s output with zero-mean Gaussian noise whose variance is estimated from real sensor data. We do not model effects such as diffusion of heat from the target to the environment, and we assume that the targets are ideal Lambertian sources that radiate energy equally in all directions.

Fig. 3. Fraction of energy received by sensor as a function of incidence angle.
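The sensor model just described can be sketched as follows. This is a simplified illustration under the stated assumptions: temperatures are in Kelvin, the weight profile and noise level are placeholder inputs (in the paper they come from the Fig. 3 profile and from real sensor data), and the constant of proportionality is taken to be the Stefan-Boltzmann constant for concreteness.

```python
import random

STEFAN_BOLTZMANN = 5.670e-8  # sigma, W / (m^2 K^4)

def ir_sensor_reading(block_temps, weights, t_ambient, noise_std=0.0):
    """Simulated IR temperature sensor output (arbitrary units).

    block_temps -- temperatures (K) of the grid blocks in the field of view
    weights     -- per-block sensitivity weights; normalized here so
                   that they sum to one
    t_ambient   -- ambient (and sensor) temperature (K)

    The output is proportional to Tav^4 - Tam^4, where Tav is the
    weighted average temperature of the blocks in view, optionally
    corrupted by zero-mean Gaussian noise as in the model above.
    """
    total = float(sum(weights))
    t_av = sum(w * t for w, t in zip(weights, block_temps)) / total
    signal = STEFAN_BOLTZMANN * (t_av ** 4 - t_ambient ** 4)
    if noise_std > 0.0:
        signal += random.gauss(0.0, noise_std)
    return signal
```

For example, a field entirely at an ambient 300 K (about 27◦C) produces a zero noiseless reading, while a 400 K (about 127◦C) target half-filling the field of view produces a positive one.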


Fig. 4. Average number of steps until convergence of sequential decoding as a function of the number of sensor measurements for sensors with different fields of view.

B. Complexity Transition of Sequential Decoding We ran simulations using the sensor model described in Section IV-A. For all graphs of simulation results presented in this paper, the points represent average values. For each point in a graph of simulation results, we generated 10 random sensor deployments and measured performance on 10 randomly generated target configurations for each deployment, resulting in 100 trials per data point. Each target configuration consisted of a 10 × 10 grid, and was generated by placing a target in each grid block with probability 0.3. Fig. 4 demonstrates the computational complexity of sequential decoding as a function of the number of sensor measurements, for sensors with different fields of view. For a small number of sensor measurements, the algorithm does not converge on average within the allotted number of maximum steps, which was set to two thousand (the convergence criterion is described in Section III). Above a certain number of measurements, we observe a sharp drop in the average number of steps required for the algorithm to converge. The number at which this drop occurs depends on the sensor field of view. Thus, the algorithm’s complexity decreases with an increase in the number of sensor measurements. This behavior is similar to the computational cutoff phenomenon observed when using sequential decoding for convolutional codes in communications. For a sufficiently large number of sensor measurements, the average number of steps required for convergence approaches 100, which is equal to the number of grid blocks in the field. This indicates that the algorithm does not backtrack often for a sufficient number of sensor measurements. Interestingly, sensors with a ten degree field of view experience this transition earlier than sensors with either a five degree or a fifteen degree field of view. The detection accuracy of sequential decoding for these simulations is shown in Fig. 5. 
This graph also shows an interesting relationship, with sensors with a field of view of ten degrees achieving the lowest error rate as we vary the number of sensor measurements. A priori, one might conjecture that sensors with the smallest field of view would have an advantage by enabling low estimation complexity. One might also conjecture that sensors with a large field of view would have an advantage in error rate by

Fig. 5. Empirical detection error of sequential decoding as a function of the number of sensor measurements for sensors with different fields of view.

observing grid blocks multiple times due to overlapping sensor observations. Our simulation reveals that an intermediate field of view can have both lower computational complexity and higher accuracy.

C. Comparison with Other Algorithms

To provide a context for the performance of sequential decoding, we study the effect of different parameters on the running time and accuracy of bit-flipping, loopy belief propagation, and sequential decoding. We implemented all algorithms in Matlab. The first algorithm, bit-flipping, is a simple and fast algorithm. It starts with a random estimate of the field and progressively improves this estimate by flipping one grid block at a time. For each field location, we compare the likelihood of that location containing a target versus the likelihood of it being unoccupied, assuming that our estimate of all other field locations is correct. We then set that field location to the value that gives the higher likelihood. We cycle through all the locations in the field based on a random schedule. This bit-flipping can be repeated iteratively until convergence or until a fixed maximum number of iterations is reached. The second algorithm, loopy belief propagation, is an algorithm for approximate inference in graphical models. We first convert the sensor network into a graph, with one node for each sensor and one node for each block in the field. The graph has directed edges from each field node to every sensor that observes that field location. We also specify Gaussian probability distributions for each sensor, for every configuration of sensed nodes. We use the Bayes Net Toolbox [22], which is free to download and has a fast implementation of BP in Matlab, one version of which is optimized for the conditional Gaussian nodes required for this simulation. Another standard algorithm for inference in sensor network mapping algorithms is the junction tree algorithm.
However, we found that we were unable to run this algorithm using the Bayes Net Toolbox for even modest fields of view due to memory problems. Hence, we do not compare the junction tree algorithm with sequential decoding.

Fig. 6. Running times of belief propagation and sequential decoding as a function of the field of view.

Fig. 7. Empirical error rates of belief propagation, bit flipping, and sequential decoding as a function of field of view.

Fig. 8. 3D plot of measurements obtained by scanning the IR temperature sensor. Higher sensor output corresponds to a higher temperature.

Fig. 9. 2D plot of measurements obtained by scanning the IR temperature sensor.

We study the effect of changing the field of view and the number of sensor measurements to compare the various algorithms described earlier. When the field of view is increased, the number of grid blocks sensed by the sensor with each measurement increases. Fig. 6 shows that while the running time of belief propagation increases exponentially with the field of view of the sensor, the time taken by sequential decoding increases only linearly. Bit-flipping was limited to ten iterations and therefore ran in constant time (15 seconds). In these experiments the number of measurements was fixed at 400. We also analyze the error rates of these algorithms. Significantly, despite having a much smaller computational complexity, sequential decoding achieves lower error rates than belief propagation. Similarly, sequential decoding outperforms the bit-flipping algorithm. The disparity in accuracy between sequential decoding and the other two algorithms increases with increasing field of view. As in Section IV-B, we observe that the relationship between field of view and error rate is not monotonic. As shown in Fig. 7, sensors with a field of view of 9 degrees achieved the lowest error among the fields of view tested when using sequential decoding. Thus, choosing sensors with either the smallest or largest field of view may not be optimal.
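For reference, the greedy bit-flipping baseline described above can be sketched as follows. This is an illustrative sketch rather than the Matlab implementation used in the comparison; the `log_likelihood` callable is a placeholder that scores a complete field assignment against all sensor measurements.

```python
import random

def bit_flip_decode(k, log_likelihood, max_iters=10, rng=None):
    """Greedy bit-flipping over a binary field of k locations.

    Starting from a random field estimate, repeatedly visit the
    locations in a random order and, holding all other locations
    fixed, keep whichever value (0 or 1) yields the higher likelihood.
    Stops at convergence (a full pass with no flips) or after
    max_iters passes.
    """
    rng = rng or random.Random()
    field = [rng.randint(0, 1) for _ in range(k)]
    for _ in range(max_iters):
        changed = False
        order = list(range(k))
        rng.shuffle(order)               # random visiting schedule
        for i in order:
            old = field[i]
            field[i] = 0
            score0 = log_likelihood(field)
            field[i] = 1
            score1 = log_likelihood(field)
            field[i] = 1 if score1 > score0 else 0
            changed = changed or (field[i] != old)
        if not changed:                  # converged
            return field
    return field
```

With a separable objective, e.g. a negated Hamming distance to a fixed target field, each flip is independently correct and a single pass recovers the target; real sensor likelihoods couple the locations, which is why bit-flipping can get stuck in local optima.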

V. EXPERIMENTS

We conduct an experiment using the sensor modeled in our simulations. We set the sensor field of view at 26◦ by adding an aperture as shown in Fig. 2. We used the model shown in Fig. 3 for the sensor’s sensitivity to radiation at different incidence angles. We obtained this model by fitting a function to the points provided in the sensor data sheet corresponding to the aperture we used. In our experiments we used chemical heat pads as targets and pinned them on a flat surface. These pads achieved a temperature of about 50◦C for an extended period of time, while the ambient temperature in the room was 23.7◦C. The sensor was placed at a distance of 250mm from the plane and collected 1600 measurements of a 500mm × 500mm region. The sensor was mounted on two servo motors that panned and tilted the sensor. An example scan is shown in Fig. 8 and Fig. 9. Due to heat diffusion from the hot targets to nearby regions of the board, which is absent from our model, we apply a threshold to the data. We processed the data using sequential decoding to obtain the detected field shown in Fig. 10. The ground truth is shown in Fig. 11. Using sequential decoding, only 2.7% of the grid blocks were misclassified with a sensor whose field of view

7

Fig. 10. Sequential decoding estimate of target configuration using experimental data.

Fig. 11. Ground truth from our experimental setup.

VI. CONCLUSION AND FUTURE WORK

In this paper, we demonstrated an interesting tradeoff between sensor measurements and the computational complexity of detection in large-scale detection applications, through both realistic simulations and real sensor data. Using the sequential decoding algorithm, we observed a sharp transition in the complexity of detection. When only a few sensor measurements are available, the sequential decoding algorithm is slow and inaccurate. However, above a certain number of sensor measurements, it becomes both fast and accurate. This behavior was observed for a hard large-scale detection problem, where IR temperature sensors simultaneously sensed many parts of the environment. For such distributed sensing problems, given sufficient sensor measurements, loopy belief propagation was both significantly slower and less accurate. In addition, our simulations demonstrate that the problem of sensor selection presents interesting tradeoffs that may be counter-intuitive. For example, sensors with an intermediate field of view enabled faster and more accurate estimation than sensors with either larger or smaller fields of view. We also showed that with a sufficient number of measurements even simple algorithms like bit-flipping can perform as well as more computationally expensive algorithms such as belief propagation.

The paper suggests a number of directions for future work. While the approximation we introduced in this paper gives good results, we intend to study more principled approximations using Monte Carlo techniques. In another direction, we observe that the sequential decoding algorithms used in this paper are related to branch-and-bound methods [23]; we would like to explore this connection. Finally, we intend to apply this algorithm to different sensing modalities and other large-scale detection problems.

ACKNOWLEDGMENT

The authors would like to thank Luis Navarro-Serment and Ehud Halberstam for their invaluable help in setting up our experiments and obtaining the sensor model. We would also like to thank Prof. Raj Reddy for helpful discussions.

REFERENCES

[1] D. Li, K. Wong, Y. Hu, and A. Sayeed, "Detection, classification and tracking of targets in distributed sensor networks," IEEE Signal Processing Magazine, pp. 17–29, March 2002.
[2] H. Moravec and A. Elfes, "High resolution maps from wide angle sonar," in IEEE International Conference on Robotics & Automation, 1985.
[3] D. Linzmeier, M. Mekhaiel, J. Dickmann, and K. Dietmayer, "Pedestrian detection with thermopiles using an occupancy grid," in IEEE Int. Transportation Sys. Conf., 2004.
[4] A. Elfes, "Occupancy grids: a probabilistic framework for mobile robot perception and navigation," Ph.D. dissertation, Electrical and Computer Eng. Dept., Carnegie Mellon University, 1989.
[5] M. Paskin and S. Thrun, "Robotic mapping with polygonal random fields," in 21st Conf. on Uncertainty in Artificial Intelligence, 2005.
[6] S. Thrun, "Learning occupancy grids with forward sensor models," Autonomous Robots, vol. 15, pp. 111–127, 2003.
[7] Y. Rachlin, R. Negi, and P. Khosla, "On the interdependence of sensing and estimation complexity in sensor networks," in Proc. Fifth Int. Conf. on Information Processing in Sensor Networks, April 19–21, 2006.
[8] ——, "Sensing capacity for target detection," in Proc. IEEE Inform. Theory Wksp., Oct. 24–29, 2004.
[9] ——, "Sensing capacity for discrete sensor network applications," in Proc. Fourth Int. Symp. on Information Processing in Sensor Networks, April 25–27, 2005.
[10] ——, "Sensing capacity for Markov random fields," in Proc. Int. Symp. on Information Theory, 2005.
[11] R. Johannesson and K. Zigangirov, Fundamentals of Convolutional Coding. Wiley-IEEE Press, 1999.
[12] J. M. Wozencraft, "Sequential decoding for reliable communications," IRE Convention Record, vol. 5, pp. 11–25, 1957.
[13] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[14] R. Gallager, Low-Density Parity-Check Codes. M.I.T. Press, 1963.
[15] M. Paskin, C. Guestrin, and J. McFadden, "A robust architecture for inference in sensor networks," in Fourth International Symposium on Information Processing in Sensor Networks, 2005.
[16] J. Moura, J. Liu, and M. Kleiner, "Intelligent sensor fusion: A graphical model approach," in IEEE Int. Conf. on Sig. Proc., Apr. 2003.
[17] C. Crick and A. Pfeffer, "Loopy belief propagation as a basis for communication in sensor networks," in Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, 2003.
[18] M. I. Jordan, "Graphical models," Statistical Science, Special Issue on Bayesian Statistics, 2004.
[19] R. Biswas, S. Thrun, and L. Guibas, "A probabilistic approach to inference with limited information in sensor networks," in Third Int. Symp. Info. Proc. in Sensor Networks, Apr. 2004.
[20] A. Ihler, J. Fisher III, R. Moses, and A. Willsky, "Nonparametric belief propagation for self-calibration in sensor networks," in Fourth Int. Symposium on Information Processing in Sensor Networks, 2004.
[21] R. W. Boyd, Radiometry and the Detection of Optical Radiation. John Wiley & Sons, Inc., 1983.
[22] K. Murphy, "The Bayes net toolbox for MATLAB," Computing Science and Statistics, vol. 33, 2001.
[23] E. L. Lawler and D. E. Wood, "Branch-and-bound methods: A survey," Operations Research, vol. 14, no. 4, pp. 699–719, 1966.
