12th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 2009
Trust Management for Distributed Decision Fusion in Sensor Networks

Ladan Gharai, BAE Systems, Wayne, NJ 07470, [email protected]
Kyle Guan, BAE Systems, Wayne, NJ 07470, [email protected]
Reza Ghanadan, BAE Systems, Wayne, NJ 07470, [email protected]
Sintayehu Dehnie, BAE Systems, Wayne, NJ 07470, [email protected]
Srikanta Kumar, BAE Systems, Reston, VA 20190, [email protected]

Abstract – In this paper we investigate how trust, in its various notions and forms, impacts the gathering and fusing of raw data and its subsequent conversion to knowledge in sensor networks. In particular, we study how the presence of compromised or faulty sensors affects the performance of decision fusion, using quality of information (QoI) metrics. We first provide analytical frameworks to model the malicious behavior of compromised sensors. We then introduce reputation mechanisms to address the threat of compromised sensors and to manage trust. Via analysis and simulation, we evaluate how the reputation mechanisms can improve the performance of the fusion process. The results provide valuable insight into the dynamics of the interplay among information networks, communication networks, and decision-making support in social-cognitive networks.

Keywords: trust models, quality of information, information fusion, hypothesis testing, Bayesian games, reputation systems, sensor networks
1 Introduction
Trust, with its various components and derivatives, plays a critical role in the collection, synthesis, and interpretation of data along the process of transforming data into knowledge for decision making [1]. The components of trust are derived from different contexts, which often reflect the intricacies of the interactions among social-cognitive, information, and communication aspects. As such, there is much complexity in fully understanding these contexts, as well as the interdependencies among the social, information, and communication networks. Notwithstanding, developing an end-to-end notion of composite trust and evaluating its impact are fundamental, as the ability to achieve trust can greatly enhance the effectiveness of the decision-making process [1].
978-0-9824438-0-4 ©2009 ISIF
We investigate the role that trust plays in information networks within the Tasking, Collection, Processing, Exploitation, and Dissemination (TCPED) chain. In this context, we interpret the composite notion of trust to mean that a trust component is derived at, and thus tied to, a particular stage of the chain. Once a result of the TCPED chain is presented to a human or an automated decision maker, decisions are made based on the level of trust attached to the received information. Due to the sweeping complexity of evaluating composite trust, we turn our attention first to the data collection and decision fusion aspects of the TCPED chain.

The first stage in the TCPED chain is tasking. At this stage, information sources are mapped according to information needs and tasked with data collection. Information sources include sensors, people, and database queries. It is important to ascertain the authenticity of the data sources. Crucial questions to ask include: are the sensors or nodes that generate data trustworthy? What has been their track record so far? Has there been a major deviation in their behavior in the past? Any of these indicators, if measured correctly, enables us to determine the degree to which the data can be trusted. With the completion of tasking, data is collected, transported, and fused. At this phase, malicious entities can impact the validity of the data by replacing data entirely or inserting faulty information. In addition, the complexity of the data collection process is affected by the underlying communication infrastructure of the sensor networks. Normally, data is fused through a hierarchy of fusion subsystems. As such, the trust of information also includes the trust of these fusion subsystems, as well as of the communication links that connect them. In summary, the trust of information can be derived from the trust in the tasking, the data sources, the fusion hierarchy, the communication links, and so on. Therefore, trust management for information fusion must take these components and contexts into account.

To theoretically understand how the elements of trust manifest themselves in distributed decision making, and how raw data evolves into knowledge, we incorporate the notion of trust into an existing analytical framework of data and decision fusion and develop formalisms that describe, analyze, and estimate the role of trust in sensor networks. In particular, we investigate reliable decision fusion in the presence of legitimate, faulty, and malicious sensors, as well as fading and noisy communication channels. We examine how fusion rules can take the "trustworthiness" of a sensor's report into consideration to perform robust and accurate decision making. To this end, we design two tightly coupled entities: a reputation system and a fusion algorithm. By assigning and updating a weight, the reputation system quantifies the legitimate (or faulty) behavior of a sensor with respect to the desired operation. Based on these reputations, the reports from sensors deemed "untrustworthy" are not used in the fusion process. Via analysis and simulation, we evaluate how the reputation mechanisms can improve the quality of information (QoI) [2]-[4] of the fusion process. We also develop a Bayesian game framework to evaluate the effects of fusion rules as well as adversary strategies. The results show interesting insights; for example, if the fusion process is designed to tolerate some level of malicious behavior, significant performance improvement can be attained.

The rest of this paper is organized as follows. In Section 2, we survey current research on information fusion, trust management, and QoI. In Section 3, we first set up a system model that takes into account both local detections and communication channels. We next provide frameworks to model the malicious behavior of compromised sensors.
We also provide reputation mechanisms to address the threat of compromised sensors and evaluate how the mechanisms can improve the QoI of the fusion process. In Section 4, we summarize our main contributions and discuss topics for future research.
2 Related Work
As our work extends current research in information/data fusion, trust management, reputation systems, and QoI, we briefly survey related results in these research areas. Information/data fusion exploits the synergy among raw data and converts it into knowledge to facilitate decision making. The current state of the art for sensor applications, including methods, algorithms, models, and architectures, is reviewed in [5][6]. In this paper, we use the Chair-Varshney fusion model/rule and its variants [7]-[9] due to their broad applicability and analytical tractability. The Chair-Varshney fusion
rule, in its essence, is a weighted-sum approach in which each sensor's local decision in a binary hypothesis test is weighted according to its probabilities of miss and false alarm. Several works, such as [10], also consider the decision fusion of multiple hypotheses. In [7]-[10], all sensors are assumed to be "well-behaved"; the effect of compromised sensors on the fusion process is not considered. The definitions of trust vary across disciplines, e.g., computer science vs. the social sciences. From the perspective of social science, the concept of trust centers on the expectation of benevolent behavior from others [11][12]. The expectation comes from interactions with trustees (the ones to be trusted) over time. Such interactions allow the assessment of the consistency or discrepancy between expected and observed behaviors [13]. Many recent works, such as [14]-[17], draw inspiration from the social aspects of trust and apply trust-related concepts and methodologies to enhance the system integrity of peer-to-peer (P2P), mobile ad-hoc, sensor, and pervasive computing networks. [14] focuses on the evaluation of trust evidence in ad-hoc networks. The evaluation process is modeled as a path problem on a directed graph, with nodes representing entities and edges representing trust relations. The authors show that two nodes can establish an indirect trust relation without previous direct interaction. [15] analyzes the impact of distributed trust management (a local voting rule) on the structure and behavior of autonomous networks. Both [16] and [17] adopt a reputation-system approach to mitigate the performance degradation caused by malicious nodes. In [17], the authors develop a reputation assignment mechanism for a multi-object tracking application. In this paper, we study decision fusion in the presence of compromised, malicious, and unreliable sensors. We follow approaches similar to those adopted in [14]-[17].
Compared to these works, our reputation assignment mechanism is designed for the decision fusion application. In addition, we develop a Bayesian game framework to evaluate the effects of fusion rules as well as adversary strategies. In the context of detection and fusion of local decisions, we use the probabilities of detection and false alarm as quantifiable QoI attributes, as in [2]-[4].
3 Incorporating Trust in Decision Fusion

3.1 System Model
The system model in this work consists of a detection subsystem, communication channels, and a fusion subsystem, as shown in Fig. 1. The detection subsystem has M sensors. Each sensor performs binary hypothesis testing and sends its local decision to the fusion subsystem over channels subjected to possible fading and
noise. The outcome of the fusion process supports a decision maker and determines whether the event has occurred or not. For clarity, in Sections 3.1-3.3 we assume that the local decisions can be transmitted to the fusion subsystem without errors. We address the effect of channel fading and noise in Section 3.4.

Figure 1: The system model used in this work consists of a detection subsystem (sensors S_1, ..., S_M with local decisions u_i), communication channels (y_i = h_i u_i + n_{c_i}), and a fusion subsystem.

3.1.1 Single Sensor Detection – Hypothesis Testing

Each sensor performs a binary hypothesis test with the following two hypotheses:

• H0: event does not occur;
• H1: event occurs.

The a priori probabilities of the two hypotheses are denoted by Pr(H0) = P_0 and Pr(H1) = P_1. We denote the observation at sensor i by r_i. We assume that the observations at the individual sensors are statistically independent; the conditional probability density function is denoted by f(r_i | H_j), with i = 1, ..., M and j = 0, 1. At each sensor, a rule is employed to make a decision u_i, i = 1, ..., M, where

u_i = { −1, if H0 is declared; 1, if H1 is declared.    (1)

For our work, we assume a zero-mean, additive, white Gaussian (AWG) noise process. The hypothesis test takes the following form:

H0: r_i = n_i;    (2)
H1: r_i = s_i + n_i,    (3)

where s_i represents the signal vector at sensor i under hypothesis H1, and n_i represents an additive noise component at sensor i. A decision is made at sensor i based on the likelihood ratio test (LRT) [2]:

f_{r_i|H1}(r_i) / f_{r_i|H0}(r_i) ≷ η.    (4)

The threshold η is calculated from the a priori probabilities of the two hypotheses, P_0 and P_1. When the noise is described by a zero-mean, additive, stationary Gaussian process with covariance matrix C_i = E{n_i^T n_i}, the test reduces to the following:

l = r_i^T C_i^{−1} s_i ≷ ln(η) + (1/2) s_i^T C_i^{−1} s_i.    (5)

The parameter l is referred to as the sufficient statistic and represents the summary operation performed by the fusion subsystem on the measurements r_i. Finally, we refer to

ψ² = s_i^T C_i^{−1} s_i    (6)

as the signal-to-noise ratio (SNR) for sensor i. As mentioned in Section 2, we use the probability of detection P_d and the probability of false alarm P_f as quantifiable QoI attributes. For the special case in which the AWG noise process has zero mean and variance σ², they can be derived, respectively, as:

P_d = Pr(l ≥ η | H1) = 1 − Φ(ln(η)/ψ − ψ/2);    (7)
P_f = Pr(l ≥ η | H0) = 1 − Φ(ln(η)/ψ + ψ/2),    (8)

where Φ(·) is the cumulative distribution function of a N(0, 1) random variable, and the square of the parameter ψ, reflective of the SNR, takes the form:

ψ² = (1/σ²) s_i^T s_i.    (9)

3.1.2 Optimal Data Fusion

We consider the case where all sensors feed their local decisions to a single fusion subsystem, as shown in Fig. 1. Here we assume perfect channel conditions such that all local decisions are transmitted without errors. Data fusion rules, denoted by f(u_1, ..., u_M), often take the form of "k out of n" or "majority vote" logical functions. The Chair-Varshney rule, a weighted-sum fusion rule, is proved to be Bayesian optimal [7]. That is:

f(u_1, ..., u_M) = { 1, if a_0 + Σ_{i=1}^{M} a_i u_i > 0; −1, otherwise.    (10)

The optimum weights are given by:

a_0 = log(P_1 / P_0);    (11)

a_i = { log((1 − P_{m_i}) / P_{f_i}), if u_i = 1; log((1 − P_{f_i}) / P_{m_i}), if u_i = −1.    (12)
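The single-sensor QoI expressions (7)-(8) and the Chair-Varshney rule (10)-(12) can be sketched in a few lines of Python. This is an illustrative implementation, not code from the paper; the function names and the equal-prior defaults are our own choices.

```python
import math

def sensor_pd_pf(psi, eta):
    """Single-sensor QoI from eqs. (7)-(8): (Pd, Pf) for threshold eta and SNR psi."""
    def Phi(x):  # CDF of a standard normal N(0, 1)
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    pd = 1.0 - Phi(math.log(eta) / psi - psi / 2.0)
    pf = 1.0 - Phi(math.log(eta) / psi + psi / 2.0)
    return pd, pf

def chair_varshney_fuse(u, pd, pf, p1=0.5, p0=0.5):
    """Weighted-sum fusion rule of eqs. (10)-(12).
    u[i] in {-1, +1}; pd[i] and pf[i] are sensor i's detection and
    false-alarm probabilities."""
    s = math.log(p1 / p0)  # a_0, eq. (11)
    for ui, pdi, pfi in zip(u, pd, pf):
        pmi = 1.0 - pdi    # probability of miss
        if ui == 1:
            s += math.log((1.0 - pmi) / pfi)   # a_i for u_i = +1, eq. (12)
        else:
            s -= math.log((1.0 - pfi) / pmi)   # a_i * u_i for u_i = -1
    return 1 if s > 0 else -1
```

For identical sensors with P_f = 1 − P_d, the weights coincide and the rule reduces to a majority vote, consistent with the reduction to (15).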
The fusion rule weights every local decision based on the sensor's QoI: the weights are functions of P_{m_i} (P_{m_i} = 1 − P_{d_i}) and P_{f_i}. The fusion subsystem then forms a weighted sum, which is compared to a threshold to decide whether an event occurs or not. Extending the results provided in [7], we analyze the QoI of the fusion subsystem as a function of parameters such as P_{d_i} and M. We first evaluate the probability of detection of the fusion system, denoted by P_d^{Fus}:

P_d^{Fus} = Pr(f(u_1, ..., u_M) = 1 | H1) = Pr(a_0 + Σ_{i=1}^{M} a_i u_i > 0 | H1),    (13)

where u_i, under hypothesis H1, is a discrete random variable with the following probability mass function (PMF):

P(u_i | H1) = { 1, with probability P_{d_i}; −1, with probability 1 − P_{d_i}.    (14)

The expressions in (13) and (14) do not, in general, lend themselves to an analytical form. However, with the following simplifications:

• P_0 = P_1,
• P_{d_i} = P_d,

we can express P_d^{Fus} as:

P_d^{Fus} = 1 − I_{1−P_d}(M − ⌊M/2⌋, 1 + ⌊M/2⌋),    (15)

where I_x(a, b) denotes the regularized incomplete beta function. In other words, if we assume that 1) both events are equally probable and 2) sensors are homogeneous in their QoI and sensing environment, the fusion rule described in (12) reduces to a majority vote. In Fig. 2, we plot P_d^{Fus} as a function of the number of sensors M, for different P_d. From the figure, we observe that increasing the number of sensors enhances the QoI of the fusion subsystem, though the gain exhibits diminishing returns. The figure also shows that achieving a given P_d^{Fus} requires significantly more sensors of low QoI (e.g., P_d = 0.7) than of high QoI (e.g., P_d = 0.9).

Figure 2: The probability of detection of the fusion subsystem as a function of the number of sensors.

3.1.3 Model of Compromised Sensors

A compromised sensor exhibits malicious behavior. Such a sensor can either intentionally increase the error of the measurement or falsely report the local decision to the fusion center. In this work, we first take a deterministic approach to modeling a sensor's malicious behavior: a misbehaving sensor falsely reports its local decision with probability 1 (in Section 3.3, we consider the cases where a compromised sensor exhibits malicious behavior with probability less than 1):

u_i^f = { −1, if u_i = 1; 1, if u_i = −1,    (16)

where u_i^f denotes the false report by sensor i. To quantify how the presence of compromised sensors affects the QoI of the fusion process, we first evaluate the probability of detection. Without loss of generality, we assume that sensors 1 to M_{Fau} are compromised. In this case, we have:

P_d^{Fus} = Pr(f(u_1^f, ..., u_{M_{Fau}}^f, u_{M_{Fau}+1}, ..., u_M) = 1 | H1) = Pr(a_0 + Σ_{i=1}^{M_{Fau}} a_i u_i^f + Σ_{i=M_{Fau}+1}^{M} a_i u_i > 0 | H1).    (17)

In the above equation, u_i^f, under hypothesis H1, is a discrete random variable with the following PMF:

P(u_i^f | H1) = { 1, with probability 1 − P_{d_i}; −1, with probability P_{d_i}.    (18)

Even with simplifications such as equally probable events and homogeneous sensors, an analytical form of (17) is hard to obtain. Therefore, we numerically calculate P_d^{Fus} and plot it as a function of the number of compromised sensors in Fig. 3, with M = 15 and P_d = 0.9. As we observe from the figure, the QoI of the fusion subsystem degrades significantly as the percentage of compromised sensors increases. As also demonstrated in the figure, if the malicious sensors can be identified and their reports thus discarded in the fusion process, the QoI of the fusion subsystem can be greatly improved. In the next section, we discuss the design of reputation systems and fusion algorithms for identifying the compromised sensors.
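Since (17) has no convenient closed form, the paper evaluates P_d^{Fus} numerically. A Monte Carlo sketch of such a calculation, under the deterministic attack of (16) and with identical sensors and equal priors (our simplifying assumptions), might look as follows:

```python
import math
import random

def fused_pd_montecarlo(M, m_fau, pd, pf, trials=20000, seed=1):
    """Monte Carlo estimate of Pd^Fus in eq. (17): the first m_fau sensors
    flip their local decisions, per the deterministic attack of eq. (16)."""
    rng = random.Random(seed)
    a0 = 0.0                               # equal priors: log(P1/P0) = 0
    a_pos = math.log(pd / pf)              # weight when u_i = +1, eq. (12)
    a_neg = math.log((1 - pf) / (1 - pd))  # weight magnitude when u_i = -1
    hits = 0
    for _ in range(trials):
        s = a0
        for i in range(M):
            u = 1 if rng.random() < pd else -1  # local decision under H1
            if i < m_fau:
                u = -u                           # compromised sensor flips it
            s += a_pos if u == 1 else -a_neg
        hits += (s > 0)
    return hits / trials
```

With M = 15 and P_d = 0.9, the estimate drops sharply as the number of compromised sensors grows, matching the trend shown in Fig. 3.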
Figure 3: The probability of detection for the fusion subsystem decreases as the number of compromised nodes increases. The following parameters are used: M = 15 and P_d = 0.7. If the malicious sensors can be identified and their reports thus discarded in the fusion process, the QoI of the fusion subsystem can be greatly improved.

3.2 Incorporating Trust in Fusion Subsystem

As our goal is to reliably determine the occurrence of an event, given measurements in the presence of legitimate, faulty, and malicious sensors, we focus on designing fusion rules that take the "trustworthiness" of a sensor into consideration. As such, we focus on two tightly coupled components: (1) the reputation system and the trust (weight) that derives from it; (2) the fusion algorithms. The design of the reputation system concentrates on accurately quantifying the legitimate (or faulty) behavior of sensors with respect to the desired operation, while the design of the fusion algorithm involves maintaining reliable, accurate, and efficient event detection in the presence of adversaries by using the reputation system.

For the detection and fusion model used in this paper, we iteratively assign a weight W_{i,j} to the decision reported by sensor i at time j. The weight reflects the trustworthiness as well as the reliability of the report from sensor i. The fusion of local decisions can then be performed on these weighted data readings, thereby reducing the impact of untrustworthy sensors. Algorithm 1 summarizes the weight update and the fusion rule. The weight W_{i,0} is initialized to 1 at time 0. At time j (j ≥ 1), the fusion subsystem compares its "fused" decision with the local ones reported by each sensor and updates the weight of each sensor accordingly. In particular, if the (local) report from sensor i conflicts with the decision of the fusion subsystem, the weight of the sensor is decreased by a factor of θ (0 ≤ θ < 1). If the (local) report from sensor i tallies with the decision of the fusion subsystem, the weight of the sensor is unchanged (if W_{i,j−1} = 1) or increased by a factor of 1/θ (if W_{i,j−1} < 1). The updated weights are incorporated into the fusion process at time j + 1 as follows:

f_{j+1}(u_1, ..., u_M) = { 1, if a_0 + Σ_{i=1}^{M} W_{i,j} a_i u_i > 0; −1, otherwise.    (19)

Algorithm 1 Reputation (Weight) Update
1: Assign T timesteps
2: Assign each sensor an equal weight, W_{i,0} = 1, with i = 1, ..., M
3: for j = 1 : T do
4:   Use the fusion rule: f_j(u_1, ..., u_M) = { 1, if a_0 + Σ_{i=1}^{M} W_{i,j−1} a_i u_i > 0; −1, otherwise.
5:   for i = 1 : M do
6:     if f_j(u_1, ..., u_M) = u_i then
7:       if W_{i,j−1} = 1 then
8:         W_{i,j} = W_{i,j−1}
9:       else
10:        W_{i,j} = W_{i,j−1} / θ
11:      end if
12:    else
13:      W_{i,j} = θ W_{i,j−1}
14:    end if
15:  end for
16: end for
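Algorithm 1 admits a direct Python transcription. This is a sketch: the cap at 1 when dividing by θ is our guard against floating-point drift, matching the intent that weights never exceed their initial value.

```python
def update_reputations(reports, a, theta=0.9, a0=0.0):
    """Run Algorithm 1 over T time steps.
    reports[j][i]: decision (+1/-1) of sensor i at time step j.
    a[i]: Chair-Varshney weight magnitude for sensor i, eq. (12).
    Returns the reputation weights W_i after the last step."""
    M = len(reports[0])
    W = [1.0] * M                       # step 2: W_{i,0} = 1
    for u in reports:                   # step 3: for j = 1..T
        # Step 4: fuse with the previous weights, as in eq. (19).
        s = a0 + sum(W[i] * a[i] * u[i] for i in range(M))
        f = 1 if s > 0 else -1
        for i in range(M):              # steps 5-15: weight update
            if u[i] == f:
                # Agreement: keep W = 1, or recover by a factor 1/theta.
                W[i] = min(1.0, W[i] / theta)
            else:
                # Conflict: penalize by a factor theta.
                W[i] = theta * W[i]
    return W
```

A sensor that persistently contradicts the fused decision sees its weight decay geometrically toward 0, which is the convergence behavior discussed for Fig. 4.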
To evaluate the performance, we simulate how W_{i,j} evolves with time step j for both non-compromised and compromised sensors. Fig. 4 shows the results of a scenario that consists of 15 sensors (M = 15); among them, 3 sensors are compromised (M_{Fau} = 3). For simplicity, we assume that s_i = 1 and that each sensor experiences the same variance of measurement errors: σ = 0.5. Fig. 4 demonstrates that the weights of compromised and legitimate sensors evolve quite differently. The weight of a compromised sensor decreases as time progresses and eventually converges to 0. On the other hand, the weight of a legitimate node fluctuates around 1 with some deviation. In practice, we can assign a weight threshold W_{Thr}. Once a sensor's weight falls below this threshold, the sensor is declared compromised, and its reports are no longer included in the fusion process. Obviously, the values of both θ and W_{Thr} affect the time required, as well as the accuracy achieved, in identifying compromised nodes. A small θ and a large W_{Thr} shorten the time required to detect compromised nodes, albeit at a possible cost in detection accuracy. At this stage, we should also caution that the validity of the proposed approach rests on the assumption that only a small fraction of the sensors are compromised. As the percentage of malicious nodes increases, the false local reports will outweigh the legitimate ones. As a result, the performance of the reputation mechanism and the fusion algorithm degrades. Fig. 5 shows such an example, in which 8 out of 15 nodes are compromised (M_{Fau} = 8). Compared to Fig. 4, the evolution trends of the weights of non-compromised and compromised sensors are reversed: the weight of a non-compromised sensor decreases and eventually converges to 0, while the weight of a compromised sensor fluctuates around 1 with some deviation.

Figure 4: The evolution of weights W_{i,j} for legitimate and compromised sensors. The following parameters are used: M = 15, M_{Fau} = 3, θ = 0.9, s_i = 1, and σ_i = 1.

Figure 5: The evolution of weights W_{i,j} for legitimate and compromised sensors. The following parameters are used: M = 15, M_{Fau} = 8, θ = 0.9, s_i = 1, and σ_i = 1.

3.3 Bayesian Game Framework

To broaden the scope of this work, we formulate the communications between each sensor and the fusion subsystem as a dynamic Bayesian game. Our goal remains to develop a reputation-based fusion system that can mitigate the effects of adversarial sensors by confining communication to trustworthy sensors. We assume that the adversary and the fusion center are rational players intent on maximizing their individual payoffs. We also assume that the malicious behavior of a sensor is private information. We consider stage games that occur in time periods t_k, k = 0, 1, .... Within each stage game t_k, the fusion subsystem and sensor i interact repeatedly for a duration of T seconds. The assumption of multiple interactions within a stage game allows the fusion subsystem to make inferences about the behavior of sensor i. We denote by hist(t_k) the history of the game available at the fusion subsystem at the beginning of stage game t_k. The adversary aims at degrading reliable event detection while evading detection or, at least, minimizing the probability of being detected. Thus, the payoff to the adversary is disrupting communication between the sensors and the fusion subsystem. We assume that the adversary adopts a probabilistic attack model to minimize the probability of being detected (i.e., to maximize the detection delay). In other words, a compromised sensor exhibits malicious behavior with probability less than 1. The attack model may be represented as,
u_i^f = { −u_i, with probability P_mal; u_i, otherwise,    (20)

where u_i^f is the maliciously modified sensor decision, and P_mal is the probability of malicious behavior (the adversary transmits the correct decision with probability 1 − P_mal). It is important to note that by adopting a probabilistic attack model, an adversary attempts to mimic the inherent uncertainty of the measurement environment and communication channel. The fusion center, on the other hand, aims at providing reliable and efficient event detection in the presence of a malicious adversary. That is, the goal of the fusion center is to maximize the probability of event detection P_d^{Fus}. The event detection probability at the fusion center is determined by the detection capability of the sensor and also by the behavior of the sensor. The detection problem at the fusion center can be formulated as a max-min problem:

max_{P_d^{Fus}} min_{P_mal} (P_{d_i}, P_mal)    (21)
subject to 0 < P_{d_i} < 1; 0 < P_mal < 1,

where P_{d_i} is the detection probability at sensor i, and P_mal captures the behavior of the sensor. Based on game-theoretic arguments, the adversary plays a mixed strategy in which it exhibits malicious behavior in a probabilistic manner. Since the behavior of the adversarial sensor is private information, it is difficult to obtain a solution to the optimization problem in (21). Thus, a rational and intelligent fusion subsystem conditions communications with each sensor on an established level of trust.
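The probabilistic attack of (20) is straightforward to simulate; a minimal sketch (function names are ours):

```python
import random

def attacked_report(u, p_mal, rng):
    """Eq. (20): flip the local decision u with probability p_mal."""
    return -u if rng.random() < p_mal else u

def empirical_flip_rate(p_mal, n=20000, seed=7):
    """Fraction of n reports that differ from the true local decision."""
    rng = random.Random(seed)
    return sum(attacked_report(1, p_mal, rng) == -1 for _ in range(n)) / n
```

For small P_mal the flips are rare, which is exactly why detection delay grows: the fusion center needs many interactions per stage game to distinguish a lightly malicious sensor from ordinary measurement and channel noise.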
3.3.1 The Reputation Game

The private information of the adversarial sensor corresponds to the notion of type in Bayesian games. We denote the type of sensor i by θ_i, where θ_i ∈ {θ_0 = malicious, θ_1 = non-malicious}. The set Θ_i = {θ_0, θ_1} defines the type space of sensor i, which is also the global type space, since a sensor belongs either to the set of compromised sensors or to the set of non-compromised sensors. Although the behavior of each sensor is private information, the fusion subsystem assigns a probability measure over the type space of each sensor. This probability defines the fusion subsystem's belief about the trustworthiness of each sensor. The belief of the fusion center at stage game t_k about the behavior of sensor i is denoted by µ_i^{Fus}. That is,

µ_i^{Fus}(t_k) = P(θ_i | hist(t_k)),    (22)

where hist(t_k) is the history of the game at the beginning of stage game t_k. At the end of stage game t_{k−1}, the fusion center updates its belief using Bayes' rule to determine the belief for the next stage:

µ_i^{Fus}(t_k) = p(hist(t_k) | θ_i) p(θ_i) / Σ_{θ̃_i ∈ Θ_i} p(hist(t_k) | θ̃_i) p(θ̃_i),    (23)

where p(θ_i) is the prior belief. Note that µ_i^{Fus}(t_k) will be used as the prior belief for the next stage game. It is intuitive to assume that after a finite number of stage games, the belief converges to a value that characterizes the trustworthiness, or reputation, of the sensor. We denote this value by µ_i. The optimal data fusion rule may be modified to incorporate the reputation of sensor i:

f(u_1, ..., u_M) = { 1, if a_0 + Σ_{i=1}^{M} µ_i a_i u_i > 0; −1, otherwise.    (24)

Hence, maximizing P_d^{Fus} amounts to selecting sensors with high reputation. In other words, the fusion center maximizes the event detection probability by limiting its communication to sensors with high reputation. In the event that the reputation of some sensors falls below a certain threshold, the fusion subsystem may trigger an alarm requesting the deployment of additional sensors.

In Fig. 6, we evaluate how the probability of malicious behavior affects the fusion rules and the probability of detection of the fusion system. The figure shows that in the case where there are no adversarial sensors (P_mal = 0), the detection capability of the fusion center may be limited by the QoI of the sensors (P_{d_i}). For a given P_{d_i}, the performance of P_d^{Fus} degrades as the level of maliciousness increases. For instance, at P_mal = 0.2 and M = 8, the performance of P_d^{Fus} degrades by about 5% with respect to the case where P_mal = 0. However, P_d^{Fus} exhibits a performance gain of about 2% with respect to the case where only non-compromised sensors contribute to the fusion process. In other words, if the fusion subsystem is designed to tolerate some level of malicious behavior, significant performance improvement can be attained. Indeed, tolerating some level of misbehavior may account for sensors that intermittently exhibit non-malicious but faulty behavior; it may also account for variations in channel conditions. It is important to note that detecting a low probability of malicious behavior may incur a longer detection delay, which increases the overhead in the fusion subsystem. For values of P_mal ≥ 0.5, the detection capability of the fusion center degrades significantly: for M = 8 and P_mal = 0.5, the incurred degradation is about 18% with respect to the case with no malicious sensors (P_mal = 0). In the presence of adversarial sensors with deterministic malicious behavior, P_d^{Fus} exhibits severe performance degradation; at P_mal = 1.0, the degradation is about 52%. Since the effects of deterministic malicious behavior can easily be detected, an adversary may refrain from such a strategy.

Figure 6: The probability of detection for the fusion subsystem as a function of the number of compromised sensors and the probability of malicious behavior. The following parameters are used: M = 15 and P_d = 0.7.
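The belief update (23) over the two-element type space can be sketched as follows. The Bernoulli conflict model inside (how often each type conflicts with the fused decision within a stage game) is our illustrative assumption; the paper leaves the likelihood p(hist(t_k) | θ_i) generic.

```python
def update_belief(prior_mal, n_conflict, n_total, p_conflict_mal, p_conflict_good):
    """Bayes-rule belief update in the spirit of eq. (23).
    Assumes (our modeling choice) that each of n_total reports in a stage
    game conflicts with the fused decision independently, with rate
    p_conflict_mal for a malicious sensor and p_conflict_good for a
    non-malicious one. Returns the posterior probability of 'malicious'."""
    def likelihood(p):
        return (p ** n_conflict) * ((1.0 - p) ** (n_total - n_conflict))
    num = likelihood(p_conflict_mal) * prior_mal
    den = num + likelihood(p_conflict_good) * (1.0 - prior_mal)
    return num / den
```

The returned posterior serves as the prior for the next stage game, so evidence accumulates across stages until the belief settles near the sensor's reputation µ_i.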
3.4 Effect of the Communication Channels
In previous sections, we assume perfect channel conditions such that local reports can be transmitted to fusion subsystem without error. In this section, we take into account channel fading as well as channel noise. As in Fig. 1, we assume that each local decision ui is
1939
1
1
0.9 0.8
0.8
Reputation Weight Wi,j
Reputation Weight Wi,j
0.9
0.7 0.6 0.5
Compromised Sensors Non−compromised Sensors FontSize
0.4 0.3
Channel SNR −10 dB
0.7 0.6
0.4
0.2
0.1
0.1
50
100
150
200
250
300
350
400
450
0 0
500
Channel SNR 0 dB
0.3
0.2
0 0
Compromised Sensors Non−compromised Sensors FontSize
0.5
50
100
150
Time Step
200
250
300
350
400
450
500
Time Step
Figure 7: The evolution of weights Wi,j for noncompromised and compromised sensors. The following parameters are used: M = 15, MFau = 3, si = 1, and σi = 1. The channel SNR is set at -10dB.
Figure 8: The evolution of weights Wi,j for noncompromised and compromised sensors. The following parameters are used: M = 15, MFau = 3, si = 1, and σi = 1. The channel SNR is set at 0dB.
transmitted through a fading channel, and the output of the channel for sensor $i$ is

\[ y_i = h_i u_i + n_{c_i}. \tag{25} \]

Here, $h_i$ denotes a real-valued fading envelope with $h_i > 0$ (phase-coherent reception), and $n_{c_i}$ denotes zero-mean Gaussian noise with variance $\sigma_c^2$. Assuming complete knowledge of the fading channel and of the QoI metrics of the local sensors, the optimal fusion rule has the following form [8]:

\[ \prod_{i=1}^{M} \frac{P_{d_i} e^{-(y_i - h_i)^2 / 2\sigma_c^2} + (1 - P_{d_i}) e^{-(y_i + h_i)^2 / 2\sigma_c^2}}{P_{f_i} e^{-(y_i - h_i)^2 / 2\sigma_c^2} + (1 - P_{f_i}) e^{-(y_i + h_i)^2 / 2\sigma_c^2}} \gtrless 1. \tag{26} \]

Note that the fusion rule jointly considers the effects of the fading channels and the QoI of the sensors to achieve optimal performance. Accordingly, the updated weights are incorporated into the fusion process at time $j+1$ as follows:

\[ \prod_{i=1}^{M} \left[ \frac{P_{d_i} e^{-(y_i - h_i)^2 / 2\sigma_c^2} + (1 - P_{d_i}) e^{-(y_i + h_i)^2 / 2\sigma_c^2}}{P_{f_i} e^{-(y_i - h_i)^2 / 2\sigma_c^2} + (1 - P_{f_i}) e^{-(y_i + h_i)^2 / 2\sigma_c^2}} \right]^{W_{i,j}} \gtrless 1, \tag{27} \]

or, equivalently, as:

\[ \sum_{i=1}^{M} W_{i,j} \log \frac{P_{d_i} e^{-(y_i - h_i)^2 / 2\sigma_c^2} + (1 - P_{d_i}) e^{-(y_i + h_i)^2 / 2\sigma_c^2}}{P_{f_i} e^{-(y_i - h_i)^2 / 2\sigma_c^2} + (1 - P_{f_i}) e^{-(y_i + h_i)^2 / 2\sigma_c^2}} \gtrless 0. \tag{28} \]

As in [8], we also consider the following two special cases:

• Case 1: $\sigma_c^2 \to 0$: the fusion rule as stated in (28) reduces to that in (19).

• Case 2: $\sigma_c^2 \to \infty$: the fusion rule as stated in (28) can be simplified to:

\[ \sum_{i=1}^{M} W_{i,j} (P_{d_i} - P_{f_i}) h_i y_i \gtrless 0. \tag{29} \]

For identical sensors, the above equation can be further reduced to a form analogous to maximal ratio combining (MRC):

\[ \sum_{i=1}^{M} W_{i,j} h_i y_i \gtrless 0. \tag{30} \]
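The channel-aware weighted fusion statistic of (28) and its high-noise limit (29) can be sketched numerically as follows. This is a minimal illustration, not code from the paper; the function names and the use of NumPy are our own, and the decision is taken as $H_1$ when the statistic is positive.

```python
import numpy as np

def fusion_statistic(y, h, w, pd, pf, sigma_c):
    """Weighted channel-aware fusion statistic, Eq. (28).

    y, h, w, pd, pf are length-M arrays: channel outputs, fading
    envelopes, reputation weights W_{i,j}, and per-sensor detection
    and false-alarm probabilities; sigma_c is the channel noise
    standard deviation. Decide H1 if the result is positive.
    """
    g_plus = np.exp(-(y - h) ** 2 / (2 * sigma_c ** 2))   # likelihood of u_i = +1
    g_minus = np.exp(-(y + h) ** 2 / (2 * sigma_c ** 2))  # likelihood of u_i = -1
    num = pd * g_plus + (1 - pd) * g_minus
    den = pf * g_plus + (1 - pf) * g_minus
    return np.sum(w * np.log(num / den))

def fusion_statistic_high_noise(y, h, w, pd, pf):
    """Large-sigma_c limit, Eq. (29): sum_i W_i (Pd_i - Pf_i) h_i y_i."""
    return np.sum(w * (pd - pf) * h * y)
```

With uniform weights and identical sensors, the high-noise statistic reduces to the MRC-like form of (30) up to a constant factor, since $(P_{d_i} - P_{f_i})$ is then the same for every sensor.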
We simulate and plot how $W_{i,j}$ evolves with time step $j$ for both non-compromised and compromised sensors under the update rule described by equation (28). In our simulation, there are 15 sensors ($M = 15$); among them, 3 sensors are compromised ($M_{Fau} = 3$). For simplicity, we assume $s_i = 1$ and that each sensor experiences the same measurement-error variance, $\sigma = 0.5$. In Fig. 7 the SNR is set at -10 dB, while in Fig. 8 it is set at 0 dB. At SNR = -10 dB, the $W_{i,j}$ of a compromised sensor converges to 0 at a slower rate than at SNR = 0 dB.
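One round of the simulated channel model (25) can be sketched as follows. Several choices here are our own assumptions, not specified in this section: the fading envelopes are drawn Rayleigh (the paper only requires $h_i > 0$), compromised sensors are modeled as flipping their local decisions, and the SNR is mapped to the noise level via $\sigma_c = 10^{-\mathrm{SNR_{dB}}/20}$ for unit-energy decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

M, m_fau = 15, 3                 # total sensors / compromised sensors (MFau)
snr_db = -10.0                   # channel SNR, as in Fig. 7
sigma_c = 10 ** (-snr_db / 20)   # assumed mapping: SNR = 1 / sigma_c^2

def channel_outputs(true_u):
    """One time step of local decisions through the fading channel, Eq. (25).

    true_u is a length-M array of +/-1 decisions. The first m_fau
    sensors are compromised and flip their decisions (an assumed
    attacker model); h_i is Rayleigh-distributed (an assumption).
    Returns the channel outputs y_i and the fading envelopes h_i.
    """
    u = true_u.copy()
    u[:m_fau] *= -1                          # malicious decision flipping
    h = rng.rayleigh(scale=1.0, size=M)      # fading envelopes, h_i > 0
    n = rng.normal(0.0, sigma_c, size=M)     # zero-mean channel noise
    return h * u + n, h
```

Feeding these outputs into the weighted statistic of (28) at each time step, and updating $W_{i,j}$ from the resulting agreement between local reports and the fused decision, reproduces the qualitative behavior of Figs. 7 and 8.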
4 Summary and Future Work
The overarching theme of our research is to investigate the role that trust plays in information networks within the TCPED chain and to understand its interaction with communication networks and human users. As a first step, we investigate reliable decision fusion in the presence of legitimate, faulty, and malicious sensors, as well as fading and noisy communication channels. In particular, we focus on designing fusion rules that take the "trustworthiness" of a sensor report into consideration to perform robust and accurate data aggregation. To that end, we examine the design of two tightly coupled entities: the reputation system and the fusion algorithms. The design of the reputation system concentrates on accurately quantifying legitimate and malicious behavior, while the design of the fusion algorithm aims to maintain reliable, accurate, and efficient event detection in the presence of adversaries by using the reputation system. Via analysis and simulation, we evaluate how these mechanisms can improve the QoI of the fusion process. Moreover, the results provide valuable insight into the dynamics of the interplay between information networks, communication networks, and decision-making support in social-cognitive networks.

Our future work involves multi-level hypothesis testing, in which a sensor must make a local decision among multiple declarations instead of performing a binary hypothesis test. Within the context of multi-level hypothesis testing, a more complex model is required to capture the behavior of malicious sensors. Trust evidence collection and reputation weight updating is another area where we will further our investigation. Currently, the update uses only first-hand information, i.e., the direct observation of a sensor by the fusion subsystem. We plan to study the effects of using second-hand information, i.e., reports obtained from other sensors about the observed sensor's actions, in the decision fusion.
References

[1] "Network Science Collaborative Technology Alliance (NSCTA) Program Announcement," U.S. Army Research Lab (ARL), February 2009.

[2] S. Zahedi, M. B. Srivastava, and C. Bisdikian, "A Computational Framework for Quality of Information Analysis for Detection-oriented Sensor Networks," MILCOM 2008, November 2008.

[3] E. Gelenbe and L. Hey, "Quality of Information: an Empirical Approach," First IEEE Workshop on QoI (QoISN 2008), Atlanta, GA, USA, October 2008.

[4] S. Zahedi and C. Bisdikian, "A Framework for QoI-inspired Analysis for Sensor Network Deployment Planning," Annual Conference of ITA, September 2007.

[5] C. Chong and S. P. Kumar, "Sensor Networks: Evolution, Opportunities, and Challenges," Proceedings of the IEEE, vol. 91, no. 8, pp. 1247-1256, August 2003.

[6] E. Nakamura, A. Loureiro, and A. Frery, "Information Fusion for Wireless Sensor Networks: Methods, Models, and Classifications," ACM Computing Surveys, vol. 39, no. 3, Article 9, pp. 1-53, August 2007.

[7] Z. Chair and P. K. Varshney, "Optimal Data Fusion in Multiple Sensor Detection Systems," IEEE Transactions on Aerospace and Electronic Systems, vol. 22, no. 1, January 1986.

[8] B. Chen, R. Jiang, T. Kasetkasem, and P. K. Varshney, "Fusion of Decisions Transmitted Over Fading Channels in Wireless Sensor Networks," IEEE Transactions on Signal Processing, vol. 54, no. 3, pp. 1018-1027, March 2006.

[9] R. Niu, B. Chen, and P. Varshney, "Decision Fusion Rules in Wireless Sensor Networks Using Fading Channel Statistics," Conference on Information Sciences and Systems (CISS), March 2003.

[10] B. Yu and K. Sycara, "Learning the Quality of Sensor Data in Distributed Decision Fusion," 9th International Conference on Information Fusion, Florence, Italy, July 2006.

[11] K. T. Dirks and D. L. Ferrin, "The Role of Trust in Organizational Settings," Organization Science, vol. 12, no. 4, pp. 450-467, July-August 2001.

[12] D. Rousseau, S. Sitkin, R. Burt, and C. Camerer, "Not So Different After All: A Cross-Discipline View of Trust," Academy of Management Review, vol. 23, no. 3, pp. 393-404, 1998.

[13] R. Kramer, "Trust and Distrust in Organizations: Emerging Perspectives, Enduring Questions," Annual Review of Psychology, vol. 50, pp. 569-598, February 1999.

[14] G. Theodorakopoulos and J. S. Baras, "On Trust Models and Trust Evaluation Metrics for Ad Hoc Networks," IEEE Journal on Selected Areas in Communications, Security in Wireless Ad-Hoc Networks, vol. 24, no. 2, pp. 318-328, February 2006.

[15] T. Jiang and J. Baras, "Trust Evaluation in Anarchy: A Case Study on Autonomous Networks," Proceedings of the 25th IEEE Conference on Computer Communications (INFOCOM 2006), Barcelona, Spain, April 25-27, 2006.

[16] S. Ganeriwal and M. B. Srivastava, "Reputation-based Framework for High Integrity Sensor Networks," ACM Transactions on Sensor Networks (TOSN), vol. 4, no. 3, pp. 66-77, May 2008.

[17] T. Roosta, M. Meingast, and S. Sastry, "Distributed Reputation System for Tracking Applications in Sensor Networks," International Workshop on Advances in Sensor Networks (IWASN) 2006, April 2006.