Dynamic Trust Applied to Ad Hoc Network Resources

Todd Hughes, Ph.D., James Denny, and P. Andy Muckelbauer, Ph.D.
Lockheed Martin Advanced Technology Laboratories
3 Executive Campus, Cherry Hill, NJ 08002 USA

[thughes, jdenny, pmuckelb]@atl.lmco.com

ABSTRACT The Dynamic Trust-based Resources (DyTR) system applies a dynamic notion of trust to ad hoc network resources. DyTR continuously assesses the trustworthiness of entities over time based on system events and controls network resources according to current levels of trust. For dynamic trust assessment, DyTR utilizes a socio-cognitive model of trust, a formal model of the essential concepts and characteristics of trust in human society, and subjective logic for reasoning about trust-relevant system events. We describe a DyTR experiment in which trust assessment is coupled with resource delegation mechanisms in a simulated dynamic network environment.

Categories and Subject Descriptors C.2.3 [Computer Communication Networks]: Network Operations – Network Management; C.4 [Performance of Systems]: Fault Tolerance; I.2.0 [Artificial Intelligence]: General – Cognitive Simulation.

General Terms Algorithms, Management, Performance, Design, Experimentation.

Keywords Trust, socio-cognitive model, subjective logic, resource allocation, simulation, ad hoc network.

1. INTRODUCTION Cooperation and sharing of resources on a computer network require some degree of trust between the entities involved. Computer network administrators establish trust as a static configuration of authentication and access control mechanisms, often a simple mapping of credentials to access rights. This approach requires a great deal of human effort to configure networks initially and additional effort to modify them in response to significant events. In enterprise networks this trust administration is typically well worth the effort.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Autonomous Agents & Multi-Agent Systems Conference '03, July 14-15, 2003, Melbourne, Australia. Copyright 2003 ACM 1-58113-000-0/00/0000…$5.00.

However, advances in ad hoc networking technology make it possible to assemble and deploy dynamic collaborative networks rapidly from disparate computing components. Dynamic network environments do not have the resources to establish proper authentication and access control mechanisms at configuration time. As the time to configure such systems decreases and the interactions between their constituents become more complex, it becomes increasingly unlikely that the proper degree of trust can be determined before deployment. In our view, the nature of dynamic network environments suggests the need for a dynamic notion of trust. More specifically, dynamic network environments would benefit from an adaptive assessment of trust that is integrated with resource allocation mechanisms. As trust in an entity degrades, so would the resources the entity is permitted to use. Such trust-based resource allocation mechanisms would limit and ultimately restrict undesirable behavior of entities. The Dynamic Trust-based Resources (DyTR) system is an initial attempt to provide such a trust solution for network delegation within dynamic network environments. DyTR is an active trust assessment capability that establishes initial trust levels for components of the network, continually assesses trust, and adaptively delegates resources in accordance with changes in perceived trust. We believe the key benefit of a system enhanced with a dynamic notion of trust such as DyTR is fault avoidance: systems with a dynamic notion of trust are more capable of adapting resources to avoid entities that are likely to impede system performance, increasing overall performance. In the following sections, we will discuss the method DyTR uses for dynamic trust assessment, namely, a socio-cognitive model of trust and subjective logic. We will also describe an experiment in which DyTR is used to delegate tasks and allocate resources in a simulated dynamic network environment.

2. DyTR BACKGROUND DyTR builds upon two bodies of work: Cristiano Castelfranchi and Rino Falcone's socio-cognitive model of trust and Audun Jøsang's subjective logic. The former provides the framework for reasoning about trust, while the latter provides the computable representation and logic for implementing the trust model.

2.1 Socio-Cognitive Trust Model A socio-cognitive trust model is a formalization of concepts and processes derived from the study of human social interaction as it relates to trust. To develop such a model, researchers in cognitive science synthesize ideas from artificial intelligence, psychology, sociology, and other fields. Those fields have examined the behavior of deceptive entities, the behavior of cooperative entities, and the resultant group interaction. The findings provide an empirical basis to define the elements of assessing trust, the process for making a trust decision, and how trust evolves over time. According to Castelfranchi and Falcone [1] [2], trust is a complex assessment by a trusting entity (truster) of a potentially trusted entity (trustee) regarding the trustee's behavior relative to the truster's objectives. The truster is the relying/permitting entity that decides whether to trust; it is a cognitive entity endowed with explicit goals and opinions. The trustee is the entity that is trusted, but it is not necessarily cognitive or autonomous; it might be an object or a tool involved in an action on behalf of a truster, a natural force or event, or a computational process.
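The asymmetry between the two roles can be sketched as data types. This is our illustrative rendering only; the model above is conceptual, and all names here are ours, not DyTR's.

```python
# Minimal sketch of the truster/trustee roles described above.
from dataclasses import dataclass, field

@dataclass
class Trustee:
    """Any entity that may be relied upon; need not be cognitive or autonomous."""
    name: str

@dataclass
class Truster:
    """A cognitive entity endowed with explicit goals and opinions about trustees."""
    goal: str
    opinions: dict = field(default_factory=dict)  # trustee name -> trust value

    def assess(self, trustee: Trustee, default: float = 0.5) -> float:
        # Return the current opinion of a trustee, or a neutral default.
        return self.opinions.get(trustee.name, default)
```

Note that only the truster carries goals and opinions; the trustee is just an identifiable entity.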

For network resource management we chose to implement Castelfranchi and Falcone's model for delegation-based trust. For this model, in which the truster decides whether to assign a trustee some task, three core opinions have been identified:

• Competence: the trustee has sufficient information and functionality for the task.

• Opportunity: the trustee is able to attempt the task in its current context.

• Disposition: the trustee will perform as agreed, not obstructing, abusing, or deceiving.

The core opinions form the basis for trust assessment, i.e., the determination of the degree of trust. The core opinions can derive from a multitude of possible sources such as reputation, observation, and affiliation. The truster's opinions about the competence, opportunity, and disposition of a trustee are formed from primitive beliefs. Which primitive beliefs are relevant with respect to a core opinion depends on contextual factors, including the truster's goal. Inputs such as observation of the trustee's behavior and recommendations from others contribute to the justification of these primitive beliefs. Such inputs are never certain; determining the trustworthiness of an entity requires that one take into account the uncertainty associated with one's beliefs.

2.2 Subjective Logic Audun Jøsang [3] [4] has established subjective logic with a metric and a set of operators for logical reasoning on uncertain propositions. Subjective logic uses a representation extended from the Dempster-Shafer theory of evidence and operators extended from both binary logic and probability calculus. The metric, called an opinion, is expressed as a tuple of values for belief, disbelief, uncertainty, and atomicity. An opinion can be interpreted as a belief with secondary uncertainty, whereby ignorance is distinguished from disbelief.

Subjective logic includes operators for conjunction and discounting. Conjunction evaluates the belief that multiple propositions will hold, and discounting integrates a belief formed by another source, adjusted by an opinion of that source, as with a recommendation.

3. DyTR EXPERIMENT DESIGN We designed an experiment to evaluate DyTR. For the experiment, we integrated the socio-cognitive model of trust and subjective logic to perform task delegation and resource allocation in a simulated dynamic network environment.

3.1 Concept of Operation We evaluated DyTR using a simulation of a coalition naval fleet (Figure 1). It was a simulation of a wireless network environment having thirty mobile entities with no defined configuration-time authentication or access control mechanisms. The simulated environment also contained other mobile entities that were not part of the network and periodically became hostile. Each of the networked entities (or nodes) could function as a sensor to track a hostile target or a router to forward communications between nodes. Two nodes also could function as weapon launchers.

Figure 1. A simulated naval fleet was used to evaluate DyTR.

When hostile entities appeared, a commander component of the simulation assigned a sensor node to track the target, a launcher node to shoot down the target, and a communication pathway between the sensor and launcher nodes. The commander's delegation for each engagement was done on the basis of its knowledge of the nodes in the network. Prior to launching the simulation, we defined a probability of success for each node for each type of task. For example, a node might be given a probability of 0.95 for tracking a target, 0.75 for hitting a target, and 0.75 for forwarding a packet.

We also defined three commanders, distinguished by how they determined and made their respective engagement assignments:

• Naïve: made engagement assignments based on the shortest path between the nearest sensor and launcher to the threat.

• Omniscient: made engagement assignments based on the predefined probabilities of success for individual nodes.

• Savvy: made engagement assignments based on trust assessments of individual nodes.

In other words, only the Omniscient commander had access to the nodes' predefined probabilities of success, which is the ideal metric to use. The Naïve and Savvy commanders had to use sub-optimal metrics for their engagement decisions. In the simulation the Naïve, Savvy, and Omniscient commanders operated in separate simulated environments initialized with the same random seed for initial node positions. This technique allowed us to compare the performance metrics for each decision process relative to each other.
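The three decision rules can be contrasted in a few lines. This is our own sketch: the candidate structure and field names are assumptions for illustration, not the simulation's actual data model.

```python
# Illustrative selection rules for the three commander types.

def naive_choice(pairings):
    """Naive: shortest path between the nearest sensor-launcher pair and the threat."""
    return min(pairings, key=lambda p: p["path_length"])

def omniscient_choice(pairings):
    """Omniscient: highest predefined probability of success (the ideal metric)."""
    return max(pairings, key=lambda p: p["success_prob"])

def savvy_choice(pairings):
    """Savvy: highest trust expectation e from DyTR's assessment."""
    return max(pairings, key=lambda p: p["trust_e"])

# Hypothetical candidate sensor-launcher pairings for one engagement.
candidates = [
    {"id": "S1-L1", "path_length": 3, "success_prob": 0.60, "trust_e": 0.55},
    {"id": "S2-L1", "path_length": 5, "success_prob": 0.90, "trust_e": 0.80},
]
```

Here the Naïve commander would pick `S1-L1` on distance alone, while the Omniscient and Savvy commanders would pick `S2-L1` on (true or estimated) reliability.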

3.2 Trust Assessment This concept of operations required some modification to the original socio-cognitive model of trust. In this context, the Savvy commander C's trust assessment is an assessment of the ability of a given sensor-launcher pair R to track and shoot a target T for the goal of killing it. We made this trust assessment, denoted DoT_CR, for R a function of the degree of confidence, denoted DoC_C, for two core opinions, namely, competence and opportunity:

DoT_CR = F[ DoC_C[Competence_R(Track_T ∧ Shoot_T)], DoC_C[Opportunity_R((Track_T ∧ Shoot_T), Kill_T)] ]

The trust assessment was recursive because the opinion about R's opportunity with respect to tracking and shooting T depended on an opinion of the communication path P between the sensor S and launcher L that compose R:

DoC_C[Opportunity_R((Track_T ∧ Shoot_T), Kill_T)] = F[ DoC_C[Competence_P(S, L)] ]

Thus, the trust assessment of the sensor-launcher pair in this scenario reduced to the degree of confidence the Savvy commander had in the competence of the sensor-launcher pair and of the communication path between the sensor and the launcher.
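The calculations behind these degrees of confidence (described in Section 3.3) can be sketched as follows. The function names are ours; the evidence-to-opinion mapping, the conjunction operator, and the expectation e = 1 - d follow the definitions given in the text.

```python
# Sketch of the subjective-logic calculations used by DyTR (Section 3.3).

def opinion_from_counts(s: int, f: int):
    """Form a (belief, disbelief, uncertainty) opinion from s successes and f failures."""
    n = s + f + 2
    return (s / n, f / n, 2 / n)

def conjunction(o1, o2):
    """Reduce two opinions to one, e.g. a sensor opinion and a launcher opinion."""
    b1, d1, u1 = o1
    b2, d2, u2 = o2
    return (b1 * b2,
            d1 + d2 - d1 * d2,
            b1 * u2 + u1 * b2 + u1 * u2)

def expectation(o):
    """Degree of trust for an engagement choice: e = 1 - d."""
    _, d, _ = o
    return 1.0 - d

# Example: a sensor with 8 successes and 0 failures, a launcher with 3 and 1.
sensor = opinion_from_counts(8, 0)    # (0.8, 0.0, 0.2)
launcher = opinion_from_counts(3, 1)  # (0.5, 1/6, 1/3)
pair = conjunction(sensor, launcher)
e = expectation(pair)
```

With no observations at all (s = f = 0), the opinion is (0, 0, 1), i.e., total uncertainty rather than disbelief, which is what lets the commander distinguish unknown nodes from distrusted ones.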

3.3 Subjective Logic Implementation of the Socio-Cognitive Model We used the subjective logic operators of discounting and conjunction to calculate the core competence opinions for our trust model. The sources for the formation of these competence opinions were observations of prior performance. When the Savvy commander delegated a sensor-launcher pair and a communication path to engage a target, it observed the result of the engagement. For each engagement, the Savvy commander recorded whether it was successful, i.e., whether the threat was killed. If the engagement failed, the failure was attributed to the responsible sensor, launcher, or path. If the path was responsible for the failure, it was attributed individually to the routers that constituted it. If the engagement succeeded, then the sensor, launcher, and (routers constituting) the path involved were attributed equal responsibility. The events that counted as success or failure for a sensor, launcher, and path are listed in Table 1.

Table 1. Success and failure events for sensors, launchers, and communication paths.

         | Success           | Failure
Sensor   | Maintained Track  | Lost Track
Launcher | Ordnance Exploded | Ordnance Misfired
Path     | Relayed Packet    | Delayed Packet

The Savvy commander then applied the discounting operator to the number of observed occurrences of success (denoted s) and failure (denoted f) to form an opinion of the sensor, launcher, and router. The opinion consisted of a belief (b), disbelief (d), and uncertainty (u) tuple according to the equation:

[b, d, u] = [ s / (s + f + 2), f / (s + f + 2), 2 / (s + f + 2) ]

Application of the discounting operator over a series of engagements eventually allowed the Savvy commander to form opinions about all the nodes in the simulation. When a new engagement was initiated, the Savvy commander had to choose the most trustworthy sensor and launcher pairing, taking into account the path between them. This involved the use of the conjunction operator, which calculated a single opinion tuple for each sensor and launcher combination. The conjunction operator took the opinions O_S and O_L for a sensor and launcher and reduced them to a single opinion based on the equation:

[b_{S∧L}, d_{S∧L}, u_{S∧L}] = [ b_S·b_L, d_S + d_L - d_S·d_L, b_S·u_L + u_S·b_L + u_S·u_L ]

The conjunction operator was also applied to form opinions about the paths between sensors and launchers. The resulting path opinion was conjoined with the sensor-launcher pairing opinion to form a single opinion for a sensor, launcher, and path combination. Finally, the resulting opinion was reduced to an expectation e that represented the Savvy commander's degree of trust in that sensor and launcher pairing given a path. The following equation computed a value for e between 0 and 1:

e = 1 - d

When a threat entered the simulation, the Savvy commander computed an e for each sensor and launcher pairing, given a path, for those sensors and launchers within range of the target. The decision on which sensor, launcher, and path to use to engage the target was made by finding the sensor and launcher pairing that, given its path, had the greatest value for e. In this way the Savvy commander made engagement decisions based on trust.

4. DyTR EXPERIMENT For the experiment, we conducted a total of 180 trials over the combination of two independent variables: 3 commander types (Naïve, Savvy, and Omniscient) and 60 scenarios (random threat and network configurations). Each trial terminated after 1000 engagements. For each engagement, we recorded the target, duration, and outcome (target killed, failure due to sensor, failure due to communications, failure due to range, or failure due to ordnance). We hypothesized that in the typical case the simulation using the Savvy commander would have a greater proportion of kills to launches than simulations using the Naïve commander.

4.1 Experiment Results The results for each trial are listed in Tables 2 and 3. Table rows and columns are organized according to the independent variables: scenario and commander. The cell values show the dependent variables: number of kills (Table 2) and duration from threat onset to target kill (Table 3). Additionally, the tables expose two metrics that compare conditions: rank and ratio. For number of kills, higher is better; for duration of kill, lower is better; in both cases the best performer in a scenario receives rank 1. The ratios quantify more precisely how the Savvy commander performed relative to the Naïve and Omniscient commanders.

Table 2. Total number of kills observed in each trial of 1000 engagements.

SCENARIO | KILLS Naïve | KILLS Savvy | KILLS Omni | RANK Naïve | RANK Savvy | RANK Omni | Savvy/Naïve | Savvy/Omni
1 | 275 | 337 | 327 | 3 | 1 | 2 | 123% | 103%
2 | 461 | 515 | 474 | 3 | 1 | 2 | 112% | 109%
3 | 200 | 441 | 342 | 3 | 1 | 2 | 221% | 129%
4 | 400 | 439 | 411 | 3 | 1 | 2 | 110% | 107%
5 | 422 | 391 | 526 | 2 | 3 | 1 | 93% | 74%
6 | 502 | 627 | 608 | 3 | 1 | 2 | 125% | 103%
7 | 397 | 534 | 613 | 3 | 2 | 1 | 135% | 87%
8 | 422 | 480 | 550 | 3 | 2 | 1 | 114% | 87%
9 | 396 | 383 | 564 | 2 | 3 | 1 | 97% | 68%
10 | 489 | 549 | 637 | 3 | 2 | 1 | 112% | 86%
11 | 460 | 485 | 510 | 3 | 2 | 1 | 105% | 95%
12 | 366 | 621 | 577 | 3 | 1 | 2 | 170% | 108%
13 | 272 | 490 | 452 | 3 | 1 | 2 | 180% | 108%
14 | 373 | 436 | 527 | 3 | 2 | 1 | 117% | 83%
15 | 188 | 211 | 313 | 3 | 2 | 1 | 112% | 67%
16 | 572 | 646 | 738 | 3 | 2 | 1 | 113% | 88%
17 | 422 | 583 | 355 | 2 | 1 | 3 | 138% | 164%
18 | 263 | 296 | 400 | 3 | 2 | 1 | 113% | 74%
19 | 215 | 262 | 519 | 3 | 2 | 1 | 122% | 50%
20 | 419 | 519 | 512 | 3 | 1 | 2 | 124% | 101%
21 | 557 | 514 | 694 | 2 | 3 | 1 | 92% | 74%
22 | 327 | 415 | 465 | 3 | 2 | 1 | 127% | 89%
23 | 524 | 534 | 605 | 3 | 2 | 1 | 102% | 88%
24 | 595 | 669 | 378 | 2 | 1 | 3 | 112% | 177%
25 | 409 | 468 | 460 | 3 | 1 | 2 | 114% | 102%
26 | 306 | 526 | 381 | 3 | 1 | 2 | 172% | 138%
27 | 516 | 478 | 642 | 2 | 3 | 1 | 93% | 74%
28 | 390 | 511 | 520 | 3 | 2 | 1 | 131% | 98%
29 | 409 | 486 | 488 | 3 | 2 | 1 | 119% | 100%
30 | 265 | 554 | 293 | 3 | 1 | 2 | 209% | 189%
31 | 475 | 603 | 582 | 3 | 1 | 2 | 127% | 104%
32 | 277 | 451 | 625 | 3 | 2 | 1 | 163% | 72%
33 | 380 | 623 | 389 | 3 | 1 | 2 | 164% | 160%
34 | 312 | 437 | 409 | 3 | 1 | 2 | 140% | 107%
35 | 326 | 410 | 392 | 3 | 1 | 2 | 126% | 105%
36 | 561 | 642 | 741 | 3 | 2 | 1 | 114% | 87%
37 | 482 | 572 | 429 | 2 | 1 | 3 | 119% | 133%
38 | 293 | 645 | 678 | 3 | 2 | 1 | 220% | 95%
39 | 446 | 479 | 564 | 3 | 2 | 1 | 107% | 85%
40 | 481 | 495 | 599 | 3 | 2 | 1 | 103% | 83%
41 | 136 | 139 | 192 | 3 | 2 | 1 | 102% | 72%
42 | 377 | 547 | 629 | 3 | 2 | 1 | 145% | 87%
43 | 161 | 333 | 345 | 3 | 2 | 1 | 207% | 97%
44 | 445 | 583 | 667 | 3 | 2 | 1 | 131% | 87%
45 | 450 | 514 | 497 | 3 | 1 | 2 | 114% | 103%
46 | 455 | 574 | 524 | 3 | 1 | 2 | 126% | 110%
47 | 409 | 591 | 629 | 3 | 2 | 1 | 144% | 94%
48 | 301 | 382 | 512 | 3 | 2 | 1 | 127% | 75%
49 | 311 | 488 | 501 | 3 | 2 | 1 | 157% | 97%
50 | 528 | 592 | 610 | 3 | 2 | 1 | 112% | 97%
51 | 213 | 273 | 317 | 3 | 2 | 1 | 128% | 86%
52 | 406 | 587 | 685 | 3 | 2 | 1 | 145% | 86%
53 | 594 | 536 | 679 | 2 | 3 | 1 | 90% | 79%
54 | 200 | 284 | 130 | 2 | 1 | 3 | 142% | 218%
55 | 568 | 515 | 646 | 2 | 3 | 1 | 91% | 80%
56 | 464 | 564 | 529 | 3 | 1 | 2 | 122% | 107%
57 | 396 | 492 | 444 | 3 | 1 | 2 | 124% | 111%
58 | 355 | 569 | 669 | 3 | 2 | 1 | 160% | 85%
59 | 325 | 443 | 497 | 3 | 2 | 1 | 136% | 89%
60 | 334 | 456 | 534 | 3 | 2 | 1 | 137% | 85%
MEDIAN | 399 | 503 | 520 | 3 | 2 | 1 | 124% | 95%
MEAN | 387.88 | 486.98 | 508.75 | 2.83 | 1.72 | 1.45 | 130.47% | 99.95%
STD. DEV. | 113.31 | 113.02 | 132.35 | 0.38 | 0.64 | 0.62 | 30.66% | 30.03%
CONFIDENCE | 28.67 | 28.60 | 33.49 | 0.10 | 0.16 | 0.16 | 7.76% | 7.60%

Table 3. Average duration from threat onset to target kill in each trial.

SCENARIO | DURATION Naïve | DURATION Savvy | DURATION Omni | RANK Naïve | RANK Savvy | RANK Omni | Savvy/Naïve | Savvy/Omni
1 | 111.662 | 113.211 | 118.557 | 1 | 2 | 3 | 101% | 95%
2 | 93.191 | 113.000 | 126.612 | 1 | 2 | 3 | 121% | 89%
3 | 111.255 | 102.138 | 113.877 | 2 | 1 | 3 | 92% | 90%
4 | 98.940 | 105.071 | 124.109 | 1 | 2 | 3 | 106% | 85%
5 | 81.313 | 99.706 | 79.506 | 2 | 3 | 1 | 123% | 125%
6 | 88.833 | 95.858 | 110.535 | 1 | 2 | 3 | 108% | 87%
7 | 94.688 | 124.227 | 100.925 | 1 | 3 | 2 | 131% | 123%
8 | 82.187 | 86.892 | 83.555 | 1 | 3 | 2 | 106% | 104%
9 | 109.785 | 112.037 | 113.459 | 1 | 2 | 3 | 102% | 99%
10 | 83.787 | 109.566 | 108.754 | 1 | 3 | 2 | 131% | 101%
11 | 80.709 | 101.045 | 116.735 | 1 | 2 | 3 | 125% | 87%
12 | 102.456 | 99.673 | 114.556 | 2 | 1 | 3 | 97% | 87%
13 | 108.529 | 112.522 | 134.343 | 1 | 2 | 3 | 104% | 84%
14 | 90.665 | 103.438 | 88.435 | 2 | 3 | 1 | 114% | 117%
15 | 95.537 | 102.687 | 124.569 | 1 | 2 | 3 | 107% | 82%
16 | 83.273 | 84.746 | 80.531 | 2 | 3 | 1 | 102% | 105%
17 | 92.502 | 99.991 | 111.375 | 1 | 2 | 3 | 108% | 90%
18 | 109.532 | 123.037 | 135.045 | 1 | 2 | 3 | 112% | 91%
19 | 109.893 | 114.752 | 104.127 | 2 | 3 | 1 | 104% | 110%
20 | 85.239 | 91.081 | 89.240 | 1 | 3 | 2 | 107% | 102%
21 | 88.808 | 106.488 | 85.716 | 2 | 3 | 1 | 120% | 124%
22 | 99.954 | 111.265 | 122.034 | 1 | 2 | 3 | 111% | 91%
23 | 87.034 | 92.197 | 88.255 | 1 | 3 | 2 | 106% | 104%
24 | 78.582 | 107.281 | 107.646 | 1 | 2 | 3 | 137% | 100%
25 | 106.922 | 105.083 | 109.670 | 2 | 1 | 3 | 98% | 96%
26 | 86.448 | 99.200 | 122.593 | 1 | 2 | 3 | 115% | 81%
27 | 78.184 | 102.322 | 99.308 | 1 | 3 | 2 | 131% | 103%
28 | 82.751 | 86.106 | 83.537 | 1 | 3 | 2 | 104% | 103%
29 | 87.154 | 101.665 | 116.090 | 1 | 2 | 3 | 117% | 88%
30 | 97.419 | 90.868 | 130.509 | 2 | 1 | 3 | 93% | 70%
31 | 83.562 | 93.761 | 114.691 | 1 | 2 | 3 | 112% | 82%
32 | 103.422 | 109.918 | 94.302 | 2 | 3 | 1 | 106% | 117%
33 | 93.897 | 88.225 | 103.051 | 2 | 1 | 3 | 94% | 86%
34 | 114.981 | 106.474 | 122.572 | 2 | 1 | 3 | 93% | 87%
35 | 90.939 | 108.900 | 110.518 | 1 | 2 | 3 | 120% | 99%
36 | 91.795 | 94.520 | 91.433 | 2 | 3 | 1 | 103% | 103%
37 | 84.805 | 92.233 | 102.732 | 1 | 2 | 3 | 109% | 90%
38 | 88.573 | 90.848 | 87.237 | 2 | 3 | 1 | 103% | 104%
39 | 78.408 | 91.950 | 79.963 | 1 | 3 | 2 | 117% | 115%
40 | 85.925 | 94.616 | 91.294 | 1 | 3 | 2 | 110% | 104%
41 | 115.882 | 124.806 | 130.083 | 1 | 2 | 3 | 108% | 96%
42 | 78.910 | 95.199 | 107.758 | 1 | 2 | 3 | 121% | 88%
43 | 116.509 | 133.511 | 137.270 | 1 | 2 | 3 | 115% | 97%
44 | 89.627 | 91.065 | 98.342 | 1 | 2 | 3 | 102% | 93%
45 | 86.236 | 86.946 | 88.630 | 1 | 2 | 3 | 101% | 98%
46 | 96.947 | 103.786 | 108.719 | 1 | 2 | 3 | 107% | 95%
47 | 87.137 | 82.672 | 84.676 | 3 | 1 | 2 | 95% | 98%
48 | 88.701 | 82.856 | 100.186 | 2 | 1 | 3 | 93% | 83%
49 | 90.254 | 107.359 | 106.711 | 1 | 3 | 2 | 119% | 101%
50 | 90.091 | 99.427 | 107.208 | 1 | 2 | 3 | 110% | 93%
51 | 97.925 | 104.824 | 124.707 | 1 | 2 | 3 | 107% | 84%
52 | 98.557 | 112.833 | 99.937 | 1 | 3 | 2 | 114% | 113%
53 | 86.933 | 123.465 | 94.817 | 1 | 3 | 2 | 142% | 130%
54 | 129.220 | 129.049 | 127.031 | 3 | 2 | 1 | 100% | 102%
55 | 91.174 | 102.697 | 106.034 | 1 | 2 | 3 | 113% | 97%
56 | 82.776 | 93.865 | 95.433 | 1 | 2 | 3 | 113% | 98%
57 | 108.929 | 113.783 | 123.446 | 1 | 2 | 3 | 104% | 92%
58 | 86.887 | 86.807 | 85.792 | 3 | 2 | 1 | 100% | 101%
59 | 88.062 | 91.201 | 91.054 | 1 | 3 | 2 | 104% | 100%
60 | 93.222 | 97.594 | 89.036 | 2 | 3 | 1 | 105% | 110%
MEDIAN | 90.459 | 101.901 | 106.959 | 1 | 2 | 3 | 107% | 97%
MEAN | 93.79 | 102.21 | 105.81 | 1.37 | 2.23 | 2.40 | 109.54% | 97.77%
STD. DEV. | 11.45 | 11.89 | 15.92 | 0.58 | 0.67 | 0.79 | 10.97% | 12.15%
CONFIDENCE | 2.90 | 3.01 | 4.03 | 0.15 | 0.17 | 0.20 | 2.78% | 3.07%

Results across scenarios are best summarized by the median value. The mean, standard deviation, and confidence are provided for reference as common indicators of distribution variability and skew.

4.1.1 Number of Kills In most scenarios, 50 of 60 (83%), the Naïve commander performed the worst, with the lowest number of kills (see Table 2); in the remainder of the scenarios, the Naïve commander performed in the middle; the Naïve commander never performed the best. Along the same measure, the Omniscient commander performed best in 37 scenarios (62%), mid-rank in 19 scenarios (32%), and worst in 4 scenarios (7%). By comparison, the Savvy commander performed best in 23 scenarios (38%), mid-rank in 31 scenarios (52%), and worst in 6 scenarios (10%). A two-factor analysis of variance indicates a significant effect (Fobs ≥ Fcrit, α = 0.05) upon number of kills both by commander (df = 2, Fobs = 49.37, Fcrit = 3.07) and by scenario (df = 59, Fobs = 6.55, Fcrit = 1.43).

Figure 2 graphically compares the means of kill counts observed for the three commander types (Naïve 387.88, Savvy 486.98, Omniscient 508.75). Table 4 reveals t-test computations for our three directional hypotheses about the relation of those means: Naïve < Savvy < Omniscient. We cannot establish statistical significance that the mean number of kills by Savvy is less than the mean number of kills by Omniscient. However, our results do allow us to conclude that Savvy as well as Omniscient has a mean greater than Naïve with statistical significance.

Figure 2. Mean number of kills by commander type, with confidence intervals.

Table 4. Comparison of mean number of kills.

Hypothesis                                 | Omni > Naïve | Savvy > Naïve | Savvy < Omni
tobs (matched pairs, df = 59)              | 9.106        | 8.918         | 1.521
tcrit (one-tailed, α = 0.05)               | 1.671        | 1.671         | 1.671
Conclusion (tobs ≤ -tcrit or tobs ≥ tcrit) | Yes          | Yes           | No

4.1.2 Duration of Kills The Naïve commander performed the best, with the shortest average duration of kills (see Table 3), in 41 of 60 scenarios (68%), mid-rank in 16 scenarios (27%), and worst in just three scenarios (5%). Along the same measure, the Omniscient commander performed best in 11 scenarios (18%), mid-rank in 14 scenarios (23%), and worst in 35 scenarios (58%). By comparison, the Savvy commander performed best in eight scenarios (13%), mid-rank in 30 scenarios (50%), and worst in 22 scenarios (37%). A two-factor analysis of variance indicates a significant effect (Fobs ≥ Fcrit, α = 0.05) upon duration of kills both by commander (df = 2, Fobs = 33.39, Fcrit = 3.07) and by scenario (df = 59, Fobs = 5.69, Fcrit = 1.43).

Figure 3 graphically depicts the means and confidence intervals of kill durations observed for the three commander types. Table 5 reveals t-test computations for our three nondirectional hypotheses about the difference of those means: Naïve ≠ Savvy ≠ Omniscient. We can establish statistical significance that the mean kill durations of Naïve, Savvy, and Omniscient are all different.

Figure 3. Mean kill durations by commander type, with confidence intervals.

Table 5. Comparison of mean kill durations.

Hypothesis                                 | Omni ≠ Naïve | Savvy ≠ Naïve | Savvy ≠ Omni
tobs (matched pairs, df = 59)              | 7.512        | 6.821         | 2.173
tcrit (two-tailed, α = 0.05)               | 2.001        | 2.001         | 2.001
Conclusion (tobs ≤ -tcrit or tobs ≥ tcrit) | Yes          | Yes           | Yes

4.2 Experiment Discussion The results of the experiment confirmed our hypothesis yet were surprising. In the typical case the simulation using the Savvy commander did have a greater proportion of kills compared to simulations using the Naïve commander. Also, in the typical case the simulation using the Savvy commander did have a lesser proportion of kills to launches and a longer duration of kill than the Omniscient commander. However, in a significant minority of cases the Savvy commander performed better than the Omniscient commander. This was an unexpected result. We believe the success of the Savvy commander was attributable to its ability to learn through successive engagements which nodes were most trustworthy for sensing, launching, and routing. For example, we observed that over time the Savvy commander routed sensor-launcher communication around routing nodes with low predefined success probabilities. However, more analysis of the data is needed to confirm this explanation.

5. FUTURE WORK While these results are encouraging, much more work remains to be done in the area of dynamic trust. In the future we hope to develop versions of DyTR that incorporate more trust sources. This would allow us to use a socio-cognitive model of trust that captures all the core opinions. For example, the coalition identification of an entity and its proximity to a threat could be evidence of disposition, and vulnerability assessment data could be evidence of opportunity. We also hope to examine versions of DyTR that take peer recommendations as a source for trust assessments.

6. ACKNOWLEDGMENTS We gratefully acknowledge the significant contributions of Julius Etzl, Michael Junod, and Richard McClain to the DyTR project. This research was funded by the DARPA Fault Tolerant Networks program, AFRL contract number F30602-02-C-0109.

7. REFERENCES

[1] Castelfranchi, C. and Falcone, R. Principles of trust for MAS: cognitive anatomy, social importance, and quantification. Proceedings of the International Conference on Multi-Agent Systems (Paris, 1998), 72-79.

[2] Castelfranchi, C. and Falcone, R. Social Trust: cognitive anatomy, social importance, quantification and dynamics. Autonomous Agents Workshop on "Deception, Fraud, and Trust in Agent Societies" (St. Paul, Minnesota, 1998), 35-49.

[3] Jøsang, A. A logic for uncertain probabilities. International Journal of Uncertainty, Fuzziness, and Knowledge-Based Systems 9, 3 (June 2001), 279-311.

[4] Jøsang, A. and Ismail, R. The beta reputation system. Proceedings of the 15th Bled Conference on Electronic Commerce (Bled, Slovenia, 2002).