A Decision Support System for Secure Information Sharing

Achille Fokoue, Mudhakar Srivatsa, Pankaj Rohatgi
IBM Watson Research Center
[email protected], [email protected], [email protected]

Peter Wrobel
CESG, UK
[email protected]

John Yesberg
DSTL, UK
[email protected]
ABSTRACT
In both the commercial and defense sectors, a compelling need is emerging for highly dynamic, yet risk-optimized, sharing of information across traditional organizational boundaries. Risk-optimal decisions to disseminate mission-critical tactical intelligence to the pertinent actors in a timely manner are critical to a mission's success. In this paper¹, we argue that traditional decision support mechanisms for information sharing (such as Multi-Level Security (MLS)), besides being rigid and situation-agnostic, do not offer explanations and diagnostics for non-shareability. This paper exploits rich security metadata and a semantic knowledgebase that captures domain-specific concepts and relationships to build a logic for risk-optimized information sharing. We show that the proposed approach is: (i) flexible: e.g., the sensitivity of tactical information decays with space, time and external events; (ii) situation-aware: e.g., it encodes need-to-know based access control policies; and, more importantly, (iii) supportive of explanations for non-shareability. These explanations, in conjunction with rich security metadata and a domain ontology, allow a sender to intelligently transform information (e.g., downgrade information, say, by deleting the participant list of a meeting) with the goal of making the transformed information shareable with the recipient. We describe an architecture for secure information sharing using a publicly available hybrid semantic reasoner and present several illustrative examples that highlight the benefits of our proposal over traditional approaches.

Categories and Subject Descriptors: D.4.6 Security and Protection − Access Control and Information Flow Controls
General Terms: Design, Languages, Security
Keywords: Flexible Information Sharing, Justification for Non-Shareability, Description Logics, Semantic Reasoning

¹ This work was done when John Yesberg was on secondment from C3I Division, Defense Science and Technology Organization, Australia.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SACMAT'09, June 3–5, 2009, Stresa, Italy. Copyright 2009 ACM 978-1-60558-537-6/09/06 ...$5.00.

1. INTRODUCTION

In both the commercial and defense sectors, a compelling need is emerging for highly dynamic, yet risk-optimized, sharing of information across traditional organizational boundaries. In the commercial sector, large corporations are slowly being transformed from monolithic, vertically integrated entities into globally disaggregated value networks, where each organization focuses on its core competencies and relies on external partners, suppliers and integrators to develop and deliver goods and services. The ability of multiple partners to come together, share sensitive business information and coordinate activities to rapidly respond to business opportunities is fast becoming a key driver for success. In the defense sector, traditional wars between armies of nation-states have been replaced by highly dynamic missions where teams of soldiers, strategists, logisticians, and support staff, drawn from a coalition of military organizations as well as local authorities (military, civilian and NGOs), fight against elusive enemies that easily blend into the civilian population [21] and perform humanitarian missions to win popular support. For example, in order for homeland security to respond to a particular perceived threat, it may be necessary to share and fuse information from multiple government and commercial entities in a manner that respects the privacy rights of individuals.

Risk-optimal decisions to disseminate mission-critical tactical intelligence to the pertinent actors in a timely manner are a critical factor in a mission's success. Conflicting objectives make information sharing a challenging problem. On one hand, the information consumer wants the most complete, up-to-date and high-quality information to make informed decisions. Decisions made using low-quality or obsolete information can result in major losses (financial and human life alike). On the other hand, to share high-quality but highly sensitive information, the sender needs assurance from the recipient that the shared information will be appropriately protected from the threat of misuse (e.g., both intentional and unintentional information disclosure). Unauthorized information disclosure may endanger ongoing tactical operations and put strategic assets (e.g., trade secrets, technical capabilities, etc.) at risk. This sets up a risk-related tradeoff between the benefit of information sharing and the threat of information misuse. Overestimating the information disclosure risk can severely constrict information flow, thereby preventing or delaying the delivery of vital information to the right people at the right time. Underestimating the disclosure risk can result in information misuse and leakage,
thereby exposing sensitive information to the wrong people.

Traditional approaches to access control for information sharing (e.g., Multi-Level Security (MLS [18]), Decentralized Label Management (DLM [19, 28]), etc.) provide a rigid and coarse-grained approach to information sharing that is not well suited for dynamic tactical settings [20], for the following reasons.

P1 − static labels: Traditional approaches use fairly static security labels (e.g., MLS labels: unclassified ≺ confidential ≺ secret ≺ topsecret) to tag information and fail to capture dynamic attributes of tactical information such as time sensitivity, precision, etc. Recent extensions to MLS labels and the Bell-LaPadula model [5] introduce the notion of a current security level, which may be used by an entity to manually 'downgrade' the sensitivity level of information. However, there is no explicit support for automated decay of the secrecy label based on dynamic object attributes and situational attributes (e.g., sensitivity decay with space, time and external events).

P2 − conservative labeling of derived information: Traditional approaches to information sharing use a highly conservative approach to deduce the metadata for derived data (e.g., the maximum over the secrecy labels of all input data, or even higher due to aggregation). Such approaches do not account for functions that explicitly downgrade information; consequently, as information is aggregated without manual intervention, the secrecy label on aggregated information objects rises. Ultimately, an overwhelming majority of derived objects end up receiving the highest security label (e.g., topsecret), making them inaccessible to most users even in the sender's organization. Hence, many legitimate requests to access information result in a security exception (i.e., deny), which in turn may trigger a slower and more onerous manual exception mechanism to explicitly downgrade the object.
P3 − rigid, situation-independent access control decisions: Traditional approaches encode rigid and inflexible access control decisions into the security labels; e.g., MLS pre-encodes the access decisions (clearance ≥ sensitivity) into the clearance level of the entity and the sensitivity level of the object. Such an approach lacks situational awareness. For example, consider the following need-to-know based access control policy. Suppose that in a military scenario, coalition partner A has information about landmine locations in a geographic zone Z. When a coalition partner B has to traverse Z, B's need-to-know for the landmine locations is very high; on the other hand, coalition partner C, operating in a distant geographical zone, has a low need-to-know. Hence, A may decide to share the landmine information with B but not with C, even if C were determined to present a lower risk of information disclosure than B.

P4 − lack of explanations for non-shareability: Traditional approaches to information sharing do not support explanations and diagnosis over shareability. The access control system returns a binary 0/1 decision on whether (or not) an object is shareable with a given recipient. However, the result neither provides the sender with any insight into what makes the object not shareable nor offers insight into what form of the object may be shareable with the recipient. For instance, an object may not be shareable because it is too sensitive; in that case, the sender could manually apply a downgrade transform to make the object less sensitive and thus shareable with the recipient.
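As a concrete illustration of P1−P3, the following sketch contrasts the conservative label join, an automatically decaying label, and a need-to-know check that refuses even a low-risk recipient. All level names, thresholds and functions here are our own assumptions for illustration, not the paper's mechanism:

```python
from datetime import datetime, timedelta

# Hypothetical MLS lattice, ordered from least to most sensitive.
LEVELS = ["unclassified", "confidential", "secret", "topsecret"]

def join(labels):
    """P2: conservative label for derived data -- the maximum over all
    input labels, which drives aggregates toward topsecret."""
    return max(labels, key=LEVELS.index)

def decayed(label, created, now, half_life=timedelta(hours=6)):
    """P1: illustrative automated decay -- drop one lattice level per
    elapsed half-life, never below unclassified."""
    steps = int((now - created) / half_life)
    return LEVELS[max(0, LEVELS.index(label) - steps)]

def shareable(disclosure_risk, recipient_zone, info_zone, threshold=0.5):
    """P3: need-to-know is high only when the recipient operates in the
    zone the information concerns; a low-risk recipient elsewhere is
    still refused (partner C in the landmine example)."""
    need_to_know = recipient_zone == info_zone
    return need_to_know and disclosure_risk <= threshold
```

For instance, `shareable(0.4, "Z", "Z")` holds for partner B traversing zone Z, while `shareable(0.1, "Y", "Z")` fails for C despite C's lower disclosure risk.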
In related work [23, 22], we explored a rich set of security metadata and a metadata calculus as a first step towards addressing issues P1 and P2. In this paper, we present an initial approach towards addressing issues P3 and P4. The key idea here is to perform late binding of decisions. This is accomplished by retaining rich information semantics as metadata and specifying logic rules that operate on this metadata and other situational information to enable more flexible and dynamic information sharing decisions. Our general approach is based on the following ideas: (i) instead of bluntly tagging an object with a static security label, we define shareability as logic statements over enriched security metadata [23] and a semantic knowledgebase that captures domain-specific concepts and relationships (such as NIMD/KANI [9] in the domain of tactical intelligence); (ii) we integrate policies with a knowledgebase that captures the current state of the world, thereby allowing us to define policies that depend not only on the sensitivity of shared information but also on the current situation (e.g., the recipient's current state may determine need-to-know); and (iii) instead of providing a Boolean (0/1) decision on shareability, our logic-based framework provides reasons (e.g., a proof tree) showing why an object is not shareable, thereby allowing the sender to appropriately downgrade the object before sharing it with the recipient. Here we would like to differentiate our goal from prior work on using automata to model information release policies [27, 26]. Our goal is to provide semantic-level information that attempts to explain shareability, rather than a sequence of steps to be followed as part of a formal information release procedure. In particular, we would like our system to aid humans in identifying suitable information downgraders to facilitate maximal (e.g., minimum information downgrade) and yet secure information sharing.
In this work, we have explored a Description Logic (DL) based approach towards achieving such a dynamic information sharing capability. Using DL, we show how to: (i) encode need-to-know aware access control policies using simple notions of discrete time periods and geographical locations, (ii) encode automated sensitivity decay with space, time and external events, and (iii) reason over shareability using an extension of DL [8]. Our investigation has also uncovered some of the key limitations of our DL-based approach for this task, in particular in handling issues such as continuous (non-discrete) notions of time and location, policies based on risk computation [6], and inconsistencies and retractions in the knowledgebase. The insights gained from this exercise provide us with several future research directions towards a more comprehensive and robust approach for dynamic and flexible information sharing. This paper is organized as follows. In Section 2 we describe a highly dynamic and situation-dependent information sharing scenario drawn from coalition military operations to motivate and validate our approach. Section 3 provides the architecture and the logic-based underpinnings of our proposed system. In Section 4, we describe how the scenario in Section 2 was encoded into our system, how some of the challenges in doing so were resolved, and a sampling of the results obtained. Finally, in Section 5 we describe some of the limitations of our current approach and future research directions that can address these limitations.
2. SCENARIO

In this section, we describe a sample scenario that illustrates the need for dynamic and flexible information sharing in tactical military settings. We first introduce some key concepts.

Coalition: An alliance among individual organizations, during which they cooperate in a joint action, each in its own self-interest.

Geo Location: Geographical locations are categorized hierarchically into grids (or sectors), sub-grids and coordinates (ordered in increasing order of precision).

Enemy: In tactical warfare, the enemy is highly distributed and decentralized. In our scenarios, we consider two simple types of enemies: insurgents and drug lords (ordered in decreasing order of lethality).

Mission: We consider four types of missions: high value target missions, disruption of enemy supply chains, interdiction of drug lords, and humanitarian missions (ordered in decreasing order of utility).

Forces: A team of soldiers working towards a common mission objective. We consider two types of forces: special forces (highly trained military units that conduct specialized operations such as reconnaissance, unconventional warfare, and counter-terrorism) and regular forces.

Fighting: An event wherein the forces and the enemy are engaged in combat. Zero or more instances of fighting can take place in the course of one mission; e.g., a high value target mission may involve one instance of fighting with insurgents and another with the local drug lords.

Information Source: We consider three types of information sources: spies, coalition partners, and public information sources such as BBC, CNN, etc. (in decreasing order of secrecy). Tactical information (e.g., insurgent locations, inventory of ammunition, mobility patterns, etc.) provided by these information sources is time sensitive.
Map: An integrated knowledgebase that contains a detailed map of the physical terrain, weather conditions, the geo locations of all forces, coalition partners, insurgents and landmines, mission details, etc. The knowledgebase continually evolves over time.

Having introduced the basic concepts, we now describe a sample scenario. The scenario includes two coalition partners A and B, and a geo location grid G with sub-grids G1, G2, G3 and G4. A is conducting its operations in grid G. Figure 1 summarizes the operations carried out by A, the actions carried out by B at various spatio-temporal coordinates, and highlights which pieces of information are shareable between A and B.
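The ordered categories introduced above (geo precision, enemy lethality, mission utility, source secrecy) lend themselves to comparable enumerations. A minimal sketch with hypothetical names of our own choosing:

```python
from enum import IntEnum

# Hypothetical encodings of the orderings stated in the scenario.
class GeoPrecision(IntEnum):
    GRID = 1          # least precise
    SUBGRID = 2
    COORDINATE = 3    # most precise

class EnemyLethality(IntEnum):
    DRUG_LORD = 1
    INSURGENT = 2     # more lethal

class MissionUtility(IntEnum):
    HUMANITARIAN = 1
    DRUG_INTERDICTION = 2
    SUPPLY_DISRUPTION = 3
    HIGH_VALUE_TARGET = 4   # highest utility

class SourceSecrecy(IntEnum):
    PUBLIC = 1
    COALITION_PARTNER = 2
    SPY = 3           # most secret
```

Policies can then compare, e.g., whether a source's secrecy level permits releasing a coordinate-precision location.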
3. ARCHITECTURE

Our architecture for fine-grained information sharing with explanations is illustrated in Figure 2. A Global Awareness Module continually maintains and updates a central knowledgebase encoding, in a Description Logics language, the relevant state of the world for our application (e.g., locations of allied and enemy forces). A hybrid reasoner (Description Logics and Rule reasoner) is then responsible for deciding, based on the current state of the world as reflected by the knowledgebase and a set of security policies and rules, which information items may be safely shared with a given entity. This decision is made transparent by the ability of the hybrid reasoner to provide an explanation as to why a given information item or document has a certain classification level and/or why it can (or cannot) be shared with a given entity.

[Figure 2: Architecture]

Our system does not assume a centralized policy repository and/or knowledge base. Each participant maintains a private policy set and a knowledgebase representing its view of the relevant state of the world. The architecture shown in Figure 2 is replicated at every decision center. The decision centers may use an information dissemination mechanism to propagate state information across different decision centers; however, such information dissemination algorithms are outside the scope of this paper. In the remainder of this section, we describe in more detail the main components of our architecture.
3.1 Description Logics Knowledge Base
Central to our approach is the ability to update, maintain, reason over and query a knowledge base representing the state of the world (e.g., locations of allied and enemy forces, information about known terrorists, etc.). In this paper, we explore the extent to which Description Logics [2] are a suitable knowledge representation paradigm for encoding the state of the world relevant to our application. Description Logics (DL) are a family of knowledge representation formalisms with several interesting properties, such as high expressive power (i.e., the ability to express complex relationships between concepts and entities in a particular domain), well-defined semantics (i.e., an unambiguous mathematical formalization of the meaning of relationships between concepts and entities in a particular domain), decidable inference problems, and practicable inference algorithms for these problems. Description Logics have recently gained a lot of attention from a broader audience, as the combination of high expressive power, well-defined semantics, and decidable inference problems has made them the theoretical underpinning of the Web Ontology Language (OWL), the
standard language for the inference layer of the Semantic Web.

[Figure 1: Shareability Decisions in a Dynamic Setting. The figure tabulates, for times T1−T6 and sub-grids G1−G4, the missions, forces, events and facts in A's knowledgebase, alongside the information A shares with B at each time: at T1, when B is outside grid G, only A's presence in G; at T2, as B enters G4, insurgents in G2 and landmines in G3 (without specific coordinates or the info source) plus the humanitarian mission in G4; at T3−T4, additionally the fighting in G2 and the interdiction of drug lords in G1 (without the nature of the forces involved); at T5, as B enters G3, the coordinates of landmines in G3; and at T6, as B enters G1, the coordinates of fighting in G1 (without revealing the high value target).]
3.1.1 SHIN Semantics
The specific Description Logics language used in this work is SHIN [10], which can be viewed as an expressive decidable subset of First Order Logic (FOL). In this section, we briefly present the semantics of SHIN, as shown in Tables 1 and 2. Informally, the basic building blocks of a DL language are concepts and roles. A concept, which corresponds to a unary predicate in FOL, represents a set of individuals in the domain of discourse who share some common characteristics. A role, which corresponds to a binary predicate in FOL, represents a binary relation between individuals in the domain of discourse. New concepts and roles can be built from existing ones using the concept and role constructors available in the considered DL language. The constructors for SHIN are shown in the first column of Table 1.

Formally, as for First Order Logic, a model-theoretic semantics is adopted. In the definition of the semantics of SHIN, I = (∆^I, ·^I) refers to an interpretation where ∆^I is a non-empty set (the domain of the interpretation) and ·^I, the interpretation function, maps every atomic concept C to a set C^I ⊆ ∆^I, every atomic role R to a binary relation R^I ⊆ ∆^I × ∆^I, and every individual a to a^I ∈ ∆^I. The interpretation function ·^I is further extended to complex concepts and complex roles as shown in Table 1. Trans(R) in the table refers to a transitive role R.

Definition   Semantics
C ⊓ D        C^I ∩ D^I
C ⊔ D        C^I ∪ D^I
¬C           ∆^I \ C^I
∃R.C         {x | ∃y. ⟨x, y⟩ ∈ R^I ∧ y ∈ C^I}
∀R.C         {x | ∀y. ⟨x, y⟩ ∈ R^I ⇒ y ∈ C^I}
≤ nR         {x | |{y | ⟨x, y⟩ ∈ R^I}| ≤ n}
≥ nR         {x | |{y | ⟨x, y⟩ ∈ R^I}| ≥ n}
R−           {⟨x, y⟩ | ⟨y, x⟩ ∈ R^I}

Table 1: Semantics of SHIN Constructs

A SHIN Knowledge Base consists of three parts: an Rbox, a Tbox and an Abox. An Rbox R is a finite set of transitivity axioms of the form Trans(R) (i.e., R is a transitive role) and role inclusion axioms of the form R ⊑ P, where R and P are roles. ⊑* denotes the reflexive transitive closure of the ⊑ relation on roles. A Tbox T is a set of concept inclusion axioms of the form C ⊑ D, where C and D are concept expressions. An Abox A is a set of axioms of the form a : C, R(a, b), and a ≠ b. Informally, the Rbox and the Tbox together define the ontology (i.e., the vocabulary and the semantic constraints) of a particular domain, whereas the Abox specifies facts about particular individuals in the considered domain. An interpretation I is a model of an Abox A w.r.t. a Tbox T and an Rbox R iff it satisfies all the axioms in A, R, and T (see Table 2 for the satisfiability conditions for each type of axiom). An Abox A is said to be consistent w.r.t. a Tbox T and an Rbox R iff there is a model of A w.r.t. T and R. If there is no ambiguity from the context, we simply say that A is consistent.
Axiom      Satisfiability condition
Trans(R)   (R^I)+ = R^I
R ⊑ P      ⟨x, y⟩ ∈ R^I ⇒ ⟨x, y⟩ ∈ P^I
C ⊑ D      C^I ⊆ D^I
a : C      a^I ∈ C^I
R(a, b)    ⟨a^I, b^I⟩ ∈ R^I
a ≠ b      a^I ≠ b^I

Table 2: Semantics of SHIN Axioms
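The set-theoretic semantics of Tables 1 and 2 can be checked by hand on a toy finite interpretation. A minimal sketch (all sets and names below are illustrative, not drawn from the paper's knowledgebase):

```python
# Toy interpretation I = (delta, .^I); all sets are illustrative.
delta = {"a", "b", "c"}
C_I = {"a", "b"}                     # C^I
D_I = {"a", "b", "c"}                # D^I
R_I = {("a", "b"), ("b", "c")}       # R^I

def exists_R_C(R, C):
    """(exists R.C)^I = {x | some y with (x, y) in R^I and y in C^I}."""
    return {x for (x, y) in R if y in C}

def forall_R_C(R, C, domain):
    """(forall R.C)^I = {x | every R-successor y of x lies in C^I}."""
    return {x for x in domain if all(y in C for (x2, y) in R if x2 == x)}

def holds_subsumption(C, D):
    """Tbox axiom C <= D holds in I iff C^I is a subset of D^I."""
    return C <= D
```

Note that `forall_R_C` is satisfied vacuously by individuals with no R-successors, matching the universal quantifier in Table 1.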
3.1.2 Scalability of SHIN Reasoning
A standard technique for checking the consistency of a SHIN Abox is to use a tableau algorithm [10], which executes a set of non-deterministic expansion rules to satisfy constraints in A until either no rule is applicable or an obvious inconsistency (clash) is detected. Consistency checking is a critical reasoning task in SHIN because other reasoning tasks can be reduced to it. In our previous work [7], we have shown that a summarization and refinement approach enables tableau-based SHIN reasoners to scale to very large knowledge bases, such as the one encoding the current state of the world for our security application. In SHER, scalability is achieved by avoiding direct reasoning over the large knowledge base; instead, SHER reasons over a dramatically reduced summary of the knowledge base, which enables it to effectively determine, through a process of refinement, the exact portion of the knowledge base that contains the answers to a particular query.
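The summarization idea can be sketched minimally: individuals carrying the same concept set are merged into a single summary node, shrinking the Abox a tableau reasoner must examine. This is our simplification; SHER's actual summary also merges role edges and iteratively refines the summary when spurious clashes appear:

```python
# Sketch of Abox summarization: map each individual to a summary node
# determined by its set of asserted concepts (illustrative only).
def summarize(abox_types):
    """abox_types: dict individual -> frozenset of concept names.
    Returns (mapping from individual to summary node, set of summary nodes)."""
    canon = {}      # concept set -> summary node name
    mapping = {}
    for ind, concepts in abox_types.items():
        node = canon.setdefault(concepts, f"s{len(canon)}")
        mapping[ind] = node
    return mapping, set(canon.values())
```

On a realistic Abox, many individuals share concept sets, so the summary can be orders of magnitude smaller than the original.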
3.2 Domain Ontology: Extended KANI Ontology

The domain ontology used for our intelligence and military scenario is an extension of the NIMD/KANI ontology² [9]. This approach has the advantage of leveraging an existing ontology that already models important intelligence and military concepts in a fairly extensive manner and that takes into account the temporal dimension of the modeled data. In particular, it provides, through the use of the OWL Time Ontology³, a vocabulary for expressing relations between instants, intervals, durations and date-time information. Unfortunately, the temporal vocabulary is not sufficiently constrained in SHIN itself to enable temporal reasoning with an off-the-shelf SHIN reasoner; a separate temporal reasoner is required. To the best of our knowledge, there is no sound and complete integration of a SHIN reasoner and an OWL Time temporal reasoner capable of scaling to very large knowledge bases. To work around that limitation, we discretized time by considering only discrete time instants and by introducing the next role to represent the relation between a time instant t and its successor t+1.
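The discretized-time workaround can be sketched as follows: the `next` role becomes a set of successor pairs over discrete instants, and temporal precedence is its transitive closure. The encoding below is our own illustration, not the actual extended-KANI axioms:

```python
# Discrete time instants 0..n-1 linked by the `next` role (t, t+1).
def build_next_role(n):
    """The `next` role as a set of (t, t+1) successor pairs."""
    return {(t, t + 1) for t in range(n - 1)}

def before(t1, t2, next_role):
    """t1 precedes t2 iff t2 is reachable from t1 via `next`, i.e. via
    the transitive closure of the successor role."""
    frontier, seen = {t1}, set()
    while frontier:
        t = frontier.pop()
        seen.add(t)
        frontier |= {y for (x, y) in next_role if x == t and y not in seen}
    return t2 in seen and t1 != t2
```

Declaring `next` (or a super-role of it) transitive in the Rbox gives the reasoner the same reachability for free.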
3.3 Integration of DL Knowledge Base with Security Rules

Once the state of the world is encoded using the extended KANI Ontology, it is tempting to rely on the same description logics formalism to express security rules and policies. Unfortunately, the decidability of most DLs stems, in part, from a severe restriction on the types of relations that can be expressed between individuals in the domain of discourse.

² http://ksl.stanford.edu/projects/NIMD/Kani-dl-v1.owl
³ http://www.w3.org/TR/owl-time/
In particular, most DLs, including SHIN, exhibit some form of tree model property, i.e., every satisfiable KB has a tree-shaped model. This means that rules expressing cyclic relationships between individuals cannot be encoded in SHIN without additional restrictions. For example, it is well known that the trivial uncle rule (if (P has child A) and (P has child B) and (B has child C) then (C has uncle A)) cannot be expressed in SHIN. Furthermore, SHIN is limited to the expression of unary (concept) and binary (role) predicates. In our application, it is critical to express ternary relations such as needToKnowLevel, relating an entity to its need-to-know level for a particular piece of information. These limitations of SHIN are addressed by expressing security rules as a set of simple procedural rules. Before defining the precise syntax and semantics of these rules, we briefly introduce the standard notation used for rules. A term is either a constant (a, b, c) or a variable (x, y, z). An atom is an expression of the form P(t1, · · · , tn) where the ti are terms and P is a predicate symbol. A predicate symbol P is a DL-predicate if there is a concept or a role with the same name in the extended KANI ontology; otherwise, it is a nonDL-predicate. A nonDL-atom is an atom whose predicate symbol is a nonDL-predicate. DL-predicates are either unary or binary, whereas there is no restriction on the arity of nonDL-predicates. A variable binding τ is a mapping from the set Var of variables to the set of individuals in the knowledge base and literal values. For an atom P(t1, · · · , tn), τ.P(t1, · · · , tn) denotes the atom obtained by replacing all occurrences of a variable x in P(t1, · · · , tn) by τ(x). Formally, τ.P(t1, · · · , tn) = P(τ.t1, · · · , τ.tn), where, if ti is a variable, then τ.ti = τ(ti); otherwise (i.e., ti is a constant), τ.ti = ti.
Security rules are expressed as a set of procedural rules of the form P1 ∧ · · · ∧ Pn ⇒ C, where, for each 1 ≤ i ≤ n, Pi is an atom and C is a nonDL-atom; we further require that every variable appearing in C also appears in one of the Pi. The standard procedural semantics is applied: if there is a variable binding τ such that, for each 1 ≤ i ≤ n, τ.Pi is entailed by the knowledge base (i.e., DL-knowledge-base entailment for a DL-atom, or simply checking that the nonDL-atom τ.Pi is already known to hold), then the variable-free nonDL-atom τ.C is entailed. Restricting the conclusion of a rule to a nonDL-atom, on the one hand, ensures that security rules are not used to alter the meaning of terms defined in the ontology and, on the other hand, enables a simple and efficient integration of DL reasoning and rule execution, since the execution of security rules does not affect the state of the DL knowledge base but simply reads it. For our prototype implementation, we extended the SHER reasoner [1] to support this limited form of procedural rules. The body of a rule is treated as a grounded conjunctive query to evaluate against the knowledge base (both the DL knowledge base and the set of already inferred variable-free nonDL-atoms). Solutions to the grounded conjunctive query are then used to instantiate the inferred nonDL-atom conclusion of the rule. This process is repeated until no new nonDL-atoms can be inferred. The scalability of the rule evaluation relies to a large extent on the scalability of grounded conjunctive query answering in SHER [8]. We use the same summarization and refinement approach (see Section 3.1.2), which is responsible for SHER's unique scalability over large knowledge bases, as established in [1, 7, 8].
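The repeat-until-fixpoint rule semantics just described can be sketched with a toy forward chainer (our own illustration, not SHER's implementation; here a flat fact set stands in for the entailed DL-atoms plus already-derived nonDL-atoms):

```python
# A fact or atom is a tuple (predicate, arg1, ..., argn);
# variables are strings starting with '?'.
def match(atom, fact, binding):
    """Try to extend `binding` so that atom instantiates to fact."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    b = dict(binding)
    for t, v in zip(atom[1:], fact[1:]):
        if isinstance(t, str) and t.startswith("?"):
            if b.get(t, v) != v:        # variable already bound differently
                return None
            b[t] = v
        elif t != v:                    # constant mismatch
            return None
    return b

def forward_chain(facts, rules):
    """Apply rules (body_atoms, head_atom) until no new atom is inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            bindings = [{}]
            for atom in body:           # evaluate the body as a query
                bindings = [b2 for b in bindings for f in facts
                            if (b2 := match(atom, f, b)) is not None]
            for b in bindings:          # instantiate the conclusion
                new = tuple(b.get(t, t) for t in head)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts
```

Notably, the uncle rule that is inexpressible in SHIN runs directly in this rule layer, which is exactly the division of labor the hybrid design exploits.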
3.4 Supporting Explanations

A major feature of our prototype implementation of the integration of a DL knowledge base and security rules is the ability to provide explanations for all entailed facts (e.g., an explanation of why a document is topsecret). An explanation is given in the form of a justification [12], which is a minimal set of axioms that together imply the entailed fact. This paper focuses on providing justifications that enable information downgrade and secure information sharing. We assume that a participant's model of the world (i.e., its ontology) and its policies are given and cannot be altered (as part of the downgrade process). Hence, only Abox facts and variable-free nonDL-atoms can be altered before exchanging information with other participants. A justification consists only of Abox facts and variable-free nonDL-atoms that hold before the application of any security rule. Justifications allow us to understand which information objects may be safely shared without compromising a target mission. For more details on how explanations are used, please see Section 4. The SHER reasoner already provides the tracing capability necessary to compute justifications for solutions of conjunctive query answering; we extended it to keep track of explanations during the execution of security rules.
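A justification, as a minimal entailing axiom set, can be sketched with a naive pruning loop: repeatedly drop any axiom whose removal leaves the fact entailed. This is illustrative only; SHER computes justifications via tracing rather than pruning, and pruning yields a minimal (not necessarily minimum-cardinality) set:

```python
# Naive axiom-pruning sketch of justification computation.
def justification(axioms, entails):
    """`entails(S)` must return True iff the fact of interest holds under
    axiom set S. Returns a minimal subset of `axioms` that still entails
    the fact (assuming entails(axioms) is True and entails is monotone)."""
    just = list(axioms)
    for ax in list(just):
        trial = [a for a in just if a != ax]
        if entails(trial):              # fact survives without ax: drop it
            just = trial
    return just
```

In our setting, `axioms` would range over Abox facts and variable-free nonDL-atoms, since the ontology and policies are fixed.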
Fighting ⊆ kani:Violence ∩ kani:Meeting
Fighting ⊆ ∀kani:hasPlace.FightingSite
Terrorist ⊆ kani:Person
Spy ⊆ kani:Person
TerroristOrganization = kani:Organization ∩ ∀kani:memberOf.Terrorist
LandMine ⊆ ExplosiveWeapon ⊆ kani:Weapon
Coordinate ⊆ subgrid ⊆ grid ⊆ kani:GeoPlace
HazardousGeoPlace ⊆ kani:GeoPlace
LandMineSite = HazardousGeoPlace ∩ ∃contains.LandMine

Table 3: Sample Extensions to KANI/NIMD Ontology

InformationEnvelope IE {
  about(IE, meeting06)
  kani:Meeting(meeting06)
  kani:hasPlace(meeting06, Kaboul)
  kani:participant(meeting06, BenLaden)
  provenance(IE, JamesBond)
  knownTo(IE, MI6)
}

Table 4: Enriched Security Metadata
3.5 Global Awareness Module

A Global Awareness Module is responsible for updating the knowledgebase as conditions on the ground change. This module is equipped to read event feeds from multiple sources of information (e.g., spies, coalition partners, local and foreign media, etc.) and update the current state of the world (e.g., insurgent locations, landmine sites, weather conditions, terrain information, etc.). In doing so, we note that some types of information are mostly static (e.g., terrain information, landmine sites); other types of information may be dynamic but amenable to real-time monitoring (e.g., weather conditions); and the rest are not only highly dynamic but also amenable only to partial, erroneous and delayed monitoring (e.g., insurgent locations). One could use sophisticated models to predict weather conditions and track insurgent locations in such dynamic tactical settings; however, such models are outside the scope of this paper. We assume that the latest information (on, say, weather conditions or insurgent locations) continues to hold unless and until it is explicitly updated (say, by a weather satellite or a spy).
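The last-value-holds assumption above amounts to keeping, per fact, only the most recently reported value. A minimal sketch (class and field names are ours, not the module's actual interface):

```python
# Last-value-holds store: the latest report for each fact persists
# until explicitly overwritten by a newer report.
class WorldState:
    def __init__(self):
        self._latest = {}   # fact key -> (timestamp, value)

    def update(self, key, timestamp, value):
        """Record a report; ignore it if a newer report already exists
        (feeds may deliver events out of order)."""
        t, _ = self._latest.get(key, (None, None))
        if t is None or timestamp >= t:
            self._latest[key] = (timestamp, value)

    def current(self, key):
        """The latest reported value, which continues to hold until
        updated; None if nothing was ever reported."""
        return self._latest.get(key, (None, None))[1]
```

Out-of-order delivery is the one subtlety: a late-arriving older report must not clobber a newer one, hence the timestamp check.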
4. EVALUATION: CASE STUDY

4.1 Information Ontology and Metadata

In the proposed approach, two key ingredients in supporting flexible and dynamic information sharing policies are a domain-specific information ontology and enriched security metadata. In particular, we use an extended version of the KANI ontology (Knowledge Associates for Novel Intelligence [9]) to build a knowledgebase that captures the current state of the world. Table 3 describes sample extensions to the NIMD/KANI ontology. For example, we have introduced a new concept Fighting that is defined as a conjunction of two concepts defined in the KANI ontology, namely, Meeting and Violence (i.e., fighting is defined as a violent meeting). Further, the concept Fighting is associated with a geo location FightingSite. Similarly, one may define a TerroristOrganization as a specialization of an Organization all of whose members are Terrorists. In addition to defining ontological relationships between various concepts, we tag information with rich metadata. Table 4 describes the metadata associated with a Meeting. All information items are by default associated with a TimeSlice; in addition, a Meeting includes descriptive attributes, such as the meeting location and participants, and security attributes, such as provenance. A Meeting may also be related (e.g., via knownTo) to other information objects (e.g., an Organization named MI6). For a detailed description of rich security metadata for information sharing in coalition settings, please refer to [23].
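The enriched security metadata of Table 4 can be represented as a plain data structure. The sketch below is illustrative only: field names follow the paper's vocabulary, but the class layout is our assumption, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class InformationEnvelope:
    """Sketch of an information envelope carrying both descriptive
    attributes and security attributes (cf. Table 4)."""
    about: str                                      # subject individual
    provenance: str                                 # who reported it
    known_to: set = field(default_factory=set)      # entities already aware of it
    attributes: dict = field(default_factory=dict)  # descriptive facts

ie = InformationEnvelope(
    about="meeting06",
    provenance="JamesBond",
    known_to={"MI6"},
    attributes={"kani:hasPlace": "Kaboul",
                "kani:participant": ["BenLaden"]},
)
print(ie.about, sorted(ie.known_to))
```

Separating descriptive attributes from security attributes (provenance, knownTo) is what lets the policies below inspect one without the other, e.g., when participants are deleted during a downgrade.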
4.2 Information Sharing Policies
In this section, we describe several illustrative information sharing policies that demonstrate the benefits of our proposal over traditional approaches. In particular, we demonstrate the following capabilities of our approach: (i) encoding automated sensitivity decay with space, time and external events, (ii) encoding need-to-know based access control policies, and (iii) reasoning over shareability with the goal of identifying suitable information downgrades. In general, our information sharing policies are structured as follows. First, the policies examine the sensitivity of the shared object in a dynamic setting and map it to a current sensitivity level. Second, the policies examine the confirmed state of the recipient (e.g., spatial coordinates) to determine a need-to-know level. Shareability decisions take into account both the current sensitivity level of the shared object and the need-to-know level of the target recipient. In the examples below, we start with simpler policies that examine only the sensitivity level of the shared object; later examples delve into more sophisticated need-to-know based policies, conditional access control policies (e.g., the Chinese wall policy), and explanations and diagnostics on shareability.
Simple policy: We first describe very simple policies that exploit the richness of our vocabulary (using the extended NIMD/KANI ontology) to construct expressive access control policies. The policies described below deduce the sensitivity of events based on the mission that generated the events and the participants in the mission. /* All events generated by a high value target (HVT) mission are secret */ InformationEnvelope(IE) ∧ about(IE, m) ∧ HVTMission(m) ∧ hasEvent(m, e) ⇒ secret(e);
/* Further, if a special force is conducting the HVT mission, then the events are top secret */ InformationEnvelope(IE) ∧ about(IE, m) ∧ HVTMission(m) ∧ hasEvent(m, e) ∧ conductor(m, f ) ∧ SpecialForce(f ) ⇒ topsecret(e);
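The two HVT policies above can be evaluated by straightforward forward chaining over ground facts. The sketch below is a toy evaluator written by us for illustration; predicate and rule names follow the paper, but the engine is not the paper's hybrid reasoner.

```python
def classify_events(facts):
    """Apply the two HVT rules: events of an HVT mission are secret,
    upgraded to topsecret when a special force conducts the mission."""
    missions = {f[1] for f in facts if f[0] == "HVTMission"}
    special = {f[1] for f in facts if f[0] == "SpecialForce"}
    labels = {}
    for f in facts:
        if f[0] != "hasEvent":
            continue
        m, e = f[1], f[2]
        if m not in missions:
            continue
        labels[e] = "secret"
        # Upgrade rule: conductor of the mission is a special force.
        if any(g[0] == "conductor" and g[1] == m and g[2] in special
               for g in facts):
            labels[e] = "topsecret"
    return labels

facts = [("HVTMission", "m1"), ("hasEvent", "m1", "e1"),
         ("conductor", "m1", "sf9"), ("SpecialForce", "sf9")]
print(classify_events(facts))  # {'e1': 'topsecret'}
```

Dropping the SpecialForce fact from the knowledgebase leaves the event at the secret level, exactly as the first rule alone would dictate.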
Sensitivity decay over space and time: As argued earlier in the paper, static MLS-like labels do not adequately capture the dynamic nature of tactical information. The examples below illustrate constructs that capture sensitivity decay with space and time: an insurgent location at the time of report is secret; after a week its sensitivity is downgraded to confidential; on the spatial dimension, insurgent locations specified at the granularity of coordinates are more sensitive than those specified at coarser granularity (such as subgrid and grid: see Table 3). We note that while the information envelope IE may remain the same, our semantic reasoner interprets the information envelope at various time instances as different entities. This helps us retract statements without resorting to a complex non-monotonic reasoner; that is, an information envelope may be topsecret at the time of creation and may cease to remain topsecret over time.

/* All fresh (less than one week old) insurgent location information is secret */
InformationEnvelope(IE) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ reportedAt(IE, treport) ∧ underOneWeek(treport, tnow) ⇒ secret(IE, tnow);

/* All insurgent locations are downgraded to confidential level after a week */
InformationEnvelope(IE) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ reportedAt(IE, treport) ∧ overOneWeek(treport, tnow) ⇒ confidential(IE, tnow);

/* Insurgent locations at the granularity of coordinates, subgrids and grids are top secret, secret and confidential respectively */
InformationEnvelope(IE) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ isCoordinates(l) ⇒ topsecret(IE);
InformationEnvelope(IE) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ isSubgrid(l) ⇒ secret(IE);
InformationEnvelope(IE) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ isGrid(l) ⇒ confidential(IE);

/* Insurgent locations at the granularity of a grid and over a week old are unclassified */
InformationEnvelope(IE, tnow) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ reportedAt(IE, treport) ∧ overOneWeek(treport, tnow) ∧ isGrid(l) ⇒ unclassified(IE, tnow);

Situation-aware sensitivity decay: In several cases, the sensitivity of information may be influenced by the situation. In the example shown below, an insurgent location is topsecret when it is freshly reported by a spy (because a spy is a strategic asset). The moment the coalition forces start fighting the insurgents (in an attempt to sanitize the insurgent location), the sensitivity level of the insurgent location may be downgraded. We note that the insurgent location is tactical information which can be decoupled from the initial information source. The key intuition here is that the objectives which required the information to be protected at the topsecret level have already been achieved by the time fighting starts; had the information leaked, the insurgents might have relocated or prepared themselves better to engage in combat operations. Extending the argument, the sensitivity level may be further downgraded after the fighting is successfully completed and the forces perform a humanitarian mission in what was previously an insurgent location. For notational convenience, in the policies described below, we drop the timestamp associated with information envelope IE.

/* Insurgent location l is top secret when it is reported by a spy */
InformationEnvelope(IE) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ provenance(s, IE) ∧ spy(s) ⇒ topsecret(IE);

/* Insurgent location is downgraded to secret when forces are deployed to fight insurgents in l */
InformationEnvelope(IE) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ Fighting(f) ∧ locatedAt(f, l) ⇒ secret(IE);

/* When a humanitarian mission operates in l after the fight, information about location l is further downgraded to unclassified */
InformationEnvelope(IE) ∧ about(IE, l) ∧ InsurgentLocation(l) ∧ HumanitarianMission(m) ∧ locatedAt(m, l) ⇒ unclassified(IE);

Need-To-Know policy: So far, the policies have only examined the sensitivity level of the shared information. The following examples examine the state of the target recipient to determine his/her need-to-know. For the sake of simplicity, we categorize need-to-know into three discrete levels: ntkHigh, ntkMedium, and ntkLow.

/* Landmine site information is by default topsecret */
InformationEnvelope(IE) ∧ about(IE, l) ∧ LandmineSite(l) ⇒ topsecret(IE);

/* The need-to-know level depends on the geo location of the target entity */
locatedAt(r, l) ∧ LandmineSite(l) ∧ Entity(r) ∧ isSubgrid(l) ⇒ ntkHigh(IE, r);
locatedAt(r, l) ∧ LandmineSite(l) ∧ Entity(r) ∧ isGrid(l) ⇒ ntkMedium(IE, r);
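The temporal and spatial decay rules for insurgent locations can be collapsed into a single function. A caveat: the paper only states the individual rules, so the combined cases below (e.g., stale coordinates decaying one level to secret) are our interpolation of how the rules compose, offered as an illustrative sketch.

```python
from datetime import datetime, timedelta

def sensitivity(granularity, reported_at, now):
    """Sensitivity of an insurgent-location report as a function of report
    age (one-week threshold) and spatial granularity. The one-level-per-week
    decay at each granularity is an assumption, not stated in the paper."""
    fresh = (now - reported_at) < timedelta(weeks=1)
    if granularity == "coordinates":
        return "topsecret" if fresh else "secret"
    if granularity == "subgrid":
        return "secret" if fresh else "confidential"
    if granularity == "grid":
        return "confidential" if fresh else "unclassified"
    raise ValueError(f"unknown granularity: {granularity}")

now = datetime(2008, 6, 15)
print(sensitivity("grid", datetime(2008, 6, 1), now))          # unclassified
print(sensitivity("coordinates", datetime(2008, 6, 14), now))  # topsecret
```

The two endpoints the paper does state explicitly, fresh coordinates being topsecret and a stale grid-level report being unclassified, fall out of the same function.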
/* Rules for shareability depend upon sensitivity and need-to-know */ topsecret(IE) ∧ ntkHigh(IE, r) ⇒ shareable(IE, r); topsecret(IE) ∧ ntkMedium(IE, r) ⇒ notshareable(IE, r);
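One way to read the shareability rules is as a comparison between the object's sensitivity and a clearance ceiling implied by the recipient's need-to-know band. The sketch below makes that reading explicit; note that the paper only gives the two topsecret rules, so the full ceiling mapping is our assumption.

```python
# Assumed mapping from need-to-know band to the highest sensitivity the
# recipient may receive; only the topsecret rows are stated in the paper.
NTK_CEILING = {"ntkHigh": "topsecret", "ntkMedium": "secret", "ntkLow": "confidential"}
ORDER = ["unclassified", "confidential", "secret", "topsecret"]

def shareable(sensitivity_level, ntk):
    """Share iff the object's sensitivity does not exceed the recipient's ceiling."""
    return ORDER.index(sensitivity_level) <= ORDER.index(NTK_CEILING[ntk])

print(shareable("topsecret", "ntkHigh"))    # True
print(shareable("topsecret", "ntkMedium"))  # False
```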
Chinese wall policy: In several cases, information sharing decisions may be conditioned on what information is already known to the recipient. For example, if the recipient knows the location of a spy at time tpast, then we may not want to reveal subsequent locations of the spy, to prevent the recipient from using location tracking techniques to infer possible destinations and/or sensitive targets.

/* Does recipient r know location information about a spy in the past? */
InformationEnvelope(IE′, tpast) ∧ about(IE′, s) ∧ spy(s) ∧ knownTo(IE′, r) ⇒ pastLoc;

/* If yes, hold back subsequent location information about the spy for one month */
pastLoc ∧ InformationEnvelope(IE, tnow) ∧ about(IE, s) ∧ lessOneMonth(tpast, tnow) ⇒ notshareable(IE, r);

There may be other cases wherein the recipient may obtain information from other third parties. For instance, the identities of spies that are co-located (say, within the same grid) with a suspicious meeting may be topsecret.

/* A meeting is suspicious if one of the participants is a known terrorist */
Meeting(m) ∧ Participant(m, p) ∧ Terrorist(p) ⇒ SuspiciousMeeting(m);

/* All spy information co-located with the meeting is topsecret */
InformationEnvelope(IE′) ∧ about(IE′, m) ∧ SuspiciousMeeting(m) ∧ knownTo(IE′, r) ⇒ knownMeeting;
knownMeeting ∧ InformationEnvelope(IE) ∧ about(IE, s) ∧ locatedAt(s, l) ∧ locatedAt(m, l) ∧ isGrid(l) ⇒ topsecret(IE);

Further, the fact that the meeting information is known to the recipient r may be deduced indirectly. For example, knownTo(IE′, r) may be inferred from the following facts in the knowledgebase:

/* Meeting m is reported by CNN and CNN is a public source */
InformationEnvelope(IE) ∧ about(IE, m) ∧ Meeting(m) ∧ provenance(IE, CNN);
PublicSource(CNN);

And the accompanying rules:

/* All statements by a public source are assumed to be known to all entities */
provenance(IE, src) ⇒ says(IE, src);
PublicSource(src) ∧ says(IE, src) ⇒ ∀r, knownTo(IE, r);

Explanations for non-shareability: In the simple example shown below, the sensitivity level of a meeting is defined in relationship to its participants and location. Supposing that the recipient were only secret cleared and the meeting were topsecret, our approach provides the sender a reason for non-shareability (namely, the participant list includes a terrorist). Now, the sender may downgrade the meeting information (and encase the downgraded version in a new envelope IE′) by removing the participant list from the meeting. Such an operation is meaningful assuming that the recipient has no means of inferring the participants given other details about the meeting (e.g., location, time, etc.).

/* Meeting m is top secret if one of its participants is a terrorist */
InformationEnvelope(IE) ∧ about(IE, m) ∧ Meeting(m) ∧ Participant(m, p) ∧ Terrorist(p) ⇒ topsecret(IE);

To illustrate this argument better, let us examine a meeting whose sensitivity is secret (see the example below), namely, a meeting whose participant is from an evil country and whose location is deemed hazardous. If the recipient were only confidential cleared, then the sender has two choices: remove the participant list or remove the meeting location. Now, if the meeting location were otherwise known to the recipient (say, the information is known to some colluder r′), then the latter option is not feasible. Indeed, even on deleting the meeting location from IE, our reasoner would conclude that the sensitivity level of IE is secret, citing the following reasons: the meeting location is known to r′, and the target recipient r and entity r′ are known colluders.

/* A meeting's sensitivity level may depend upon the nationality of the participants and the meeting location */
InformationEnvelope(IE) ∧ about(IE, m) ∧ Meeting(m) ∧ Participant(m, p) ∧ Nationality(p, foe) ∧ locatedAt(m, l) ∧ HazardousGeoPlace(l) ⇒ secret(IE);
InformationEnvelope(IE) ∧ about(IE, m) ∧ Meeting(m) ∧ Participant(m, p) ∧ Nationality(p, foe) ⇒ confidential(IE);
InformationEnvelope(IE) ∧ about(IE, m) ∧ Meeting(m) ∧ locatedAt(m, l) ∧ HazardousGeoPlace(l) ⇒ confidential(IE);

/* Meeting m (and its location) is known to entity r′, who is a known colluder with r */
InformationEnvelope(IE) ∧ about(IE, m) ∧ Meeting(m) ∧ locatedAt(m, l) ∧ knownTo(IE, r′);
Colluders(r, r′);

/* Collusion is a symmetric relationship; any information known to an entity is assumed to be known to all her colluders */
InformationEnvelope(IE) ∧ knownTo(IE, r′) ∧ Colluders(r, r′) ⇒ knownTo(IE, r);
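The explanation-driven downgrade workflow can be sketched as a loop: ask the reasoner why the envelope is too sensitive, delete one sender-held Abox fact cited in the justification, and re-check. The `classify` and `justify` functions below are toy stand-ins for the hybrid reasoner's interfaces, not its actual API.

```python
ORDER = ("unclassified", "confidential", "secret", "topsecret")

def classify(facts):
    """Toy classifier: topsecret iff a known terrorist is a participant."""
    if ("Participant", "m06", "BenLaden") in facts and \
            ("Terrorist", "BenLaden") in facts:
        return "topsecret"
    return "confidential"

def justify(facts):
    """Toy justification: the culprit facts behind the topsecret label."""
    return [("Participant", "m06", "BenLaden"), ("Terrorist", "BenLaden")]

def downgrade_until_shareable(envelope_facts, clearance):
    facts = set(envelope_facts)
    while ORDER.index(classify(facts)) > ORDER.index(clearance):
        removable = [f for f in justify(facts) if f in facts]
        if not removable:
            return None          # nothing deletable (e.g., the colluder case)
        facts.discard(removable[0])  # e.g., drop the participant list
    return facts

meeting = {("Meeting", "m06"), ("Participant", "m06", "BenLaden"),
           ("Terrorist", "BenLaden"), ("hasPlace", "m06", "Kaboul")}
downgraded = downgrade_until_shareable(meeting, "secret")
print(("Participant", "m06", "BenLaden") in downgraded)  # False
```

The loop deletes the participant fact cited by the justification, and the downgraded envelope (location intact, participants removed) becomes shareable with a secret-cleared recipient; when nothing in the justification is deletable by the sender, as in the colluder scenario above, the loop reports failure instead.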
5. RELATED WORK
Several papers have examined logic-based information sharing systems, typically using constraint datalog rules [4, 3, 16, 29]. Such systems have the ability to make access control decisions based upon the state of the world. For instance, one could use constraint datalog to grant access based on user clearance levels and document classification levels as follows:

grant(U, D) :- user(U), document(D), clearance(U, L1), level(D, L2), L1 ≥ L2

However, such systems use a rigid and opaque classification of documents to drive access control decisions (e.g., clearance and level in the example above). Our approach focuses on making the document classification step itself more flexible and transparent to the access control system. The novelty of our approach resides in the combination of expressive description logics, which provide finer-grained descriptions of the relevant state of the world and documents, with a standard constraint datalog system that drives automatic classification of potentially sensitive data and access control decisions. In particular, our approach addresses two limitations (structural and semantic) of the datalog approach. The structural limitation may be partly alleviated in the datalog approach by specifying relationships between objects as predicates (see the example below). However, the semantics of these relationships are limited to those expressible in constraint datalog rules. A Description Logic based approach, on the other hand, allows us to exploit the vast semantics embedded in various ontologies and thus significantly advance expressiveness beyond that provided by constraint datalog rules.

level(doc, secret) :- created(doc, T), lessThanOneWeekAgo(T)
level(doc, confidential) :- created(doc, T), moreThanOneWeekAgo(T)
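The two datalog snippets above, the grant rule and the time-driven level rules, transcribe directly into a few lines of code. This sketch is purely illustrative of the datalog baseline being discussed, not of our system:

```python
from datetime import datetime, timedelta

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "topsecret": 3}

def grant(user_clearance, doc_level):
    # grant(U, D) :- clearance(U, L1), level(D, L2), L1 >= L2
    return LEVELS[user_clearance] >= LEVELS[doc_level]

def level(doc_created_at, now):
    # level(doc, secret)       :- created(doc, T), lessThanOneWeekAgo(T)
    # level(doc, confidential) :- created(doc, T), moreThanOneWeekAgo(T)
    return "secret" if now - doc_created_at < timedelta(weeks=1) else "confidential"

now = datetime(2008, 6, 15)
print(grant("secret", level(datetime(2008, 6, 14), now)))        # True
print(grant("confidential", level(datetime(2008, 6, 14), now)))  # False
```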
Kapadia et al. [13] investigated the problem of determining why an access may be denied. The key idea is to develop a framework using ordered binary decision diagrams (OBDDs) to explain denied accesses. The paper focuses on a cost function that trades off the confidentiality of the policies against the usability of explanations. While our system shares the same goal of explaining access control and/or information shareability decisions, our system differs notably from [13]
by its ability to provide finer-grained and more expressive descriptions to drive the automatic classification of sensitive information. In addition to tracing justifications through security policies, our hybrid reasoner also traces justifications through the expressive and large DL knowledge base representing the state of the world and rich metadata about potentially sensitive documents. In the past we have used a similar DL based approach in other contexts, such as clinical trials at Columbia medical center and semantic text cleanup [1]. Our past experiences provide ample anecdotal evidence of the scalability and usability of this approach. In particular, the proposed approach can integrate one or more off-the-shelf ontologies (e.g., KANI); further, there are several tools available for modifying and authoring ontologies. Also, our approach can leverage tools [11] that automatically extract metadata from text (using natural language processing techniques). Similar to distributed proof systems [4, 29], our system does not assume a centralized policy repository and/or knowledge base. Each participant maintains a private policy set and a knowledgebase representing its view of the relevant state of the world. The architecture shown in Figure 2 is replicated at every decision center. However, in our current implementation based on SHER [1], the expressive DL knowledge base within a given decision center is not distributed. Since SHER operates on top of a relational database, the database layer could conceptually be virtualized to operate over a distributed database; hence it could support distributed reasoning. However, it is not clear whether SHER would retain its performance characteristics in such a distributed setting, due to the non-locality of data access when reasoning over very expressive DLs. To the best of our knowledge, there is no scalable distributed solution for sound and complete reasoning over very expressive DLs.
We believe that a tradeoff has to be made. In our current solution, we opted for more expressive power, soundness and completeness. If one of these dimensions is less of a requirement, then scalable distributed reasoning is possible (e.g., using DL-Lite, a lightweight version of OWL DL). Finally, we note that our decision support system is sound under incomplete information as a direct consequence of the monotonicity of the combination of OWL-DL and datalog rules; that is, additional information cannot invalidate previously inferred conclusions, with the caveat that the system may infer more than one classification level for a single information object, in which case a meta rule selects the more restrictive one. Additional information only downgrades the sensitivity level of an information object; hence, with incomplete information the decision support system makes a more conservative assessment of shareability. However, in the presence of malicious or incorrect information, soundness cannot be guaranteed.
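The meta rule mentioned above is a one-liner once classification levels are totally ordered: when monotonic rules derive several levels for the same object, the effective level is the maximum. A minimal sketch:

```python
# Classification lattice, least to most restrictive.
ORDER = ["unclassified", "confidential", "secret", "topsecret"]

def effective_level(inferred_levels):
    """Meta rule: of all levels the monotonic rules derived for one object,
    keep the most restrictive."""
    return max(inferred_levels, key=ORDER.index)

print(effective_level(["confidential", "secret"]))  # secret
```

Because rule application is monotonic, adding facts can only add levels to the input list, so the effective level under incomplete information is never less restrictive than warranted.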
6. CONCLUSION AND FUTURE WORK

In this paper, we have examined a description logic (DL) based approach to risk optimized information sharing. The key idea is to build DL predicates over enriched security metadata and a knowledgebase that semantically encodes the state of the world as observed by the decision maker. In particular, we have used the security metadata developed in [23] and an extended version of the KANI ontology (Knowledge Associates for Novel Intelligence [9]) to build the knowledgebase. We have shown that using this approach, one can support scalable DL reasoning that not only encodes sophisticated spatial-temporal sensitivity decay and situation-aware access control policies but also supports explanations [7] indicating why an object is or is not shareable. We have shown that such explanations are very useful indicators for suitably downgrading information with the goal of making it shareable with a target recipient. In this paper we have focused exclusively on information sharing from the sender's perspective. We note that a similar approach (one that exploits domain specific information semantics and rich security metadata) may be used by an information consumer. For instance, the information consumer may use DL rules to determine the veracity of an information object based on its provenance and the current state of the world (e.g., relevant events from other information sources, spatial-temporal coordinates of the information source, etc.). Such deductions on information veracity may be used by the information consumer for risk optimized decision-making (e.g., mission planning and adaptation) in a tactical setting. However, our experience with DL and KANI has also exposed at least three limitations. First, a DL based approach continues to make a Boolean (0/1) decision on shareability and is thus not directly amenable to making fuzzy decisions on risk-based information sharing [6]. As a next step, we will investigate probabilistic [14][15] and fuzzy logic [24][25] extensions of DL to support risk-based information sharing under uncertainty, vagueness and imprecision. Second, our current approach requires that the knowledgebase under consideration be devoid of conflicts. In reality, maintaining consistent and up-to-date state-of-the-world information (e.g., InsurgentLocations, knownTo, Colluders, etc.) may be infeasible, especially if the decision maker obtains relevant information from geographically disparate sources with varying degrees of trustworthiness.
As future work, we will address this challenge ground-up by developing policies that operate under uncertainty (e.g., delayed, incorrect, biased or malicious updates) over information contained in the knowledgebase. Investigating a probabilistic approach is clearly one way to tackle this problem. Another alternative is to explore paraconsistent logics [17] to support reasoning in the face of apparent inconsistencies. Finally, our current logic based approach does not support information transforms (downgrades and fusion) as first class entities. Our current approach provides explanations over shareability; it also suggests information transforms that modify/delete information attributes (e.g., participants in a meeting) in order to downgrade information. However, it does not support automated downgrades using more sophisticated functional transforms (e.g., decreasing image resolution, statistical aggregators), disinformation (e.g., false information intended to misguide the recipient), etc. We intend to develop an information transform logic that allows DL to reason over sophisticated information transforms on the target object.
Acknowledgments

The views and conclusions expressed in this document are the personal opinions of the authors and should, in no way, be interpreted as representing the views of their respective organizations (IBM, DSTL, and CESG). This research was sponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defence and was accomplished under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. The third author would like to acknowledge Nev Zunic and Morton Swimmer for sharing their experiences in developing a Data Centric Security Model at IBM, which provided some of the motivation for the approach used in this work.

7. REFERENCES

[1] SHER: Scalable highly expressive reasoner. http://www.alphaworks.ibm.com/tech/sher.
[2] F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. Patel-Schneider. The Description Logic Handbook. Cambridge University Press, 2003.
[3] L. Bauer, S. Garriss, and M. K. Reiter. Distributed Proving in Access Control Systems. In IEEE Symposium on Security and Privacy, 2005.
[4] M. Y. Becker and P. Sewell. Cassandra: Distributed Access Control Policies with Tunable Expressiveness. In POLICY, 2004.
[5] D. E. Bell and L. J. LaPadula. Secure Computer Systems: Mathematical Foundation. Technical Report 2547, vol. 1, MITRE Corporation, 1973.
[6] P.-C. Cheng, P. Rohatgi, C. Keser, P. Karger, G. Wagner, and A. Reninger. Fuzzy Multi-Level Security: An Experiment on Quantified Risk-Adaptive Access Control. In Proceedings of the 2007 IEEE Symposium on Security and Privacy (S&P 2007), pages 222–230. IEEE Computer Society, 2007.
[7] J. Dolby, A. Fokoue, A. Kalyanpur, A. Kershenbaum, E. Schonberg, K. Srinivas, and L. Ma. Scalable semantic retrieval through summarization and refinement. In AAAI, pages 299–304, 2007.
[8] J. Dolby, A. Fokoue, A. Kalyanpur, L. Ma, E. Schonberg, K. Srinivas, and X. Sun. Scalable grounded conjunctive query evaluation over large and expressive knowledge bases. In International Semantic Web Conference, pages 403–418, 2008.
[9] R. Fikes, D. Ferrucci, and D. Thurman. Knowledge Associates for Novel Intelligence (KANI). https://analysis.mitre.org/proceedings/Final_Papers_Files/174_Camera_Ready_Paper.pdf, 2005.
[10] I. Horrocks, U. Sattler, and S. Tobies. Reasoning with individuals for the description logic SHIQ. In Proc. of 17th Int. Conf. on Automated Deduction, pages 482–496, 2000.
[11] C.-M. Karat, J. Karat, and C. Brodie. SPARCLE Policy Management Workbench. http://domino.research.ibm.com/comm/research_projects.nsf/pages/sparcle.index.html.
[12] A. Kalyanpur. Debugging and Repair of OWL-DL Ontologies. PhD thesis, University of Maryland, 2006. https://drum.umd.edu/dspace/bitstream/1903/3820/1/umi-umd-3665.pdf.
[13] A. Kapadia, G. Sampemane, and R. H. Campbell. Know Why Your Access Was Denied: Regulating Feedback for Usable Security. In 11th ACM Conference on Computer and Communication Security (CCS), 2004.
[14] D. Koller, A. Y. Levy, and A. Pfeffer. P-CLASSIC: A tractable probabilistic description logic. In AAAI/IAAI, pages 390–397, 1997.
[15] T. Lukasiewicz. Probabilistic description logics for the semantic web. http://www.kr.tuwien.ac.at/staff/lukasiew/rr0605.pdf, 2007.
[16] M. Y. Becker, C. Fournet, and A. D. Gordon. Design and Semantics of a Decentralized Authorization Language. In 20th IEEE Computer Security Foundations Symposium (CSFW), 2007.
[17] Y. Ma, P. Hitzler, and Z. Lin. Paraconsistent reasoning for expressive and tractable description logics. In Description Logics, 2008.
[18] C. McCollum, J. Messing, and L. Notargiacomo. Beyond the Pale of MAC and DAC—Defining New Forms of Access Control. In Proceedings of the 1990 IEEE Symposium on Security and Privacy (S&P 1990), pages 190–200. IEEE Computer Society, 1990.
[19] A. Myers and B. Liskov. Complete, Safe Information Flow with Decentralized Labels. In Proceedings of the 1998 IEEE Symposium on Security and Privacy (S&P 1998), pages 186–197. IEEE Computer Society, 1998.
[20] JASON Program Office. HORIZONTAL INTEGRATION: Broader Access Models for Realizing Information Dominance. Special Report JSR-04-13, MITRE Corporation, 2004.
[21] D. Roberts, G. Lock, and D. Verma. Holistan: A Futuristic Scenario for International Coalition Operations. In 4th Intl. Conference on Knowledge Systems for Coalition Operations (KSCO), 2007.
[22] M. Srivatsa, D. Agrawal, and S. Balfe. A metadata calculus for securing information flows. In Proceedings of the 26th Army Science Conference (ASC), 2008.
[23] M. Srivatsa, P. Rohatgi, S. Balfe, and S. Reidt. Securing information flows: A metadata framework. In Proceedings of the 1st IEEE Workshop on Quality of Information for Sensor Networks (QoISN), 2008.
[24] U. Straccia. A fuzzy description logic. In AAAI/IAAI, pages 594–599, 1998.
[25] U. Straccia. Towards a fuzzy description logic for the semantic web. In ESWC, pages 167–181, 2005.
[26] N. Swamy, B. J. Corcoran, and M. Hicks. Fable: A language for enforcing user-defined security policies. In IEEE Symposium on Security and Privacy, 2008.
[27] N. Swamy and M. Hicks. Verified enforcement of automaton-based information release policies. In Proceedings of the 2008 ACM SIGPLAN Workshop on Programming Languages and Analysis for Security (PLAS), 2008.
[28] J. Vaughan and S. Zdancewic. A Cryptographic Decentralized Label Model. In Proceedings of the 2007 IEEE Symposium on Security and Privacy (S&P 2007), pages 192–206. IEEE Computer Society, 2007.
[29] M. Winslett, C. C. Zhang, and P. A. Bonatti. PeerAccess: A Logic for Distributed Authorization. In 12th ACM Conference on Computer and Communication Security (CCS), 2005.