Run-Time Detection of Covert Channels

Naoyuki Nagatou and Takuo Watanabe
Department of Computer Science, Graduate School of Information Science and Engineering, Tokyo Institute of Technology
2-12-1 Oookayama, Meguro-ku, Tokyo, 152-8552, Japan
[email protected] [email protected]

ABSTRACT

We are interested in characterizing the policies that can be enforced by execution-monitoring mechanisms equipped with an extra structure, an extension of Schneider's enforcement mechanisms; this paper is a starting point for continuing work in this area. We use an emulator as the extra structure: to detect several covert channels at run time, it emulates the behavior of a system by running a subsequence of an interleaved state sequence of processes. We then define a security automaton for this extended mechanism and show the class of properties that it enforces. Further, our mechanism can enforce information flow policies, specified by system developers, under an information flow property defined for the purposes of this study. We show that this information flow property includes O'Halloran's Noninference. At the end of the paper we give a simple example of such a policy and an outline of our mechanism.

1 Introduction

A covert channel is a mechanism that can be used to transfer information from one user of a system to another by means not intended for that purpose by the system developers. Detecting covert channels is therefore important for developing secure systems. Under the TCSEC [3], Covert Channel Analysis (CCA) [4, 10] is required at evaluation class B2 and higher, and being able to perform a CCA would benefit the development of all secure systems, not just those for which it is required. Covert channels can be characterized in a variety of ways, based on the mechanisms they use. For example, shared storage channels use shared storage resources of a system, while timing channels exploit an attribute of a system resource (e.g., CPU time) by modulating their own use of that resource (in this case, the CPU). Channels such as these arise from the difficulty of mapping all entities in a system to access control primitives. An access policy specifies the access rights that a subject holds for an object. For example, if a subject s has write access to an object o, the access right is written as the 3-tuple (s, o, write). An access policy is then simply a set of access rights. It should be emphasized, however, that determining a system's subjects, objects and access rights is not as trivial as it may seem; it is difficult to determine them correctly for system resources such as the CPU. In Section 3 we look at this in more detail.

Many researchers have proposed information flow properties, which specify restrictions on a system's input/output relation to assist in ruling out nonsecure implementations. Such properties deal better with confidentiality for shared storage channels. F. B. Schneider [15] investigated a class of enforcement mechanisms called Execution Monitoring (EM), which work by monitoring each individual execution step of a target at run time and terminating the target if it violates the security policy. He characterized the class of security policies enforceable by mechanisms from EM and stated that this class is a subset of Lamport's safety properties. Unfortunately, information flow properties cannot be enforced by monitors from EM because they cannot be expressed [11, 17] in Alpern and Schneider's formulation of properties [1].

In light of this, we consider monitors extended with an extra structure. To detect several covert channels at run time, we use an emulator as the extra structure. The emulator emulates the behavior of a system by running a subsequence of an interleaved state sequence of processes. We define a security automaton for this concept and analyze the characterization of the policies enforced by our monitor. We then define an invariant predicate that expresses a relation between information from the emulator and information from the real system. An outline of this monitor is given in Section 6. The property specified by this predicate includes O'Halloran's [13] Noninference, and it includes Goguen and Meseguer's [7] Noninterference on a deterministic system if we assume that no output event can be generated when there is no input event. We show that our monitor can enforce a policy, specified by developers, on observed targets. It can then terminate a receiver process that leaks confidential information through a covert channel; sender processes, however, are not terminated.

The structure of this paper is as follows. Related work is reviewed in Section 2. In Section 3 we illustrate a simple example that gives rise to a covert channel and explain general information flow properties. In Section 4 we describe the class of enforceable security properties according to Schneider's definition and explain the monitor extended with an extra structure. In Section 5 we show how covert channels are detected at run time. In Section 6 we give an overview of our monitor. Section 7 concludes the paper and discusses some remaining issues.

[Figure 1. A taxonomy of security policies, relating security policies, security properties, safety properties, editing properties, information-flow properties, EM-enforceable properties and SHA properties.]

2 Related Works

What sorts of security policies can enforcement mechanisms enforce? Schneider [15] proposed an initial approach to answering this question. He considered the class of enforcement mechanisms (e.g., PCC [12], SASI [5] and reference monitors) that work by monitoring each execution step, called Execution Monitoring (EM), and analyzed the characterization of the security policies enforceable by execution monitoring. He identified a class of EM-enforceable policies that is a subset of Lamport's safety properties, and also introduced Büchi-like security automata. Bauer et al. [2] showed the characterization of increasingly general classes of policies for mechanisms that transform an execution by insertion, suppression and editing automata. Fong [6] showed the enforcement of information-flow-based policies by restricting an execution monitor to track only a shallow history of previously granted access events; this class is strictly a subclass of the EM-enforceable policies. Although our study aims to clarify the characterization of the security policies enforceable by a monitor with an extra structure, the work reported in this paper is an attempt to enforce information flow policies with such an extended monitor. The relation between these classes is shown in Figure 1. Information flow policies cannot be enforced using simple monitors because information flow properties are defined over a power set of traces and cannot be defined over a set of traces [11, 17].

3 Covert Channels And Information Flow Properties

This section presents the relation between covert channels and information flow properties. Information flow properties are one approach to looking for covert channels. The following demonstrates how covert channels arise and how to look for them.

3.1 Covert Channels

We illustrate a simple but fundamental example of a covert channel that uses shared storage in a system. An access policy is in general specified by an access matrix M, whose columns and rows correspond to subjects and objects respectively; the element M(s, o) gives the access rights, where s is a subject and o is an object. User processes can use a covert channel to listen to confidential information in violation of an information flow policy: even though an access control policy is correctly enforced on user processes, they may still create covert channels. For example, let Holly and Lucy be two user processes, let an information flow policy restrict information flow from Holly to Lucy, and suppose they are given remove and create operators similar to those on UNIX. If they can call operators running on the trusted computing base (TCB) using a shared storage facility similar to Upgraded Directories, then a covert channel arises between them [4]. (An Upgraded Directory is a facility intended to make enrolling easy.) Lucy creates an upgraded directory at Holly's level, and Holly creates or removes a file in that directory. Whenever Lucy tries to remove the directory, the remove operator reports success or failure (see Figures 2 and 3). Using this result, Lucy can obtain one bit, 0 or 1, from Holly. The system enforces the access control policy on both processes correctly, yet a covert channel appears between them because of the semantics of the remove operator, which are similar to those of UNIX "rmdir". As this example shows, covert channels are caused by the difficulty of mapping access control primitives onto the entities of real systems.

Information flow policies are defined as relations between groups of users on a system; the relations signify the directions in which the system allows information to flow. In the above example, the policy allows Lucy to transfer information to Holly, written Lucy ; Holly. An information flow property is defined as a restriction on inputs and outputs. For example, let out be an output function that takes a trace and an operation and returns the system behavior when executing that operation, and let purge be a function that takes a trace and a user and returns the trace with that user's input events removed. The property is then specified by the following restriction: for every possible trace σ, out(σ, remove) = out(purge(σ, Holly), remove) must always hold. A CCA that discovers violations of this restriction detects covert channels. This work tries to detect several covert channels at run time by enforcing this restriction.
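As a concrete illustration, the following is a minimal Python sketch of the upgraded-directory channel and of the out/purge restriction it violates. The trace encoding and the helpers run, out and purge are our own illustrative assumptions, not part of the mechanism described later in the paper.

    # A toy replay of the upgraded-directory example. Holly signals one bit
    # by leaving (or not leaving) a file in the upgraded directory; Lucy
    # reads the bit off the success/failure of her remove call.

    def run(trace):
        """Replay a trace of (user, op, name) actions against an in-memory
        directory and record the visible result of every action."""
        updir = set()                    # contents of the upgraded directory
        results = []
        for user, op, name in trace:
            if op == "create":
                updir.add(name)
                results.append((user, op, "ok"))
            elif op == "remove_file":
                updir.discard(name)
                results.append((user, op, "ok"))
            elif op == "remove_dir":     # rmdir fails on a non-empty directory
                results.append((user, op, "ok" if not updir else "fail"))
        return results

    def out(trace, op):
        """Observable results of the given operation in the replayed trace."""
        return [r for r in run(trace) if r[1] == op]

    def purge(trace, user):
        """The trace with all of the given user's events removed."""
        return [e for e in trace if e[0] != user]

    # Holly encodes bit 1 by leaving a file in the directory.
    sigma = [("Holly", "create", "f"), ("Lucy", "remove_dir", "updir")]

    print(out(sigma, "remove_dir"))                  # [('Lucy', 'remove_dir', 'fail')]
    print(out(purge(sigma, "Holly"), "remove_dir"))  # [('Lucy', 'remove_dir', 'ok')]
    # The two observations differ, so the restriction
    # out(sigma, remove) = out(purge(sigma, Holly), remove) is violated:
    # Lucy has learned one bit from Holly through the remove result.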

[Figure 2. Successful removal of an empty directory: Lucy's remove of the upgraded directory under home succeeds because Holly has left no file in it.]

[Figure 3. Unsuccessful removal of a nonempty directory: Holly has created a file in the upgraded directory, so Lucy's remove fails.]

3.2 Information Flow Properties

To define some information flow properties, we begin by defining event systems and operations, following the formulation that Zakinthinos and Lee discuss in [17]. A state machine is a way of describing a computer system in terms of its behavior and serves as an abstraction of a target. A is a universal set of actions, corresponding to the set of all actions the monitor can observe. We define a multi-level system as follows.

DEFINITION 1 (Multi Level System) A state machine is a 5-tuple (S, A, Σ, s0, D). S is a set of states, given as a set of state variables. A is a set of actions, an abstraction of the events involved in a system. Σ is the set of possible action sequences that may be observed. s0 is an initial state. D is a domain of security levels.

Let σ = a1, a2, · · · be an action sequence (ai ∈ A). If there exists a path s = s0, s1, · · · such that s0 →^a1 s1 →^a2 s2 · · ·, then we call s a path of σ, where si →^ai+1 si+1 is a basic single step and si, si+1 ∈ S. Given the domain D, a function dom : A → D and a partial order < over D are defined. For dL, dH ∈ D, dL < dH means that the policy prohibits information flow from users in dH to users in dL. dL < dH is called a noninterference assertion, and an information flow policy is a set of such assertions. hT is a function whose domain is Σ. Let σ = a1, a2, · · · ∈ Σ be a possible action sequence; given a set of state variables V and a time T, hT(σ) is defined as follows.

• For each state variable v ∈ V (the i-th variable, say), the i-th entry of hT(σ) is the value assigned to v by the T-th action of σ.

We distinguish between input actions and output actions. The set of input actions is written I and the set of output actions is written O; the input actions of a system are A ∩ I, and its output actions are A ∩ O. We associate the above state variables with channels for input or output actions, respectively. We next define equivalence between two states from the viewpoint of a user. Information is a collection of values of the state variables V; we assume that the information obtained from a system is the values of its state variables at some time. If two states assign the same value to every state variable, then the information from the two states is equivalent. Given two states si, sj and a ∈ A, we say si =dom(a) sj if and only if the values of all state variables visible to a target executing the action a in si equal the values of the corresponding state variables in sj, where i, j are nonnegative integers. We also define an equivalence relation on traces as follows: σ =dom(a) τ iff there exist histories hT1(σ), hT2(τ) of σ and τ at times T1, T2 such that

s0 →^{hT1(σ)·⟨a⟩} si,  s0 →^{hT2(τ)·⟨a⟩} sj  and  si =dom(a) sj,

where ⟨a⟩ is the single-action sequence of an action a ∈ A. State variables that are removed by the restriction operator defined next take a special value λ. Based on this concept, we define the restriction operator ↑ on traces. Let Π be the set of histories of executions. Given d ∈ D and α ∈ Π, we define Ad as

Ad ≡ {a | a ∈ A ∧ dom(a) < d}

and define α ↑ Ad as the subsequence of α restricted to the actions in Ad. If α is empty, then α ↑ Ad is also empty. We assume that ↑ distributes over concatenation: for a single-action sequence ⟨a⟩ and α = α0 · ⟨a⟩,

α ↑ Ad = (α0 ↑ Ad) · (⟨a⟩ ↑ Ad).
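As a small illustration, the following Python sketch implements Ad and ↑ for a two-level policy. The pair representation of actions and the names dom, A_d and restrict are assumptions made for the example only.

    # Actions are (domain, name) pairs; lt is the set of noninterference
    # assertions (d1, d2) with d1 < d2, read "no flow from d2 to d1".

    def dom(action):
        return action[0]

    def A_d(d, domains, lt):
        """A_d: the domains strictly below d under the policy order."""
        return {e for e in domains if (e, d) in lt}

    def restrict(alpha, d, domains, lt):
        """alpha ↑ A_d: the subsequence of actions whose domain is in A_d."""
        allowed = A_d(d, domains, lt)
        return [a for a in alpha if dom(a) in allowed]

    domains = {"low", "high"}
    lt = {("low", "high")}                  # the single assertion low < high
    alpha = [("low", "in"), ("high", "out"), ("low", "out")]

    print(restrict(alpha, "high", domains, lt))  # keeps the two low actions
    print(restrict(alpha, "low", domains, lt))   # []: nothing lies below "low"

    # The distributivity assumed in the text holds by construction:
    head, tail = alpha[:-1], alpha[-1:]
    assert restrict(alpha, "high", domains, lt) == \
           restrict(head, "high", domains, lt) + restrict(tail, "high", domains, lt)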

Information flow properties define secure relations between the inputs and outputs of a system and are defined as properties over a power set of traces. The properties require that if the history of some σ ∈ Σ is in histories(Σ), then a suitable purge of that history must also be among the histories of the possible sequences of the system. To express this requirement we define the Low Level Equivalent Set (abbreviated LLES). Given a history h, the set of action sequences Σ of an event system and a set of actions L,

LLESL(h, Σ) = {s | h ↑ L = s ↑ L ∧ s ∈ histories(Σ)},

where histories(Σ) is the set of histories of every element of Σ at all times. Noninference was introduced by O'Halloran [13]. The approach is to remove all high-level input and output actions from each history and to require that the result is still included in histories(Σ). This is formally expressed as follows:

∀h ∈ histories(Σ). NONINFERENCE(LLESL(h, Σ)),
where NONINFERENCE(A) ≡ ∃t ∈ A. t ↑ H = ⟨⟩.

H is the set A − L and is called the set of high-level actions. Let s be a state of the system and a ∈ A an action. Assuming that the system reaches exactly one state by a, in other words that the system is deterministic, and that high-level output actions cannot be generated when there are no high-level input actions, Noninference is equivalent to Noninterference, due to Goguen and Meseguer [7], on a deterministic system. McLean [11] introduced a formulation, Generalized Noninference, that does not require the assumption that high-level output actions cannot be generated in the absence of high-level input actions. In other words, histories from which only the high-level input actions have been removed must also be included in histories(Σ). This is formally expressed as follows:

∀h ∈ histories(Σ). GN(LLESL(h, Σ)),
where GN(A) ≡ ∃t ∈ A. t ↑ (H ∩ I) = ⟨⟩.

A taxonomy of these properties is shown in Figure 4.

[Figure 4. A taxonomy of information-flow properties: within the security policies, the information flow properties include Generalized noninference, Noninference and Noninterference (input → output).]
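Both conditions can be checked directly on a finite set of traces. The Python sketch below is a brute-force illustration under the assumption that actions are (level, direction, name) triples; it is not how the run-time monitor works, since a monitor cannot see the whole trace set.

    # Brute-force checks of NONINFERENCE and GN over an explicit trace set.

    def restrict(trace, keep):
        return [a for a in trace if keep(a)]

    def low(a):
        return a[0] == "L"

    def noninference(traces):
        # every history needs a low-equivalent history free of ALL
        # high-level actions...
        return all(any(restrict(s, low) == restrict(t, low) and
                       restrict(t, lambda a: a[0] == "H") == []
                       for t in traces)
                   for s in traces)

    def gn(traces):
        # ...while GN only requires it to be free of high-level INPUTS
        return all(any(restrict(s, low) == restrict(t, low) and
                       restrict(t, lambda a: a[0] == "H" and a[1] == "in") == []
                       for t in traces)
                   for s in traces)

    traces = [
        [("H", "in", "h"), ("L", "out", "l")],   # a high input before a low output
        [("L", "out", "l")],                     # the same low view, no high actions
    ]
    print(noninference(traces), gn(traces))      # True True
    print(noninference(traces[:1]))              # False: no witness without trace 2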

4 Extension of Security Automata

This section presents the concept of enforceable policies according to Schneider's definition [15], explains the concept of an execution monitor with an extra structure, and defines security automata with an extra structure.

4.1 Enforceable properties

Let Aω be the set of infinite action sequences. A security policy must hold of every execution and is specified by giving a predicate on sets of sequences. A target M = (S, AM, Σ, s0) satisfies a security policy P if and only if P(ΣM) is true. By definition, the following conditions hold:

P(Aω) = ∀σ ∈ Aω. P̂(σ),    (1)

∀σ ∈ Aω. ¬P̂(σ) ⇒ ∃i. ∀τ ∈ Aω. ¬P̂(σ[..i] τ),    (2)

∀σ ∈ Aω. ¬P̂(σ) ⇒ ∃i. ¬P̂(σ[..i]),    (3)

where P̂ is a computable predicate on executions and σ[..i] is the prefix of σ of length i. Now consider the contrapositive of (2):

∀σ ∈ Aω. (∀i. ∃τ ∈ Aω. P̂(σ[..i] τ)) ⇒ P̂(σ).    (4)

The hypothesis of (4) gives a way to define when a finite sequence is accepted by a Büchi automaton. From this and (3), we obtain the following set of prefixes:

{u ∈ A∗ | P̂(u) ∧ u ∈ histories(Σ)}.

This set is an equivalent characterization of prefix-closed policies; we call it the prefix-closed set.
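A monitor can decide membership in such a prefix-closed set incrementally. The following Python fragment is a toy illustration of conditions (2) and (3) with a made-up predicate P̂; it shows only the shape of the argument, not the authors' automaton.

    # For a safety-style predicate, a violation is already visible in some
    # finite prefix (condition (3)), and no extension can repair it
    # (condition (2)). A monitor may therefore stop the target at the
    # first rejected prefix.

    def p_hat(prefix):
        """Toy predicate: no 'high_write' may ever follow a 'low_read'."""
        return not any(a == "high_write" and "low_read" in prefix[:i]
                       for i, a in enumerate(prefix))

    def monitor(stream):
        trace = []
        for action in stream:
            trace.append(action)
            if not p_hat(trace):        # a bad prefix dooms every extension
                return trace, "terminated"
        return trace, "allowed"

    print(monitor(["low_read", "low_read", "high_write"]))
    # (['low_read', 'low_read', 'high_write'], 'terminated')
    print(monitor(["high_write", "low_read"]))
    # (['high_write', 'low_read'], 'allowed')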

4.2 Security Automata With An Extra Structure

In order to introduce the extra structure, we extend Schneider's security automata as follows. Given a set of information MI, our security automaton is defined as a 5-tuple (Q, Q0, I, δ, F), where

• Q is a set of automaton states,

• Q0 ⊆ Q is the set of initial states of the automaton,

• I is a set of input symbols of the form (si, a) ∈ S × A,

• δ : I × Q → 2^Q is the transition function of the automaton, and

• F : MI × I → MI is a function updating the extra information.

An input symbol (si, a) means that our monitor hooks an action together with the result of that action. Each automaton state is labeled with an element of MI, and this information is written qi(si). To process a state sequence s0 s1 · · · of a target, the security automaton starts with Q0 and then moves to the next set of states depending on the input symbol (si, a) ∈ I. The next set of states is determined by ∪q∈Q0 δ((si, a), q). The transition function δ is specified by predicates pij over S and MI. Suppose the target changes from si to sj by a ∈ A; then the next set of states of the security automaton is

{qj | qi ∈ Q0 ∧ pij(si, qi(si))}.    (5)

If pij(si, qi(si)) holds for some qi ∈ Q0, the input symbol is accepted; otherwise it is rejected. The set of sequences accepted by this automaton is clearly the prefix-closed set for a policy. In this paper, the set of information MI is given as the set of states of an emulator. The emulator emulates the behavior of the system by running the restricted subsequence of the sequence observed by the monitor.
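A compact sketch of this automaton in Python follows. The emulator used as MI here simply replays the low-level subsequence of the observed actions; the class name, the predicate p and the update function F are illustrative assumptions rather than the authors' implementation.

    # Security automaton with an extra structure: automaton states carry an
    # element of MI (here, an emulator state), F updates it, and the
    # predicate p_ij decides whether a hooked step (s_i, a) is accepted.

    class ExtendedSecurityAutomaton:
        def __init__(self, q0, p, F):
            self.current = {q0}          # Q0, the current automaton states
            self.p = p                   # p_ij(s_i, q_i(s_i))
            self.F = F                   # MI x I -> MI

        def step(self, s, a):
            """Process one input symbol (s, a); False means terminate target."""
            accepted = {self.F(q, (s, a)) for q in self.current if self.p(s, q)}
            if not accepted:
                return False
            self.current = accepted
            return True

    def F(q, symbol):
        """Emulator update: replay only the low-level actions."""
        s, a = symbol
        return q + ((a,) if a.startswith("low_") else ())

    def p(s, q):
        """Accept while the target's low view matches the emulated history."""
        return s["low_view"] == len(q)

    m = ExtendedSecurityAutomaton(q0=(), p=p, F=F)
    print(m.step({"low_view": 0}, "low_read"))    # True: views agree
    print(m.step({"low_view": 1}, "high_write"))  # True: high step, view unchanged
    print(m.step({"low_view": 3}, "low_read"))    # False: emulator disagrees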

5 Discovery of Covert Channels

In this section we show that information flow policies are enforceable by our monitor. Schneider's class of security policies is limited to the capacity of access control. (In footnote 6 of [15], Schneider notes that there exist security policies that imply restrictions on information flow and are sets that are safety properties. It seems that his class corresponds to access control policies similar to access matrices; if this is right, covert channels remain possible.) Information flow policies cannot be enforced by such simple mechanisms, but our extended mechanism enforces information flow policies under an information flow property.

5.1 An Information Flow Property

We now explain the information flow property used in this paper, based on Pinsky's [14]. Intuitively, the property means that if the effect obtained from the behavior of the entire system equals the effect obtained from the system restricted to one group of users, then there is no information leak against the policy. We begin by defining functions that will help us formulate the property. Let X be a set of states, a an action and α an action sequence. We define

reachable(X, α) ≡ ∪s∈X {t | s →^α t},
next(X, a) ≡ ∪s∈X {t | s →^a t}.

We assume that reachable(X, α · ⟨a⟩) = next(reachable(X, α), a). The information flow property is defined as follows, using the above functions.

DEFINITION 2 (Secure System) Given a policy r and a system (S, A, Σ, s0, D), the system is secure with respect to the policy r iff, for all a ∈ A, σ ∈ Σ and T ∈ N,

next(reachable({s0}, hT(σ)), a) = next(reachable({s0}, hT(σ) ↑ Adom(a)), a).

Since a security automaton examines a single step to decide whether to accept that step, we also define secure communication with respect to a single step. The basic idea of the definition is the unwinding technique [8]. Let s ∈ S and a, b ∈ A such that ⟨b⟩ ↑ Adom(a) = ⟨⟩, where ⟨⟩ is the empty sequence. If a system is secure with respect to a policy, then

next(reachable({s}, ⟨b⟩), a) = next({s}, a).

Now we define the set reachablesa(X) as

reachablesa(X) ≡ {reachable({s}, ⟨b⟩) | s ∈ X ∧ ⟨b⟩ ↑ Adom(a) = ⟨⟩}.

Let sb ∈ reachablesa({s}). If dom(b) does not interfere with dom(a) then, for the single action, {s} = sb. Given t ∈ sb, we have s =dom(a) t. Intuitively, t is a system state in which the system has executed b, and s is a system state in which it has not. Bringing the extra structure into our monitor, the extra structure becomes an emulator of the system states reached when σ ↑ Adom(a) is executed. Therefore,

∀t′ ∈ next({t}, a). ∃s′ ∈ next({qi(si)}, a). qi(si) =dom(a) t ⇒ s′ =dom(a) t′.    (6)

If (6) holds, then dom(b) is locally noninterfering with dom(a).
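The following toy check of Definition 2 assumes an explicit finite transition relation; reachable and next follow the definitions above, and restrict stands in for ↑ Adom(a).

    # Definition 2 on a toy three-state system: running the full history
    # and running its restriction must give the same next-states for a
    # low-level action.

    TRANS = {
        (0, "low"):  {1},
        (0, "high"): {2},
        (2, "low"):  {1},   # "low" behaves identically whether or not "high" ran
    }

    def nxt(X, a):
        return set().union(*[TRANS.get((s, a), set()) for s in X])

    def reachable(X, alpha):
        for a in alpha:
            X = nxt(X, a)
        return X

    def restrict(alpha):
        """Stand-in for sigma ↑ A_dom(low): drop the high-level actions."""
        return [a for a in alpha if a != "high"]

    sigma, a = ["high"], "low"
    full = nxt(reachable({0}, sigma), a)                 # {1}
    emulated = nxt(reachable({0}, restrict(sigma)), a)   # {1}
    print(full == emulated)   # True: this toy system is secure for "low"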

Next, we show that if this invariant (6) always holds, for every execution, then the system is secure with respect to the policy under our property.

LEMMA 1 Let r be a policy and let (S, A, Σ, s0, D) be a system. For all T ∈ N, σ ∈ Σ, a, b ∈ A and d ∈ D, (6) is true if and only if the system is secure with respect to the policy r under our property.

Proof 1 We proceed by induction on T.

Basis At time T = 0, by the definition of ↑,

next({h0(σ)}, ⟨⟩) = next({h0(σ)} ↑ Ad, ⟨⟩) = {s0}.

Thus there is no information leak.

Inductive Step Assume that Lemma 1 holds at time T. Let X be reachable({s0}, hT(σ)) and let Y be reachable({s0}, hT(σ ↑ Ad)). We show that, for any a, b ∈ A, if (6) is true then next(next(X, a), b) = next(next(Y, a), b); then the system is secure.

Case 1: Suppose d < dom(b) ∈ r. By the inductive hypothesis,

next(X, a) = next(Y, a).    (7)

From d < dom(b) ∈ r and (7),

next(next(X, a), b) = next(next(Y, a), b) = reachable({s0}, hT(σ) · ⟨ab⟩).

Case 2: Suppose d < dom(b) ∉ r. Then next(next(Y, a), b) = next(Y, a), and by the inductive hypothesis next(X, a) = next(Y, a). We now show that next(next(X, a), b) = next(X, a). Assuming (6) is true, next(next(X, a), b) ⊆ next(X, a). If b satisfies pij in (5), then b is a secure action; assume again that (6) is true. Then, by the assumption on the mechanism, for every s ∈ next(X, a) there exists t ∈ next(next(X, a), b) with s =d t. Therefore next(X, a) ⊆ next(next(X, a), b), and thus

next(next(X, a), b) = next(X, a).

Therefore Lemma 1 holds for all possible executions. The only-if direction follows directly from the way the invariant was constructed. □

Lemma 1 shows that if (6) always holds, then the system is secure with respect to the policy r. Moreover, the property of Definition 2 clearly includes O'Halloran's Noninference.

5.2 Enforcement of Information Flow Policy

We are interested in the run-time detection of covert channels in target systems against a given information flow policy. Information flows are constrained to the policy by enforcing the invariant (6); our monitor detects violations as they occur.

THEOREM 1 Information flow policies are enforceable by the mechanism with an extra structure.

Proof 2 Define the transition predicate pij by

pij(si, qi(si)) ≡ sj =dom(a) qi(si) (= hj(σ ↑ Adom(a))).

Then the set of traces recognized by the automaton (in other words, by the monitor) is clearly prefix-closed. Also, by the structure of our monitor, it is necessary that hj(σ ↑ Adom(a)) can be computed from the single execution that the monitor observes. By Lemma 2, qj(sj) is computed from σ alone, and hj(σ ↑ Adom(a)) is also a possible execution. Thus our property is enforceable. □

Lemma 2 establishes that the subsequence σ ↑ Ad is obtained from a single execution alone.

LEMMA 2 Let (S, A, Σ, s0, D) be a system. For every σ ∈ Σ, σ is a possible execution of the target system iff σ ↑ Ad is also a possible execution.

Proof 3 We proceed by induction on the length n of σ.

Basis For n = 0, σ is empty, and by the definition of ↑, σ ↑ Ad is also empty. Since every system accepts the empty sequence, σ ↑ Ad is clearly a possible execution of the system.

Inductive Step Assume that Lemma 2 holds for σ of length n, and consider the sequence σ · ⟨a⟩, where a ∈ A.

Case 1: Suppose d < dom(a) ∈ r, where r is the information flow policy. Then, by the definition of ↑, (σ · ⟨a⟩) ↑ Ad = (σ ↑ Ad) · ⟨a⟩. Therefore σ · ⟨a⟩ is a possible execution iff (σ · ⟨a⟩) ↑ Ad is also a possible execution.

Case 2: Suppose d < dom(a) ∉ r. By the definition of ↑, (σ · ⟨a⟩) ↑ Ad = σ ↑ Ad, which by the inductive hypothesis is a possible execution. We must also show that if (σ · ⟨a⟩) ↑ Ad is a possible execution, then σ · ⟨a⟩ is a possible execution. If σ were not a possible execution, then by the inductive hypothesis σ · ⟨a⟩ would not be a possible execution either; therefore this holds. Thus Lemma 2 holds for every possible execution, and whenever MI is computed, the monitor uses a single execution. □

6 Overview of Our Enforcement Mechanism

We now illustrate the mechanism. Information flow is controlled by both Flow-control and Access-control, as shown in Figure 5. Flow-control compares the result of each system call in the resource with its result in the emulator. If the results differ, we consider that a covert channel has occurred in the system, and the monitor terminates the process that invoked the system call. The policy for this system is shown in Figure 6. The actions "abort" and "allow" are special actions: the first forcefully terminates the target invoking the system call, and the second allows the system call to execute and returns its result to the target, at which point the policy moves to its next state. These two actions can always be executed. The action "nis" is also a special action, but it cannot always be executed: it becomes executable only once Flow-control has assigned the comparison result to its parameter.

[Figure 5. An aspect of enforcement: the enforcement mechanism, consisting of Flow control, Access control and an emulator for the low level, mediates high-level and low-level actions between the user processes (Holly and Lucy) and the resource, according to a given security policy.]

(bind read_l "syscall:lucy:sys_read")
(bind write_l "syscall:lucy:sys_write")
(bind remove_l "syscall:lucy:sys_unlink")
;; the following is an information-flow policy.
(dominate Holly Lucy)
;; the following are access policies.
(define NI ()
  (nis(x): (if (x=true)
               (~allow:Start_policy)
               (~abort:Start_policy))))
(define Holly ()
  ((read_h(f): (if (f = "conf.txt")
                   (~allow:Start_policy)
                   (~abort:Start_policy)))++
   (write_h(f,d): (if ((f = "conf.txt")||
                       (f = "covertchannel.txt"))
                      (~allow:Start_policy)
                      (~abort:Start_policy)))++
   (create_h(f):~allow:Start_policy)++
   (remove_h(f):~allow:Start_policy)))
(define Lucy ()
  ((create_l(f,d): (if (f = "covertchannel.txt")
                       (~allow:NI)
                       (~abort:Start_policy)))++
   (remove_l(f): (if (f = "covertchannel.txt")
                     (~allow:NI)
                     (~abort:Start_policy)))++
;; the following process is an initial process.
(define Start_policy ()
  (Rights_of_holly++Right_of_lucy))

Figure 6. An example policy to detect a covert channel

In this policy language,

(bind remove_l "syscall:lucy:sys_unlink")

binds the action "remove_l" to the system call "sys_unlink" invoked by "lucy". Only system calls bound to actions are observed; all other system calls are ignored.

(dominate Holly Lucy)

asserts that information must not be transmitted from Holly to Lucy. The information flow property itself is not described explicitly in the policy; it is embedded in Flow-control.

(read_h(f):(if · · ·))

means that the policy moves to the next state upon observing the system call, and the mechanism assigns the argument of the system call to the variable "f". The operator "++" denotes selective execution and the operator "||" denotes the composition of two processes. Finally, the operator "define" defines an abstract process; "Start_policy" is the initial process and is fixed. For example, when the mechanism observes the system call "sys_unlink" from Lucy, the policy makes the following transition:

Start_policy →^{remove_l} (if (f = "· · ·") · · ·).

If the results from the resource and the emulator are the same, the policy invokes "allow" and returns to the initial state.
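A sketch of the Flow-control step in Python follows: each bound system call is executed against the real resource and replayed in the emulator, and the caller is terminated when the two results disagree. The call names and the stub resource/emulator below are illustrative assumptions, not the authors' implementation.

    def flow_control(call, args, resource, emulator, abort, allow):
        real = resource(call, args)        # result in the real system
        replayed = emulator(call, args)    # result in the restricted replay
        if real != replayed:               # the "nis" parameter would be false
            abort()                        # a covert channel was exercised
            return None
        allow()                            # results agree: pass result through
        return real

    # Lucy's remove fails for real (Holly left a file in the directory) but
    # succeeds in the emulator, which never saw Holly's high-level create.
    resource = lambda call, args: "fail"
    emulator = lambda call, args: "ok"
    flow_control("sys_unlink", ("covertchannel.txt",), resource, emulator,
                 abort=lambda: print("abort: terminate Lucy"),
                 allow=lambda: print("allow"))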

7 Conclusion

We have shown that information flow policies can be enforced by our monitor, which has an emulator as its extra structure, under our information flow property; this makes it possible to enforce information flow precisely. The information flow property is expressed as an invariant predicate together with the transition function of our security automaton. We also presented an outline of our monitor and a simple policy to be used by the mechanism. Our method detects the receiver processes of covert channels. Only Noninterference and Noninference are enforceable: in general, Nondeducibility [16] and Generalized Noninterference [9] are not enforced by our monitor, because these properties require constructing a trace from an observed trace together with arbitrary actions, and our monitor cannot construct such a trace. Generalized Noninference appears to be enforceable, but we have not yet worked out how to achieve this, since sequences from which only the high-level inputs have been removed may not be accepted by the emulator. The enforceable information flow properties are shown in Figure 7. Finally, the monitor does not scale well: it needs as many emulators as there are security levels, and it uses many system resources.

[Figure 7. Enforceable information-flow properties: within the security policies and information flow properties, the diagram relates Generalized noninference, Generalized noninterference (Restrictiveness), Nondeducibility(n), Nondeducibility(2), Noninference and Noninterference (input → output).]

Acknowledgments

We would like to thank the anonymous reviewers for their suggestions. This research has been supported in part by the Japanese Ministry of Education, Culture, Sports, Science and Technology, Grant-in-Aid for Scientific Research (C), No. 1750017.

References

[1] B. Alpern and F. B. Schneider. Recognizing safety and liveness. Distributed Computing, 2(3):117–126, 1987.
[2] L. Bauer, J. Ligatti, and D. Walker. More enforceable security policies. Technical Report TR-649-02, Princeton University, June 2002.
[3] Department of Defense. Trusted Computer System Evaluation Criteria, 1985.
[4] Department of Defense. A Guide to Understanding Covert Channel Analysis of Trusted Systems, 1993.
[5] U. Erlingsson and F. B. Schneider. SASI enforcement of security policies: A retrospective. In Proceedings of the New Security Paradigms Workshop, pages 87–95. ACM Press, 2000.
[6] P. W. L. Fong. Access control by tracking shallow execution history. In Proceedings of the 2004 IEEE Symposium on Security and Privacy, Oakland, California, USA, May 2004.
[7] J. A. Goguen and J. Meseguer. Security policies and security models. In Proceedings of the 1982 IEEE Computer Society Symposium on Research in Security and Privacy, pages 11–20. IEEE Computer Society Press, 1982.
[8] J. A. Goguen and J. Meseguer. Unwinding and inference control. In Proceedings of the 1984 IEEE Symposium on Security and Privacy, pages 75–86, 1984.
[9] D. McCullough. A hookup theorem for multilevel security. IEEE Transactions on Software Engineering, 16(6):563–568, 1990.
[10] J. McHugh. Covert channel analysis, 1995.
[11] J. McLean. A general theory of composition for trace sets closed under selective interleaving functions. In Proceedings of the 1994 IEEE Symposium on Research in Security and Privacy, pages 79–93, 1994.
[12] G. C. Necula and P. Lee. Safe, untrusted agents using proof-carrying code. LNCS 1419, pages 61–91, 1998.
[13] C. O'Halloran. A calculus of information flow. In Proceedings of the European Symposium on Research in Computer Security, 1990.
[14] S. Pinsky and E. Zieglar. Noninterference equations for nondeterministic systems. In Proceedings of the 14th IEEE Computer Security Foundations Workshop, pages 3–14, 2001.
[15] F. B. Schneider. Enforceable security policies. ACM Transactions on Information and System Security, 3(1):30–50, February 2000.
[16] D. Sutherland. A model of information. In Proceedings of the 9th National Computer Security Conference, 1986.
[17] A. Zakinthinos and E. S. Lee. A general theory of security properties. In Proceedings of the 18th IEEE Computer Society Symposium on Research in Security and Privacy, 1997.
