Supervisory Control Strategies for Enhancing System Security and Privacy

Christoforos N. Hadjicostis

This material is based upon work supported in part by the National Science Foundation (NSF), under NSF CNS Award 0834409, and in part by the European Commission (EC) 7th Framework Programme (FP7/2007-2013) under grant agreements INFSO-ICT-223844 and PIRG02-GA-2007-224877. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of NSF or EC.

C. N. Hadjicostis is with the Department of Electrical and Computer Engineering, University of Cyprus. Author's address: 75 Kallipoleos Avenue, P.O. Box 20537, 1678 Nicosia, Cyprus. E-mail: [email protected].

Abstract— Enhancing the security and reliability of automated systems that control vital national infrastructures, such as energy and water distribution systems, has recently emerged as a critical aspect of maintaining, protecting, and securing such infrastructures against interference or possibly malicious activity. Examples of such automated systems include Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), and networks of embedded sensors and actuators, most of which were designed without anticipating the security threats that arise due to increasing reliance on common software, public telecommunication networks, and the Internet. In this paper, we discuss how state-based notions of opacity in finite automata models can be used to capture security properties of interest in automated systems that can be modeled as controlled finite automata subject to external disturbances. We also describe when and how control objectives can be achieved while enforcing desirable security and/or privacy objectives.

I. INTRODUCTION

Motivated by the increased reliance of many applications on shared cyber-infrastructures (ranging from defense and banking to health care and power distribution systems), various notions of security and privacy have received considerable attention from researchers. In addition to standard functionality (such as routing and end-to-end information transmission), one of the main technological challenges is the development of techniques to efficiently perform various types of distributed tasks in heterogeneous networked systems despite the presence of noisy links or even faulty/malicious components. The interaction between possibly distributed information systems and remote physical components, with sensing and/or computing capabilities, creates complex behavior and multiple dependencies that can easily lead to compromised operation; on the other hand, the distributed nature of these systems and the presence of redundant functionality can potentially be used to enhance their ability to tolerate failures and/or malicious activity.

In this paper, we look at systems that can be modeled as controlled finite automata with potential vulnerabilities in certain sets of states; we then precisely characterize the observation capabilities of outside malicious entities, and describe supervisory control strategies to ensure that the system never enters states that allow these entities to cause harm.

The analysis of the observation capabilities of the malicious entities relates to existing notions that focus on characterizing the information flow from the system to the intruder [1]. Opacity falls in this category and aims at determining whether a given system's secret behavior (i.e., a subset of the behavior of the system that is considered critical and is usually represented by a predicate) is kept opaque to outsiders [2], [3]. More specifically, this requires that the intruder (modeled as an observer of the system's behavior) never be able to establish the truth of the predicate.

In our earlier work [3], [4], we considered opacity with respect to predicates that are state-based. More specifically, we considered a scenario where we are given a discrete event system (DES) that can be modeled as a non-deterministic finite automaton (NFA) with partial observation on its transitions (captured via a natural projection map). Allowing the initial state of the system to be (partially) unknown, we defined the secret behavior of the system as the evolution of the system's state (at some point in time) to a subset of the given set of secret states S, which was assumed known and fixed over the length of the observation. Also, the intruder was assumed to have full knowledge of the system model and to be able to track the observable transitions in the system via the observation of the associated labels. Examples that motivate the study of state-based notions of opacity, in the context of sensor network coverage of mobile agents and encryption guarantees for pseudorandom generators, are provided in [4].

One particular notion of interest from [3] is the property of current-state opacity, which requires that the membership of the system's current state in the set of secret states S remain opaque until the system enters a state outside S. In other words, a system is current-state opaque if, for all possible behaviors (and all corresponding observations), the intruder can never pinpoint with absolute certainty that the state of the system (at the point when the last observation is made) belongs to the set of secret states. A variety of application areas and examples where the notion of current-state opacity can be used to characterize relevant security requirements can be found in [3], [4].

In this paper, we consider systems that can be modeled as controlled finite automata that are subject to both user (controllable) inputs and external (uncontrollable) disturbances. In addition, we assume that possibly malicious entities have the capability to drive the system to undesirable states (e.g., cause failures or major disturbances) from certain vulnerable states.

The main challenge for these malicious entities is the fact that they do not want to attempt to drive the system to undesirable states unless they are certain that they will be successful. In other words, we assume that malicious entities do not want to reveal their intentions by applying malicious inputs at points where the system will not be driven to failed states. This restriction translates to a question of assessing whether or not the system's current state resides within a set (or sets) of states that is (are) vulnerable to malicious action(s). Essentially, the problem reduces to determining whether the given sets of vulnerable states will remain opaque to malicious entities during the operation of the system, and whether the user/controller can achieve its objectives without having to expose the system to these malicious entities. As we will see, by using current-state opacity analysis, we can easily determine conditions under which sets of vulnerable states are exposed to malicious entities. One can then try to devise supervisory control strategies that can complete desirable tasks while ensuring that such undesirable situations will not arise.

The work in this paper is related to existing security work in the area of current-state opacity [3] as well as to supervisory control techniques [5]. The main difference is that the specification of the legal behavior of the automaton is not given explicitly in terms of a language, but only implicitly, in terms of sequences of inputs that generate observations which reveal that the system is in a set of vulnerable states. Once the set of undesirable sequences of inputs is characterized, one can, of course, use a language-based approach to formulate the problem as a more standard supervisory control task. We do not follow this approach in the paper because we find it more instructive to derive the control strategy from a transition diagram that simultaneously captures both the true state of the system and the state of the system as perceived by the outside observer.

This paper is organized as follows. In Section II we introduce some necessary notation and describe the construction of an observer or current-state estimator; we also review the notion of current-state opacity as introduced in [3]. In Section III we formulate the control task at hand in terms of a given controlled finite automaton and sets of vulnerable states, and we provide the strategy that allows the controller to drive the system to a target state despite disturbances, while ensuring that the system is not exposed to sets of vulnerable states. We conclude in Section IV with a summary of the contributions of this paper and some quick remarks on possible future research directions.

II. BACKGROUND

A. Preliminaries and Notation

Let Σ be an alphabet and denote by Σ∗ the set of all finite-length strings of elements of Σ, including the empty string ǫ. For a string t, |t| denotes the length of t (with |ǫ| = 0), whereas for a set X, |X| denotes its cardinality. A language L ⊆ Σ∗ is a subset of finite-length strings from Σ∗. For a string ω, the prefix-closure of ω is defined as ω̄ = {t ∈ Σ∗ | ∃s ∈ Σ∗ {ts = ω}}, where ts denotes the concatenation of strings t and s [6].
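For concreteness, the prefix-closure can be sketched in a few lines of Python; this is a minimal illustration under the assumption that each event is a single character, and the function name is our own:

```python
def prefix_closure(omega: str) -> set[str]:
    """All prefixes of omega, i.e., {t | there exists s with ts = omega}.
    The empty string '' plays the role of epsilon."""
    return {omega[:k] for k in range(len(omega) + 1)}

# Example: prefix_closure("aab") == {"", "a", "aa", "aab"}
```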

A deterministic finite automaton (DFA) is denoted by G = (X, Σ, δ, x0), where X = {1, 2, ..., N} is the set of states, Σ is the set of events or observations, δ : X × Σ → X is the deterministic state transition function, and x0 ∈ X is the initial state. The function δ can be extended from the domain X × Σ to the domain X × Σ∗ in the routine recursive manner: δ(i, ts) := δ(δ(i, t), s) for t ∈ Σ and s ∈ Σ∗, with δ(i, ǫ) := i (since δ is only partially defined, δ(i, ts) is taken to be undefined if any of the transitions in the recursion turns out to be undefined). The behavior of DFA G is captured by L(G) := {s ∈ Σ∗ | δ(x0, s) is defined}.

A non-deterministic finite automaton (NFA) is denoted by G = (X, Σ, δ, X0), where X = {1, 2, ..., N} is the set of states, Σ is the set of events or observations, δ : X × Σ → 2^X (where 2^X is the power set of X) is the non-deterministic state transition function, and X0 ⊆ X is the set of possible initial states. The function δ can be extended from the domain X × Σ to the domain X × Σ∗ in the routine recursive manner: δ(i, ts) := ∪_{j ∈ δ(i,t)} δ(j, s), for t ∈ Σ and s ∈ Σ∗, with δ(i, ǫ) := {i}. The behavior of NFA G is captured by L(G) := {s ∈ Σ∗ | ∃i ∈ X0 {δ(i, s) ≠ ∅}}.

The product of two deterministic automata G1 = (X1, Σ1, δ1, x01) and G2 = (X2, Σ2, δ2, x02) is the deterministic automaton G1 × G2 := AC(X1 × X2, Σ1 ∩ Σ2, δ1×2, (x01, x02)) where, for α ∈ Σ1 ∩ Σ2, δ1×2((i1, i2), α) := (δ1(i1, α), δ2(i2, α)) (note that δ1×2((i1, i2), α) is undefined if δ1(i1, α) or δ2(i2, α) is undefined). The term AC denotes the accessible part of the automaton (i.e., the set of states reachable from the set of initial states via some string s ∈ Σ∗). The construction of the product automaton implies that L(G1 × G2) = L(G1) ∩ L(G2) [6].

B. Current State Estimator

In general, only a subset Σobs of the events can be observed. Typically, given a non-deterministic automaton (X, Σ, δ, X0), one assumes that Σ can be partitioned into two sets, the set of observable events Σobs and the set of unobservable events Σuo (so that Σobs ∩ Σuo = ∅ and Σobs ∪ Σuo = Σ). The natural projection P : Σ∗ → Σ∗obs can be used to map any trace executed in the system to the sequence of observations associated with it. This projection is defined recursively as P(ts) = P(t)P(s), t ∈ Σ, s ∈ Σ∗, with P(t) = t if t ∈ Σobs and P(t) = ǫ if t ∈ Σuo ∪ {ǫ} [6].

Upon observing some string ω ∈ Σ∗obs (sequence of observations), the state of the system might not be identifiable uniquely, due to the lack of knowledge of the initial state, the partial observation of events, and/or the non-deterministic behavior of the system. We denote the set of states that the system might reside in, given that ω was observed, as the current-state estimate. The current-state estimator (or observer) is a deterministic finite automaton (DFA) Gobs which captures these estimates and can be constructed as follows [6]: each state of Gobs is associated with a unique subset of states of the original NFA G (so that there are at most 2^|X| = 2^N states); the initial state of Gobs is associated with X0, representing the fact that the initial state could be any state in X0; at any state associated with a set of estimates Z (Z ⊆ X), the next state upon observing an event α ∈ Σobs is the unique state of Gobs associated with the set of states that can be reached from (one or more of) the states in Z with a string of events that generates the observation α. The construction of the current-state estimator is illustrated in the following example; more details can be found in [6].


Example 1: Consider the NFA G in Figure 1 with set of initial states X0 = {1}. Assuming that Σobs = Σ = {a, b, e1, e2}, the current-state estimator Gobs in Figure 2 is constructed as follows (with a slight abuse of notation, we identify each state of the observer Gobs by the set of states of the non-deterministic automaton G associated with it): (i) Starting from the initial state {1} and observing a, the next state is {2, 3} (because in G no other states can be reached from initial state 1 with a sequence of inputs that generates observation a). (ii) At state {2, 3}, the set of possible transitions is the union of all possible transitions for each of the states in {2, 3}. From state {2, 3} with input a we arrive at state {6, 7} (because state 2 with a sequence of inputs that generates observation a leads to state 6, and state 3 with a sequence of inputs that generates observation a leads to state 7); similarly, from state {2, 3} with input e1 we arrive at state {4, 5}. (iii) Following this procedure, Gobs can be completed as in Figure 2. By convention, one does not include the state of the current-state estimator that corresponds to the empty set of state estimates and can be reached from the initial state of the current-state estimator via a sequence of observations that cannot be generated by G; we have followed this convention in Figure 2. □

[Fig. 1: Non-deterministic automaton G (state-transition diagram over states 1–7 with events a, b, e1, e2; figure omitted).]

[Fig. 2: Observer Gobs for the non-deterministic automaton G in Figure 1 (observer states {1}, {2,3}, {4,5}, {6,7}; figure omitted).]
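The subset construction just illustrated can be sketched compactly; the rendition below uses our own naming, with the NFA transition function represented as a dict from (state, event) pairs to sets of successor states. The Example 1 fragment at the end encodes only the transitions spelled out in the text, so it is an assumed partial reconstruction of Figure 1, not the complete automaton:

```python
def unobservable_reach(delta, states, sigma_uo):
    """Close a set of NFA states under transitions labeled by
    unobservable events (an epsilon-closure-style fixpoint)."""
    reach, frontier = set(states), set(states)
    while frontier:
        step = set()
        for x in frontier:
            for e in sigma_uo:
                step |= delta.get((x, e), set())
        frontier = step - reach
        reach |= step
    return frozenset(reach)


def build_observer(X0, delta, sigma_obs, sigma_uo):
    """Subset construction for the current-state estimator: each observer
    state is a frozenset of NFA states (a current-state estimate), and
    transitions are labeled by observable events only."""
    start = unobservable_reach(delta, X0, sigma_uo)
    states, frontier, obs_delta = {start}, [start], {}
    while frontier:
        Z = frontier.pop()
        for a in sigma_obs:
            # one observable step from any state in Z ...
            step = set().union(*(delta.get((x, a), set()) for x in Z))
            # ... followed by any number of unobservable steps
            Z_next = unobservable_reach(delta, step, sigma_uo)
            if not Z_next:
                continue  # convention: omit the empty estimate
            obs_delta[(Z, a)] = Z_next
            if Z_next not in states:
                states.add(Z_next)
                frontier.append(Z_next)
    return start, states, obs_delta


# Assumed fragment of Figure 1 (only transitions mentioned in the text):
G_delta = {
    (1, "a"): {2, 3},
    (2, "a"): {6}, (3, "a"): {7},
    (2, "e1"): {4}, (3, "e1"): {5},
}
start, obs_states, obs_delta = build_observer(
    {1}, G_delta, sigma_obs={"a", "b", "e1", "e2"}, sigma_uo=set())
assert obs_delta[(frozenset({1}), "a")] == {2, 3}
assert obs_delta[(frozenset({2, 3}), "a")] == {6, 7}
assert obs_delta[(frozenset({2, 3}), "e1")] == {4, 5}
```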

C. Current-State Opacity

Opacity requires that the system's secret behavior (i.e., a subset of the behavior of the system that is considered critical and is usually represented by a predicate) is kept opaque to outsiders [2], [3]. More specifically, it requires that the intruder (modeled as a passive observer of the system's behavior) cannot establish the truth of the predicate, perhaps within some pre-specified time interval. In our earlier work [3], we considered opacity with respect to predicates that are state-based. In the setting we considered, one is given a discrete event system (DES) that can be modeled as an NFA with partial observation on its transitions. Assuming that the initial state of the system is (partially) unknown, we defined the secret behavior of the system to be the evolution of the system's state to a subset of secret states S, which is assumed to be known and constant over the length of the observation. The intruder was assumed to have full knowledge of the system model and to be able to track the observable transitions in the system via the observation of the associated labels. The following is the formal definition of current-state opacity [3].

Definition 1 (Current-State Opacity): Given an NFA G = (X, Σ, δ, X0), a projection map P with respect to the set of observable events Σobs (Σobs ⊆ Σ), and a set of secret states S ⊆ X, automaton G is current-state opaque with respect to S and P (or (S, P, 0) current-state opaque), if

∀t ∈ Σ∗, ∀i ∈ X0 {δ(i, t) ≠ ∅, δ(i, t) ⊆ S} ⇒ {∃s ∈ Σ∗, ∃i′ ∈ X0 {P(s) = P(t), δ(i′, s) ⊈ S}}. □

Current-state opacity requires that the membership of the system's current state in the set S remain opaque (uncertain) for all possible behavior in the system. One can check whether a system is current-state opaque by constructing the current-state estimator and by verifying that no (nonempty) current-state estimate lies entirely within the set of secret states [3].

Example 2: Consider the non-deterministic finite automaton G depicted in Figure 1 with Σobs = Σ = {a, b, e1, e2}. Suppose that S = {4} and X0 = {1}. From Figure 2, which depicts the current-state estimator Gobs associated with G, we see that no current-state estimate (associated with a state of the current-state estimator) lies entirely within the set of secret states {4}; thus, the system is current-state opaque with respect to S = {4}. On the contrary, if the set of secret states is S = {6, 7}, then the sequence of observations aa allows the observer to deduce with certainty that the system is within the set of secret states; thus, with this latter set of secret states, the system is not current-state opaque. □
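The verification condition just stated (no nonempty estimate entirely inside S) translates directly into code; a minimal sketch, reusing the observer states computed above:

```python
def is_current_state_opaque(obs_states, S):
    """Current-state opaque iff no nonempty current-state estimate of the
    observer lies entirely within the secret set S."""
    return all(not (Z <= set(S)) for Z in obs_states if Z)

# With the (partial) observer of Example 1:
assert is_current_state_opaque(obs_states, S={4})         # opaque
assert not is_current_state_opaque(obs_states, S={6, 7})  # estimate {6,7} exposed
```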

[Fig. 3: Controlled finite automaton Gc (figure omitted).]

[Fig. 4: Controlled finite automaton subject to malicious actions (figure omitted).]

Other state-based notions of opacity have also been developed, including initial-state opacity [4], K-step opacity [7], and probabilistic opacity [8]–[10].

III. PROBLEM FORMULATION AND SOLUTION

A. Controlled Finite Automata and Control Tasks

Definition 2 (Controlled Finite Automata): A controlled finite automaton is captured by Gc = (X, Σ, Y, δ, λ, x0, Xm) where
• X is the set of states,
• Σ = U ∪ E, U ∩ E = ∅, is the set of inputs, partitioned into user (controllable) inputs U = {u1, ..., u|U|} and external (uncontrollable) disturbances E = {e1, e2, ..., e|E|},
• δ : X × Σ → X is the next-state transition function,
• x0 ∈ X is the initial state,
• Y = L ∪ E ∪ {ǫ}, where L ∩ E = ∅, ǫ ∉ L and ǫ ∉ E, is the set of observation labels (including the empty label that is used to capture unobservable control inputs),
• λ : Σ → Y is the mapping that maps inputs to observations; it is assumed to satisfy λ(ei) = ei for i = 1, 2, ..., |E| and λ(ui) ∈ L ∪ {ǫ} for i = 1, 2, ..., |U|,
• Xm ⊆ X is the set of marked states.

Note that a controlled finite automaton (an example is shown in Figure 3, where u/y denotes control input u which maps to output y) essentially has two sets of inputs: control inputs U chosen by the controller, and external disturbances E caused by environmental and other uncontrollable factors. The controlled finite automaton behaves deterministically under both sets of inputs. The disturbances are assumed to be fully observable; this is captured by the fact that each disturbance is associated with a unique label which (without loss of generality) is taken to be the same as the external disturbance. Control inputs, on the other hand, may not be associated with a unique label, or may even be associated with the empty label; this implies that control inputs generate uncertainty about the true state of the system to outside observers.
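For concreteness, a controlled finite automaton of this form can be captured by a small record type; the field names below are our own choices (the sketches for Steps 1–4 further down reuse this structure):

```python
from dataclasses import dataclass

EPS = ""  # the empty observation label (epsilon in the text)

@dataclass
class ControlledFA:
    X: set        # states
    U: set        # controllable (user) inputs
    E: set        # uncontrollable external disturbances
    delta: dict   # partial map: (state, input) -> next state
    lam: dict     # lambda: input -> label; lam[e] == e for every e in E
    x0: object    # initial state
    Xm: set       # marked (target) states
```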


If one ignores the outputs of a controlled finite automaton, one is left with a deterministic finite automaton (X, Σ, δ, x0). Thus, from the point of view of the controller (who is fully aware of the control inputs and disturbances) the system behaves as a deterministic finite automaton; its behavior from the point of view of an outside observer (such as an intruder), however, resembles that of a non-deterministic finite automaton. For example, the automaton in Figure 1 is a non-deterministic finite automaton that is equivalent, from the observer's point of view, to the controlled finite automaton of Figure 3.

B. Malicious Actions and Sets of Vulnerable States

The objective of the controller in a controlled finite automaton Gc is to drive the system from the initial state x0 to a state in the set of marked (desirable) states Xm ⊆ X. The controller needs to ensure that a marked state is reached despite disturbances that may occur in the system. [Note that disturbances essentially act as uncontrollable events in the supervisory control framework of [11].] We will assume that the presence of intruders that can possibly apply malicious actions in the system poses additional challenges to the controller; more specifically, we will assume that an intruder action ij, when applied while the system is in any state from a corresponding set of vulnerable states Vj ⊆ X, can cause major problems to the system (e.g., lead to a failed or, more generally, undesirable state). For example, in the system of Figure 4 the intruder can apply input i1 (or input i2) while the system is in a state in the set V1 = {1, 4} (or the set V2 = {2, 3}) and cause the system to enter the failed state F. What prevents the intruder from constantly applying inputs i1 and i2 is the fact that it does not want to risk exposing itself by applying an input when the system is in a state that prevents this input from taking place. Therefore, we assume that the intruder will only take action ij if it knows with certainty that the state of the system is within the vulnerable set Vj.

Given the above discussion, it is clear that a controller that tries to protect the system against malicious activity by the intruder only needs to know the sets of vulnerable states V = {V1, V2, ..., VM}, Vj ⊆ X, but not the particular details of failed states and intruder actions.

C. Problem Formulation and Solution

We now describe the problem at hand and determine a solution if one exists.

Problem Formulation: Consider a controlled finite automaton Gc = (X, Σ, Y, δ, λ, x0, Xm) and M sets of vulnerable states V1, V2, ..., VM, where Vj ⊆ X for j = 1, 2, ..., M. We would like to devise a control strategy (i.e., determine what control inputs to apply following a sequence of observations that may also be affected by uncontrollable external disturbances) to ensure that the system is driven from the initial state x0 to a state in the set of target states Xm, without allowing in the process an external observer to determine with certainty that the system state lies entirely within one of the given sets of vulnerable states.

Let the extension of the mapping λ to sequences of inputs (λ : Σ∗ → Y∗) be defined recursively as follows: for t ∈ Σ and s ∈ Σ∗, we let λ(ts) = λ(t)λ(s), where λ(ǫ) = ǫ (note that we are slightly abusing notation by using ǫ to represent the empty string both under the alphabet Σ and under the alphabet Y). With this notation at hand, we can state the problem as follows: the controller needs to avoid sequences of inputs s ∈ Σ∗ that generate sequences of outputs λ(s) that allow the observer/intruder to determine with certainty that the system state lies within a set of vulnerable states.

The precise characterization of the sequences of observations that lead to sets of vulnerable states can be obtained by constructing an observer for the given controlled finite automaton and by marking with V (for "vulnerable") the states of this observer that are associated exclusively with states that lie within some vulnerable set Vj. This observation is summarized in the following step.

Step 1: Given a controlled finite automaton Gc = (X, Σ, Y, δ, λ, x0, Xm), construct an observer Gobs = AC(2^X, Y, δobs, {x0}) ≡ (Xobs, Y, δobs, X0,obs) for the non-deterministic finite automaton G = (X, Y, δnc, {x0}) that corresponds to Gc. The next-state transition mapping δnc for the non-deterministic finite automaton can be obtained as follows: for x ∈ X and y ∈ Y, define

δnc(x, y) = ∪_{s ∈ Σ∗ : λ(s) = y} {δ(x, s)}.

Note that for the controlled automaton Gc in Figure 3, Figure 1 depicts the corresponding non-deterministic finite automaton G and Figure 2 depicts the observer Gobs = AC(2^X, Y, δobs, {x0}) ≡ (Xobs, Y, δobs, X0,obs).

The current-state estimates associated with each state of the observer can be used to characterize the sequences of observations that allow the observer to determine with certainty that the system state lies within a set of vulnerable states. If, for simplicity, we refer to states in the observer in terms of the state estimates they represent, we can easily state the following: if a state Z in the observer (Z ⊆ X) is such that Z ⊆ Vj for some j = 1, 2, ..., M, then all sequences of observations in Y∗ that lead to that state (starting from the initial state of the observer) should not be generated by the system. That means that the controller should try to disable sequences of inputs in Σ∗ that map to such problematic sequences of observations.
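Step 1 can be sketched by viewing Gc through λ: every transition is relabeled by its observation, ǫ-labeled control inputs become unobservable events, and the subset construction from Section II is reused. The helper below is our own hypothetical rendition of this idea, not the paper's implementation:

```python
def to_label_nfa(gc: ControlledFA):
    """Relabel every transition of Gc by its observation lam[input]; inputs
    carrying the empty label become unobservable NFA events, so that
    delta_nc(x, y) (the union over all input strings s with lam(s) = y) is
    realized by the unobservable-reach fixpoint inside build_observer."""
    delta_nc = {}
    for (x, sigma), x_next in gc.delta.items():
        y = gc.lam[sigma]
        event = y if y != EPS else ("uo", sigma)  # keep eps-inputs distinct
        delta_nc.setdefault((x, event), set()).add(x_next)
    labels = {gc.lam[s] for s in gc.U | gc.E if gc.lam[s] != EPS}
    sigma_uo = {("uo", u) for u in gc.U if gc.lam[u] == EPS}
    return delta_nc, labels, sigma_uo

# Step 1, reusing build_observer from Section II:
# delta_nc, labels, sigma_uo = to_label_nfa(gc)
# obs_start, obs_states, obs_delta = build_observer(
#     {gc.x0}, delta_nc, labels, sigma_uo)
```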

In order to determine the controller strategy while keeping track of the capabilities of the observer/intruder, we build a product automaton that keeps track of both the true state of the system and the corresponding state of the observer.

Step 2: Given a controlled finite automaton Gc = (X, Σ, Y, δ, λ, x0, Xm) and its observer Gobs = (Xobs, Y, δobs, X0,obs) as constructed in Step 1, we define the product controller/observer automaton Gc,o = AC(X × Xobs, Σ, Y, δc,o, λ, x0 × X0,obs, Xm × Xobs) ≡ (Xc,o, Σ, Y, δc,o, λ, x0,c,o, Xm,c,o) as follows:
• the first component of the state (x1, x2) is the state of the system (x1 ∈ X) and the second component is the state of the observer (x2 ∈ Xobs);
• the mapping δc,o is deterministic and is defined for x1 ∈ X, x2 ∈ Xobs, and a ∈ Σ as

δc,o((x1, x2), a) = (δ(x1, a), δobs(x2, λ(a))),

where we take δobs(x2, ǫ) = x2 and we take δc,o to be undefined if either of its two constituent δ's is undefined;
• all states (x1, x2) whose first component is a marked state in Gc (i.e., x1 ∈ Xm) are marked;
• AC denotes the part of the automaton that is accessible from the initial state x0 × X0,obs.
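A minimal sketch of this product, again under our own naming (the product state pairs the true state of Gc with the observer's estimate; δobs(x2, ǫ) = x2 as in the definition above):

```python
def build_product(gc: ControlledFA, obs_delta, obs_start):
    """Product controller/observer automaton G_{c,o}: a state (x1, x2)
    pairs the true state x1 of Gc with the intruder's estimate x2.
    delta_{c,o}((x1, x2), a) = (delta(x1, a), delta_obs(x2, lam(a))),
    with delta_obs(x2, eps) = x2, undefined if either component is."""
    start = (gc.x0, obs_start)
    states, frontier, delta_co = {start}, [start], {}
    while frontier:
        x1, x2 = frontier.pop()
        for a in gc.U | gc.E:
            if (x1, a) not in gc.delta:
                continue  # delta_{c,o} undefined if delta is
            y = gc.lam[a]
            if y == EPS:
                x2_next = x2  # unobservable input: estimate unchanged
            elif (x2, y) in obs_delta:
                x2_next = obs_delta[(x2, y)]
            else:
                continue  # delta_{c,o} undefined if delta_obs is
            nxt = (gc.delta[(x1, a)], x2_next)
            delta_co[((x1, x2), a)] = nxt
            if nxt not in states:
                states.add(nxt)
                frontier.append(nxt)
    return start, states, delta_co
```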

The controller can now use the automaton Gc,o to determine which states need to be avoided and which states need to be targeted. The states that need to be avoided are clearly the states (x1, x2) ∈ Xc,o for which the second component is a subset of some vulnerable set, i.e., there exists j ∈ {1, 2, ..., M} such that x2 ⊆ Vj; thus, we can go ahead and mark those states with V (for vulnerable). In fact, apart from such states, one also needs to avoid all states that can uncontrollably lead to states marked with V: specifically, a state for which there exists at least one external input e ∈ E that takes the system to a state marked with V should also be considered vulnerable and marked with V. One should also worry if all control inputs in U, U ⊆ Σ, take a state to a vulnerable state (because one has no control over when and if external disturbances might occur).

Step 3: Given automaton Gc,o = (Xc,o, Σ, Y, δc,o, λ, x0,c,o, Xm,c,o), mark all states (x1, x2) ∈ Xc,o with V if there exists a vulnerable set Vj such that x2 ⊆ Vj; then, iteratively, mark a state (x′1, x′2) ∈ Xc,o with V if
1) there exists e ∈ E such that δc,o((x′1, x′2), e) is a state that is already marked with V, or
2) for all control inputs u ∈ U, δc,o((x′1, x′2), u) is a state that is already marked with V.¹
This process is continued iteratively until no more states in Xc,o can be marked with V.

¹ Note that if we disable all controllable events we are essentially blocking the system, because we rely on the external disturbances to take us out of this state; thus, unless the state we are at is a target state, this is an undesirable situation. For simplicity, we assume that such states are not target states and we mark them with V; however, exceptions can be easily handled by the approach presented here.
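The Step 3 fixpoint translates directly into code; a minimal sketch (V is the list of vulnerable sets; per footnote 1's convention, a state with no defined control input is treated as vulnerable, since condition 2 then holds vacuously):

```python
def mark_vulnerable(states, delta_co, gc: ControlledFA, V):
    """Step 3: mark (x1, x2) with V if the estimate x2 lies inside some
    vulnerable set; then, iteratively, mark any state from which some
    disturbance reaches a V-state, or all of whose defined control
    inputs lead to V-states."""
    marked = {s for s in states if any(set(s[1]) <= set(Vj) for Vj in V)}
    changed = True
    while changed:
        changed = False
        for s in states - marked:
            # condition 1: some disturbance leads to a V-state
            via_e = any(delta_co.get((s, e)) in marked for e in gc.E)
            # condition 2: all defined control inputs lead to V-states
            # (vacuously true if none is defined; footnote 1)
            u_next = [delta_co[(s, u)] for u in gc.U if (s, u) in delta_co]
            via_u = all(n in marked for n in u_next)
            if via_e or via_u:
                marked.add(s)
                changed = True
    return marked
```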

Step 3 will terminate when all states have been marked with V or when no more states can be marked. At the completion of this step, if the initial state of Gc,o has been marked with V, then the controller has no way of ensuring that the system will not be driven to sets of vulnerable states via external disturbances. Assuming this is not the case, in order to determine whether the system can be driven to one or more target states, we need to examine whether there is a path to a target state (i.e., a state (x1, x2) ∈ Xm,c,o for which x1 ∈ Xm) that can be reached despite the effects of external disturbances. To achieve this we can follow the iterative approach in Step 4.

Step 4: Given automaton Gc,o = (Xc,o, Σ, Y, δc,o, λ, x0,c,o, Xm,c,o), remove all states (x1, x2) ∈ Xc,o that were marked with V in Step 3 and mark the remaining states (x1, x2) ∈ Xm,c,o with D0 (for desirable at distance 0). Then, consider a state (x′1, x′2) ∈ Xc,o; if
1) for all ej ∈ E, δc,o((x′1, x′2), ej) is undefined or leads to a state already marked with some Dj, and
2) there exists at least one control input uj′ ∈ U such that δc,o((x′1, x′2), uj′) is a state that is already marked with some Dj′,²
then we mark state (x′1, x′2) with Dk, where

k = 1 + max( max_j j , min_{j′} j′ ),

with the maximization taken over the indices j of the labels Dj reached via external disturbances in condition 1) and the minimization taken over the indices j′ of the labels Dj′ reachable via control inputs in condition 2).

² Note that if at state (x′1, x′2) no control input exists to take us to a state already marked desirable, then this state might be a blocking state (since we have no control over when and/or if external disturbances occur); for simplicity, we assume that such states are not target states; however, such exceptions can be easily handled by the approach presented here.

Clearly, Step 4 will terminate when all states have been marked with D or when no more states can be marked.
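A sketch of the Step 4 labeling, including the formula for k (vulnerable states are removed first; per footnote 2, states with no desirable control successor are simply left unlabeled):

```python
def mark_desirable(states, delta_co, gc: ControlledFA, vulnerable):
    """Step 4: drop V-states, give level D0 to the remaining marked
    (target) states, then iteratively label a state with
    k = 1 + max(max_j j, min_j' j') once every defined disturbance
    leads to a labeled state and at least one control input does."""
    good = states - vulnerable
    D = {s: 0 for s in good if s[0] in gc.Xm}
    changed = True
    while changed:
        changed = False
        for s in good - D.keys():
            # levels reached via defined disturbances (None if unlabeled)
            e_lvls = [D.get(delta_co[(s, e)]) for e in gc.E
                      if (s, e) in delta_co]
            # levels of already-labeled control successors
            u_lvls = [D[delta_co[(s, u)]] for u in gc.U
                      if (s, u) in delta_co and delta_co[(s, u)] in D]
            if all(l is not None for l in e_lvls) and u_lvls:
                D[s] = 1 + max(max(e_lvls, default=0), min(u_lvls))
                changed = True
    return D

# A control strategy exists iff the initial product state got a D-label:
# D = mark_desirable(states, delta_co, gc, mark_vulnerable(...))
# feasible = (gc.x0, obs_start) in D
```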

At this point we are in a position to informally state the main result of the paper: given a controlled finite automaton Gc = (X, Σ, Y, δ, λ, x0, Xm) and M sets of vulnerable states V1, V2, ..., VM, where Vj ⊆ X for j = 1, 2, ..., M, there exists a control strategy that gets us from the initial state to one of the target states, without exposing that the system state lies strictly within one of the sets of vulnerable states (regardless of the external disturbances), if and only if, at the end of Step 4 of the above procedure, the initial state x0 × X0,obs of the product automaton Gc,o is marked Dd for some d.

To see how one can prove the informal statement above, note first that the vulnerable states (marked with V during Step 3) and their associated incoming and outgoing transitions are removed in Step 4. This essentially tells the controller that it needs to disable certain controllable events in order to avoid entering such vulnerable states. From the remaining states, all (remaining) external inputs lead to non-vulnerable states; otherwise we would have a contradiction with Step 3. This implies that by removing vulnerable states we ensure that we never reach states that expose the set of possible system states to the observer/intruder. [Also note that if a vulnerable state is kept (i.e., a certain controllable input is not disabled), then it is possible that the system will visit that vulnerable state, from which it can then, via a sequence of external disturbances, uncontrollably reach a situation that exposes that the system lies within a set of vulnerable states (or forces the controller to block the system).] Similarly, if the initial state of Gc,o is marked desirable Dd, then, regardless of whether a user input is applied or an external disturbance occurs, the system will move to a state marked Dd′ for d′ < d; eventually, it will be forced to enter a state marked D0, i.e., a target state.

IV. CONCLUSIONS AND FUTURE WORK

In this paper, we considered a particular class of controlled automata with vulnerabilities to intruder actions, and we developed supervisory control strategies that ensure that the system reaches a target state from a given initial condition without exposing that the state of the system lies within any of the sets of vulnerable states (which would allow the intruder to act maliciously without exposing itself). In this process, we made connections with existing notions of current-state opacity and state-based supervisory control. In the future, we plan to consider optimal control strategies (when given costs associated with each control input). We also plan to introduce probabilistic metrics when characterizing system vulnerabilities, and to extend these ideas to modular systems under decentralized control schemes.

REFERENCES

[1] R. Focardi and R. Gorrieri, "A taxonomy of trace-based security properties for CCS," in Proceedings of the 7th Workshop on Computer Security Foundations, June 1994, pp. 126–136.
[2] J. Bryans, M. Koutny, L. Mazaré, and P. Ryan, "Opacity generalised to transition systems," International Journal of Information Security, vol. 7, no. 6, pp. 421–435, November 2008.
[3] A. Saboori and C. N. Hadjicostis, "Notions of security and opacity in discrete event systems," in Proceedings of the 46th IEEE Conference on Decision and Control, December 2007, pp. 5056–5061.
[4] A. Saboori and C. N. Hadjicostis, "Verification of initial-state opacity in security applications of DES," in Proceedings of the 9th International Workshop on Discrete Event Systems, May 2008, pp. 328–333.
[5] W. M. Wonham, Supervisory Control of Discrete Event Systems. [Online]. Available: http://www.control.utoronto.ca/cgi-bin/dldes.cgi
[6] C. Cassandras and S. Lafortune, Introduction to Discrete Event Systems. Kluwer Academic Publishers, 2008.
[7] A. Saboori and C. N. Hadjicostis, "Verification of K-step opacity and analysis of its complexity," in Proceedings of the 48th IEEE Conference on Decision and Control, December 2009, pp. 201–206.
[8] A. Saboori and C. N. Hadjicostis, "Opacity verification in stochastic discrete event systems," to appear in Proceedings of the 49th IEEE Conference on Decision and Control, December 2010.
[9] Y. Lakhnech and L. Mazaré, "Probabilistic opacity for a passive adversary and its application to Chaum's voting scheme," Verimag, Tech. Rep. TR-2005-4, 2005. [Online]. Available: http://www-verimag.imag.fr/index.php?page=techrep-list
[10] R. Janvier, Y. Lakhnech, and L. Mazaré, "Completing the picture: soundness of formal encryption in the presence of active adversaries," in Proceedings of the 14th European Symposium on Programming, ser. Lecture Notes in Computer Science, vol. 3444, April 2005, pp. 172–185.
[11] P. J. Ramadge and W. M. Wonham, "The control of discrete event systems," Proceedings of the IEEE, vol. 77, no. 1, pp. 81–97, January 1989.
