2012 IEEE 25th Computer Security Foundations Symposium

Cache-leakage resilient OS isolation in an idealized model of virtualization

Gilles Barthe
IMDEA Software Institute, Madrid, Spain

Gustavo Betarte, Juan Diego Campo, Carlos Luna
InCo, Facultad de Ingeniería, Universidad de la República, Uruguay

Abstract—Virtualization platforms allow multiple operating systems to run on the same hardware. One of their central goals is to provide strong isolation between guest operating systems; unfortunately, they are often vulnerable to practical side-channel attacks. Cache attacks are a common class of side-channel attacks that exploit the latency between cache hits and misses. We formalize an idealized model of virtualization that features the cache and the Translation Lookaside Buffer (TLB), and that provides an abstract treatment of cache-based side-channels. We then use the model for reasoning about cache-based attacks and countermeasures, and for proving that isolation between guest operating systems can be enforced by flushing the cache upon context switch. In addition, we show that virtualized platforms are transparent, i.e. a guest operating system cannot distinguish whether it executes alone or together with other guest operating systems on the platform. The models and proofs have been machine-checked in the Coq proof assistant.

I. INTRODUCTION

Virtualization technology is increasingly used for protecting high-integrity, safety-critical components in embedded systems (mobile phones, airplanes), and for the secure provisioning of infrastructures in cloud computing. One of its claimed benefits is that guest operating systems are strongly isolated, i.e. they have independent behavior and cannot learn about or influence each other's contents and actions. However, showing that virtualization platforms ensure isolation remains a significant challenge. There are two dimensions to this problem:

• modelling: virtualization platforms are complex programs that interact with many low-level components, for instance: central processing units, memory management units, device drivers. Moreover, many of these components rely on sub-components; for instance, memory management units rely on cache mechanisms that allow lookups to be made on a smaller and faster memory, and thus yield significant performance improvements. As a consequence, virtualization platforms are very large programs whose verification stretches or surpasses the abilities of existing deductive verification tools. More importantly, these tools generally do not capture quantitative aspects of programs, such as execution time. Thus, there are whole classes of attacks that fall outside the execution model purported by most verification methods; this includes cache-timing attacks, which are among the most serious security threats in virtualization platforms. As a result of ignoring side-channels in the execution model, provably secure programs may be insecure for all practical purposes;

• properties: there are three important isolation properties for virtualization platforms. On the one hand, read and write isolation respectively state that guest operating systems cannot read and write memory that they do not own. On the other hand, OS isolation states that the behavior of a guest operating system does not depend on the previous behavior, and does not affect the future behavior, of other operating systems. In contrast to read and write isolation, which are safety properties and can be proved with deductive verification methods, OS isolation is a 2-safety property [51], [10] that requires reasoning about two program executions. Unfortunately, the technology for verifying 2-safety properties is not fully mature, making their formal verification on large and complex programs exceedingly challenging. In particular, we are not aware of previous machine-checked proofs of OS isolation in models that capture side-channel attacks.

Contributions The main contribution of this article is a machine-checked proof of isolation in an idealized model of virtualization where the cache and the TLB may leak information. For tractability and concreteness, we concentrate on memory management in paravirtualization platforms. Our modelling choices are guided by Xen [4], and specifically, by Xen on ARM [25]. Our starting point is an idealized model in the Coq proof assistant, which has been developed previously to verify isolation properties between guest operating systems [7]. We extend this model with a formalization of the cache and Translation Lookaside Buffer (TLB), a specific cache used to map virtual addresses to physical addresses. Then, drawing inspiration from physically observable cryptography [39], we consider an extended model of traces in which operating systems can draw observations on the history of the cache. The resulting model allows reasoning about the class of synchronous access-driven cache-based attacks; in particular, we prove that flushing the cache upon switching between guest operating systems ensures OS isolation and prevents such attacks.

The second main contribution of the paper is a machine-checked proof of transparency. Transparency states that the virtualization platform is a correct abstraction of the underlying hardware, in the sense that a guest operating system cannot distinguish whether it executes alone or together with other systems. Transparency is a 2-safety property; its formulation involves an erasure function which removes all the components of the states that do not belong to some fixed operating system. We define an appropriate erasure, establish its fundamental properties, and finally derive transparency. Summarizing, our contributions are: 1) an idealized model of virtualization that includes cache and TLB, see Section II; 2) a proof of OS isolation, see Sections III and IV; 3) an extended model that supports reasoning about cache-based attacks, and a proof of cache-leakage resilient OS isolation, see Section IV; 4) a proof of transparency, see Section VI. The formal development is available from the project web page1 and can be verified using the Coq proof assistant [50].

II. MODELLING CACHE AND TLB

In this section, we present an idealized model of virtualization that features cache and TLB. First, we define the set of states and the semantics of actions as state transformers. Then, we define a notion of valid state and show that validity is invariant under execution. Finally, we define execution traces. We start by providing some background on cache management.

A. Background on cache management

In order to reduce the overhead necessary to access memory values, CPUs use faster storage devices called caches to store a copy of the data from main memory. When the processor needs to read a memory value, it fetches the value from the cache if it is present (cache hit), or from memory if it is not (cache miss). In systems with virtual memory, the TLB is analogously used to speed up the translation from virtual to physical memory addresses. In the event of a TLB hit, the physical address corresponding to a given virtual address is obtained directly, instead of searching for it in the process page table (which resides in main memory). The cache may be accessed either with a physical address (physically indexed cache) or a virtual address (virtually indexed cache). There are, in turn, two variants of virtually indexed caches: those whose entries are tagged with the virtual address (virtually indexed, virtually tagged or VIVT cache) and those which are tagged with the corresponding physical address (virtually indexed, physically tagged or VIPT cache). We model a VIVT cache and TLB (as in Xen on ARM [25]).

There exist several alternative policies for implementing cache content management, in particular concerning the update and replacement of cache information. A replacement policy is one that specifies the criteria used to remove a value when the cache is full and a new value needs to be stored. Among the most common policies are those that remove either the least recently used value (LRU), the most recently used (MRU), or the least frequently used (LFU). We model an abstract replacement policy which can be refined to any of the three policies just described. A write policy specifies how the modification of a cache entry impacts the memory structures: a write-through policy, for instance, requires that values are written both in the cache and in main memory. A write-back policy, on the other hand, requires that values are only modified in the cache and marked dirty, and updates to main memory are performed when a dirty entry is removed from the cache. We model a write-through policy. Enhancing the model with an abstract write policy that can be refined to both policies is left for future work.

B. Basic concepts and notations

We freely use enumerated types, option types, lists, streams and records. Enumerated types and parametric sum types are defined using Haskell-like notation; for example, we define for every type T the type option T def= None | Some (t : T). The type of streams of type A is written [A]∞. Objects of type [A]∞ are constructed with the (infix) operator ::, hence x :: xs is of type [A]∞ whenever x is of type A and xs is of type [A]∞. Given s : [A]∞ we let s[i] denote the i-th element of s. Record types are of the form {l1 : T1, ..., ln : Tn}. Field selection and field update are respectively written r·l and r[l := v]; we also use simultaneous field update, which is defined in the usual way.

We make extensive use of partial maps and bounded partial maps: the type of partial maps from objects of type A into objects of type B is written A → B, and the type of partial maps from A to B whose domain is of size smaller than or equal to k (where k is a natural number) is written A →k B. Application of a map m to an object a of type A is denoted m[a], and map update is written m[a := b], where b overwrites the value, if any, associated to the key a.
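The VIVT cache with write-through update and LRU-style replacement described above can be sketched in Python. This is purely illustrative and not part of the paper's Coq development; the names (`Cache`, `read`, `write`, the dict-based page table) are assumptions of this sketch.

```python
from collections import OrderedDict

class Cache:
    """Bounded virtually indexed cache with an LRU replacement policy
    and a write-through update policy (illustrative sketch only)."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.entries = OrderedDict()  # vadd -> cached page value

    def read(self, vadd, memory, page_table):
        """Return (value, hit?). On a miss, fetch from memory and cache it."""
        if vadd in self.entries:                 # cache hit
            self.entries.move_to_end(vadd)       # refresh LRU order
            return self.entries[vadd], True
        madd = page_table[vadd]                  # translate on a miss
        value = memory[madd]
        self._add(vadd, value)
        return value, False

    def write(self, vadd, value, memory, page_table):
        """Write-through: update both the cache and main memory."""
        memory[page_table[vadd]] = value
        self._add(vadd, value)

    def _add(self, vadd, value):
        if vadd not in self.entries and len(self.entries) >= self.max_size:
            self.entries.popitem(last=False)     # evict least recently used
        self.entries[vadd] = value
        self.entries.move_to_end(vadd)

memory = {0: "a", 1: "b", 2: "c"}
pt = {10: 0, 11: 1, 12: 2}
c = Cache(max_size=2)
print(c.read(10, memory, pt))   # miss: ('a', False)
print(c.read(10, memory, pt))   # hit:  ('a', True)
c.write(11, "B", memory, pt)
print(memory[1])                # write-through updated memory: B
```

Swapping `_add`'s eviction rule for an MRU or LFU choice gives the other two replacement policies the abstract model can be refined to.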

C. States

We start from a type os ident of identifiers for operating systems (OSs), and a predicate trusted os that separates trusted from untrusted OSs. Moreover, we introduce two-valued types exec mode and os activity to distinguish whether OSs execute in user mode or in supervisor mode, and whether they are in running mode or waiting mode; the latter arises when the OS is waiting for a hypercall to be resolved:

  exec mode def= usr | svc
  os activity def= running | waiting

Figure 1 shows a diagram of the memory model of the platform. It involves three types of addresses: the machine addresses are modelled by the type madd and correspond to the real machine memory; the physical addresses are modelled by the type padd and are used by guest OSs; and the virtual addresses are modelled by the type vadd and are used by

1 http://www.fing.edu.uy/inco/grupos/gsi/proyectos/virtualcert.php


applications running on the OSs. These addresses are related by two mappings, which are described below. Each OS has a designated portion of its virtual address space that is reserved for the hypervisor to attend hypercalls. The figure also shows the cache and the TLB. The cache is indexed with virtual addresses and holds a (partial) copy of the memory pages. The TLB is used in conjunction with the current page table of the active OS to map virtual to machine addresses.

[Fig. 1. Memory model of the platform with cache and TLB]

The states of the platform are modelled by a record type:

  State def= {
    active os  : os ident,
    mode       : exec mode,
    activity   : os activity,
    oss        : oss map,
    hypervisor : hypervisor map,
    memory     : system memory,
    cache      : cache struct,
    tlb        : tlb struct }

We briefly explain the components of the state. The first three components provide basic information about the status of the operating systems: active os is the active OS, mode its execution mode, and activity its activity status. The fourth component is a mapping that associates to each OS its current page table and, if any, its pending hypercall (there is always at most one hypercall to be resolved). Formally,

  oss map def= os ident → { curr page : padd, hcall : option Hyper call }
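As an aside, the state record can be transliterated into a Python dataclass; the field names follow the paper informally, while the concrete representations (dicts, strings) are assumptions of this sketch, not the Coq definitions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OSInfo:                       # one entry of oss map
    curr_page: int                  # physical address of the current page table
    hcall: Optional[str] = None     # at most one pending hypercall

@dataclass
class State:
    active_os: str                  # os ident
    mode: str                       # "usr" | "svc"
    activity: str                   # "running" | "waiting"
    oss: dict                       # os ident -> OSInfo
    hypervisor: dict                # os ident -> (padd -> madd)
    memory: dict                    # madd -> page
    cache: dict = field(default_factory=dict)   # vadd -> page (bounded)
    tlb: dict = field(default_factory=dict)     # vadd -> madd (bounded)

s = State(active_os="os1", mode="svc", activity="waiting",
          oss={"os1": OSInfo(curr_page=0)},
          hypervisor={"os1": {0: 100}},
          memory={100: ("PT", {})})
print(s.oss["os1"].hcall)   # None: no pending hypercall
```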

The remaining four components are mappings that model the memory. The first mapping relates, for each operating system, its physical addresses to machine addresses, and is formalized as an object of type:

  hypervisor map def= os ident → (padd → madd)

The second mapping associates a page to a machine address, and is formalized as an object of type:

  system memory def= madd → page

As in [7], the model assumes that machine addresses correspond to a memory page, and have at most one owner, either a guest OS or the hypervisor. The contents of a page can be either (readable/writable) values, an OS page table that maps virtual addresses to machine addresses, or nothing; note that a page might have been created without having been initialized, hence the use of option types. Formally:

  content def= RW (v : option Value)
           | PT (va to ma : vadd → madd)
           | Other

  page owner def= Hyp
              | Os (osi : os ident)
              | No Owner

  page def= { page content : content, page owned by : page owner }

The last two components model the cache and the TLB, and are formalized as objects of respective types:

  cache struct def= vadd →max size cache page
  tlb struct def= vadd →max size tlb madd

where max size cache and max size tlb are some fixed positive constants.

The structure of page tables is kept simple in this extended version of the model. In contrast to related works, which are discussed in Section VII, our model does not consider, for instance, nested or multi-level page tables.

D. Actions and their semantics

The behavior of the platform is modelled by defining a set of actions, and providing their semantics as state transformers. Actions include calls to the hypervisor, changes in memory mappings, and switching between modes. Here we focus on some actions that interfere with the cache and the TLB; our formalization, however, covers a larger set of actions; see [7], [43]. The behavior of actions is specified by a precondition Pre and a postcondition Post of respective types:

  Pre  : State → Action → Prop
  Post : State → Action → State → Prop

The precondition defines the conditions for an action to be performed normally; specifying the behavior of the system in error cases is left for future work. Likewise, the postcondition defines the properties of the state after executing the action.
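The Pre/Post style of specification can be mimicked operationally: an action becomes a (precondition, transformer) pair. The hedged Python sketch below uses a reading action in the spirit of read_hyper as the example; the dict-based state layout and the direct va-to-madd page table are simplifications of this sketch, not the paper's definitions.

```python
# Illustrative sketch: an action is a precondition plus a state transformer.

def pre_read_hyper(s, va):
    """read_hyper va (simplified): the hypervisor must be running
    (activity = waiting) and va must be mapped."""
    return s["activity"] == "waiting" and va in s["page_table"]

def post_read_hyper(s, va):
    """Only the cache and the TLB change: they record the translation
    and the page fetched for va (write-through: memory untouched)."""
    s2 = dict(s, cache=dict(s["cache"]), tlb=dict(s["tlb"]))
    ma = s2["page_table"][va]
    s2["tlb"][va] = ma                    # tlb add
    s2["cache"][va] = s2["memory"][ma]    # cache add
    return s2

def exec_action(s, pre, post, *args):
    assert pre(s, *args), "precondition violated"
    return post(s, *args)

s0 = {"activity": "waiting", "page_table": {7: 100},
      "memory": {100: "page-contents"}, "cache": {}, "tlb": {}}
s1 = exec_action(s0, pre_read_hyper, post_read_hyper, 7)
print(s1["cache"][7])   # page-contents
print(s0["cache"])      # {} -- the pre-state is unchanged
```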


  read_hyper va: The hypervisor reads virtual address va.

  write va val: A guest OS writes value val in virtual address va.

  new_untrusted o va pa: The hypervisor adds (on behalf of the untrusted OS o) a new ordered pair (mapping virtual address va to the machine address ma) to the current memory mapping of the untrusted OS o, where pa translates to ma for o.

  switch o: The hypervisor sets o to be the active OS.

  lswitch_trusted pa: The trusted active OS changes its current memory mapping to be the one located at physical address pa. This action corresponds to a traditional context switch by the active OS.

Fig. 2. Selected actions

Pre s (read_hyper va) def=
  ∃(ma : madd)(pg : page),
    va mapped to ma(s, va, ma) ∧ va mapped to pg(s, va, pg) ∧
    is RW(pg.page content) ∧ s.activity = waiting

Post s (read_hyper va) s' def=
  s' = s · [ cache := cache add(s.cache, va, pg),
             tlb   := tlb add(s.tlb, va, ma) ]

Pre s (write va val) def=
  ∃(ma : madd)(pg : page),
    os accessible(va) ∧ va mapped to ma(s, va, ma) ∧
    va mapped to pg(s, va, pg) ∧ is RW(pg.page content) ∧
    s.activity = running

Post s (write va val) s' def=
  let new pg : page = { page content := RW(Some val),
                        page owned by := pg.page owned by } in
  s' = s · [ mem   := s.memory[ma := new pg],
             cache := cache add(fix cache synonym(s.cache, ma), va, new pg),
             tlb   := tlb add(s.tlb, va, ma) ]

Pre s (new_untrusted o va pa) def=
  os accessible(va) ∧ s.activity = waiting ∧ ¬trusted os(o) ∧
  page of OS(s, o, pa) ∧ s.oss[o].hcall = Hypercall new(va, pa)

Post s (new_untrusted o va pa) s' def=
  ∃(ma : madd)(ptaddr : madd),
    s.hypervisor[o][pa] = ma ∧
    os current vadd mapping madd(s, o, ptaddr) ∧
    s' = s · [ mem   := update pt page(s.memory, ptaddr, va, ma),
               oss   := s.oss[o := (s.oss[o].curr page, None)],
               cache := cache drop(s.active os, s.cache, o, va),
               tlb   := tlb drop(s.active os, s.tlb, o, va) ]

Pre s (switch o) def=
  s.activity = waiting ∧ s.oss[o].hcall = None

Post s (switch o) s' def=
  s' = s · [ active os := o,
             cache := cache flush(s.cache),
             tlb   := tlb flush(s.tlb) ]

Pre s (lswitch_trusted pa) def=
  ∃(ma : madd),
    trusted os(s.active os) ∧ s.activity = running ∧
    s.hypervisor[s.active os][pa] = ma ∧
    is PT((s.memory[ma]).page content) ∧ page of OS(s, s.active os, pa)

Post s (lswitch_trusted pa) s' def=
  s' = s · [ oss   := s.oss[s.active os := (pa, s.oss[s.active os].hcall)],
             cache := cache flush(s.cache),
             tlb   := tlb flush(s.tlb) ]

Fig. 3. Formal specification of access/update/switch actions
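The switch specification is the crux of the countermeasure: changing the active OS empties both the cache and the TLB. A minimal operational sketch, assuming a dictionary-based state (an assumption of this sketch, not the formal model):

```python
# Illustrative sketch of the switch action: flush cache and TLB when
# a new OS becomes active.

def pre_switch(s, o):
    """Hypervisor must be running and o must have no pending hypercall."""
    return s["activity"] == "waiting" and s["oss"][o]["hcall"] is None

def post_switch(s, o):
    s2 = dict(s)
    s2["active_os"] = o
    s2["cache"] = {}   # cache flush
    s2["tlb"] = {}     # tlb flush
    return s2

s = {"activity": "waiting", "active_os": "osV",
     "oss": {"osA": {"hcall": None}},
     "cache": {7: "victim-page"}, "tlb": {7: 100}}
assert pre_switch(s, "osA")
s2 = post_switch(s, "osA")
print(s2["active_os"], s2["cache"], s2["tlb"])   # osA {} {}
```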


Figure 2 describes informally the effects of the selected actions that access/update memory pages, namely read_hyper, write and new_untrusted, or switch operating systems, namely switch and lswitch_trusted. Then, Figure 3 provides their axiomatic semantics. For the sake of readability, we consider that the scope of the existentially quantified variables in preconditions extends to the respective postconditions.

The precondition of the action read_hyper va requires that there exists a machine address ma to which va is mapped either in the TLB or in the current page table of the active OS (va mapped to ma), that va has a page pg mapped either in the cache or in memory (va mapped to pg) that is readable/writable (is RW), and that the active OS is in waiting mode (or equivalently, that the hypervisor is running). The postcondition establishes that the states before and after execution only differ in the cache and the TLB, which are modified respectively to store the mappings of the virtual address to its corresponding machine address and page. Formally, we say that a cache c2 is the result of updating a cache c1 with a pair va and pg, written c2 = cache add(c1, va, pg), iff pg = c2[va] and for all virtual addresses va' and pages pg',

  va' ≠ va → pg' = c2[va'] → pg' = c1[va']

The definition of the formula tlb2 = tlb add(tlb1, va, ma) is analogous.

The precondition of the action write va val requires that va is accessible by the OSs (os accessible(va)), that there exists a machine address ma to which va is mapped, that the page indexed by the virtual address va is readable/writable, and that the active OS is running. The postcondition establishes that the state after the execution of the action differs in the value (val) of the page associated to ma and in the values that are stored in the data cache and the TLB. In this paper, we adopt a write-through policy with cache update; moreover, we fix synonyms (using fix cache synonym) before adding entries to the cache: this allows us to avoid aliasing problems that arise in virtually indexed caches. Given a cache c1 and a machine address ma, the result of fix cache synonym(c1, ma) is a new cache c2 such that all virtual addresses that are indexes of c2 translate to machine addresses distinct from ma.

The precondition of the action new_untrusted o va pa requires, in the first place, that va is accessible by the OSs, the hypervisor is running, and o is an untrusted OS. In addition, it also requires that there exists a machine address associated by the hypervisor mapping to the physical address pa, which is the address of a page owned by the OS o (formalized by the predicate page of OS), and that the operating system o has a pending new hypercall (Hypercall new). The postcondition establishes that in the result state: i) the current page table ptaddr of o (os current vadd mapping madd(o, ptaddr)) is updated with the association of virtual address va with the machine address ma, where ma is the machine address associated to pa by the hypervisor mapping (update pt page(s.memory, ptaddr, va, ma)); ii) there is no longer a hypercall pending for o; and iii) if o is the active OS then the virtual address va is dropped from both the cache (cache drop(s.active os, s.cache, o, va)) and the TLB (tlb drop(s.active os, s.tlb, o, va)); otherwise the two structures remain unchanged.

The correct execution of the action switch o requires that the hypervisor is running and that there is no hypercall pending for the OS o. The effect of this action is to set o to be the active OS and to perform the (full) flushing of the cache and the TLB. The effect of both cache flush and tlb flush is to return an empty cache and an empty TLB, respectively.

The precondition of the action lswitch_trusted pa requires the active OS to be trusted and running, and that there exists a machine address ma, associated by the hypervisor mapping to the physical address pa, which is the address of a PT page owned by the active OS. The effect of this action is to set pa as the address of the current page table of the active OS and to perform the (full) flushing of the cache and the TLB.

E. Valid states and one-step execution

The notion of valid state is formalized by a predicate valid state that captures essential properties of the platform. The following are examples of properties verified by valid states: i) a trusted OS has no pending hypercalls; ii) if the hypervisor or a trusted OS (respectively, an untrusted OS) is running, the processor must be in supervisor (respectively, user) mode; iii) the hypervisor maps an OS physical address to a machine address owned by that same OS; iv) all page tables of an OS o map accessible virtual addresses to pages owned by o and non-accessible ones to pages owned by the hypervisor; v) the size of the cache and TLB mappings does not exceed their bound; vi) if pg is associated to va in the data cache, then va must be translated to some machine address ma in the active memory mapping, and s.memory[ma] = pg; vii) if va is translated into ma according to the TLB, then the machine address ma is associated to va in the active memory mapping; where the active memory mapping is the current memory mapping of the active OS. Notice that conditions vi) and vii) are necessary to ensure that the main memory has been updated according to the write-through policy. The notion of valid state is used to define the relation of one-step execution →:

  s −a→ s' def= valid state(s) ∧ Pre s a ∧ Post s a s'

The notation s −a→ s' may be read as: the execution of the action a in a valid state s results in a new state s'. One-step execution preserves valid states:

Lemma 1 (Validity is invariant).
  ∀ (s s' : State) (a : Action), s −a→ s' → valid state(s')

F. Traces

Isolation and transparency properties are expressed on execution traces of the form:

  s0 −a0→ s1 −a1→ s2 −a2→ s3 ...


such that every execution step si −ai→ si+1 is valid. In the sequel, we let t[i] denote the i-th state of a trace t, and we use s −a→ t to denote the trace obtained by prepending the valid execution step s −a→ t[0] to a trace t.
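Lemma 1 can be exercised on an executable sketch of the model: below, a toy valid_state checks only the cache bound (condition v) and the write-through condition (condition vi), and one-step execution asserts the precondition before applying the transformer. All names and the state layout are illustrative assumptions, not the Coq definitions.

```python
# Illustrative sketch of one-step execution and validity preservation.

def valid_state(s):
    """Toy validity: the cache respects its bound and every cached
    page matches main memory (write-through)."""
    if len(s["cache"]) > s["max_cache"]:
        return False
    return all(s["memory"][s["page_table"][va]] == pg
               for va, pg in s["cache"].items())

def step(s, action):
    """One-step execution: requires valid_state(s) and the action's
    precondition; returns the post-state."""
    pre, post = action
    assert valid_state(s) and pre(s)
    return post(s)

def do_write7(s):
    """Write-through write of value 'v' at virtual address 7."""
    mem = dict(s["memory"]); mem[s["page_table"][7]] = "v"
    cache = dict(s["cache"]); cache[7] = "v"
    return dict(s, memory=mem, cache=cache)

write7 = (lambda s: s["activity"] == "running", do_write7)

s0 = {"activity": "running", "page_table": {7: 100},
      "memory": {100: "old"}, "cache": {}, "max_cache": 8}
s1 = step(s0, write7)
assert valid_state(s1)   # a Lemma 1 instance: validity is preserved
print(s1["memory"][100], s1["cache"][7])   # v v
```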

III. ISOLATION

We have mechanically verified three isolation properties: 1) read isolation: no OS can read memory that does not belong to it; 2) write isolation: an OS cannot modify memory that it does not own; 3) OS isolation: the behavior of an OS does not depend on other OSs' states. Read and write isolation are safety properties, whereas OS isolation is a 2-safety property, i.e. it is cast in terms of two executions of the system. We focus on OS isolation, which is by far the most challenging property.

OS isolation establishes that the internal state of an OS is not influenced by other operating systems and only depends on the actions it performs as a running and active OS. To capture this intuition, we first define for each operating system osi an equivalence relation ≈act_osi between traces, so that two traces are related iff they perform the same set of actions w.r.t. osi:

  t1 ≈act_osi t2    ¬os action(s, a, osi)
  ───────────────────────────────────────
        (s −a→ t1) ≈act_osi t2

  t1 ≈act_osi t2    ¬os action(s, a, osi)
  ───────────────────────────────────────
        t1 ≈act_osi (s −a→ t2)

  t1 ≈act_osi t2    os action(s1, a, osi)    os action(s2, a, osi)
  ────────────────────────────────────────────────────────────────
        (s1 −a→ t1) ≈act_osi (s2 −a→ t2)

where os action(s, a, osi) holds if, in the state s, osi is the active and running OS and therefore is executing action a, or otherwise the hypervisor is executing the action a on behalf of osi. Note that our definition of ≈act_osi generalizes over previous work, and especially [7], by allowing related traces to differ in the number of actions performed by other operating systems.

OS isolation states that traces related by ≈act_osi are also related by a stronger form of state-based equivalence between traces. We first define equivalence between states, and then extend it to traces. As a preliminary step, we define notions of equivalence for the cache and the TLB. For this purpose, we introduce for every state s three helper functions cs : vadd → page, ms : vadd → page, and ts : vadd → page. The function cs returns the page in the cache associated with the given virtual address, whereas the function ms returns the page mapped in memory, and finally ts the page mapped through the TLB. Then, we say that two states s1 and s2 are osi-cache equivalent iff either osi is not active in both states, or it is active and for all accessible virtual addresses va:

  (va ∈ dom(cs1) ∧ va ∈ dom(cs2) → cs1(va) = cs2(va))
  ∧ (va ∈ dom(cs1) ∧ va ∉ dom(cs2) → cs1(va) = ms2(va))
  ∧ (va ∉ dom(cs1) ∧ va ∈ dom(cs2) → ms1(va) = cs2(va))
  ∧ (va ∉ dom(cs1) ∧ va ∉ dom(cs2) → ms1(va) = ms2(va))

The four cases of the conjunction respectively correspond to a cache hit in both states, a hit in s1 and a miss in s2 (and vice versa), and misses in both states. This notion of equivalence ensures that, in both states, osi will read the same value (either from the cache or from memory) for all accessible virtual addresses. osi-TLB equivalence is defined similarly, using the function t instead of c.

We can now introduce the notion of osi-equivalence between states. Two states s1 and s2 are osi-equivalent, written s1 ≡osi s2, iff: i) osi is the active OS in both states and the processor mode is the same, or the active OS is different from osi in both states; ii) osi has the same hypercall in both states, or no hypercall in both states; iii) the current page tables of osi are the same in both states; iv) all page table mappings of osi that map a virtual address to an RW page in one state must map that address to a page with the same content in the other; v) the hypervisor mappings of osi in both states are such that if a given physical address maps to some RW page, it must map to a page with the same content in the other state; vi) the states are cache and TLB equivalent for osi.

Then, we define a relation of osi-equivalence between two traces: formally, two traces t1 and t2 are osi-equivalent, written t1 ≈osi t2, iff

  t1 ≈osi t2    ¬os action(s, a, osi)
  ───────────────────────────────────
        (s −a→ t1) ≈osi t2

  t1 ≈osi t2    ¬os action(s, a, osi)
  ───────────────────────────────────
        t1 ≈osi (s −a→ t2)

  t1 ≈osi t2    s1 ≡osi s2    os action({s1, s2}, a, osi)
  ───────────────────────────────────────────────────────
        (s1 −a→ t1) ≈osi (s2 −a→ t2)

where os action({s1, s2}, a, osi) is a shorthand for the conjunction of os action(s1, a, osi) and os action(s2, a, osi). OS isolation can now be formulated precisely: any two osi-action equivalent traces are also osi-equivalent, provided their initial states are osi-equivalent.

Proposition 2 (OS isolation).
  ∀ (t1 t2 : Trace) (osi : os ident),
    t1 ≈act_osi t2 → t1[0] ≡osi t2[0] → t1 ≈osi t2

The proof of OS isolation is based on co-induction principles and on two unwinding lemmas. The first lemma states that equivalence is preserved by all actions.

Lemma 3 (Step-consistent unwinding lemma).
  ∀ (s1 s1' s2 s2' : State) (a : Action) (osi : os ident),
    s1 ≡osi s2 → s1 −a→ s1' → s2 −a→ s2' → s1' ≡osi s2'

The second lemma states that execution does not alter the state of non-active operating systems.

Lemma 4 (Locally preserves unwinding lemma).
  ∀ (s s' : State) (a : Action) (osi : os ident),
    ¬os action(s, a, osi) → s −a→ s' → s ≡osi s'

We conclude this section by observing that there are two facets to OS isolation. To make matters precise, consider two operating systems osA (the attacker) and osV (the victim). The


instantiation of Proposition 2 to osV entails that osV cannot be influenced by osA. On the other hand, the instantiation of Proposition 2 to osA reflects the intuition that osA cannot learn information about osV; otherwise, osA could perform actions that lead to states, and hence traces, that are not osA-equivalent.
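The role that flush-on-switch plays in these results can be illustrated by a toy two-run experiment (purely illustrative, not derived from the Coq model): with flushing, whatever the victim did before the context switch, the attacker sees the same empty cache afterwards, so the two runs are indistinguishable; without flushing, the resident lines leak the victim's access pattern.

```python
# Illustrative two-run experiment: the victim fills the cache, then
# control switches to the attacker, who probes the cache contents.

def run(victim_accesses, flush_on_switch):
    cache = {}
    for va in victim_accesses:           # osV executes, filling the cache
        cache[va] = f"page-{va}"
    if flush_on_switch:                  # switch to osA: cache flush
        cache = {}
    return set(cache)                    # what osA can probe after the switch

# With flushing, runs with different victim behavior look identical:
print(run([1, 2, 3], True) == run([9], True))    # True
# Without flushing, the attacker distinguishes the two runs:
print(run([1, 2, 3], False) == run([9], False))  # False
```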

IV. CACHE-LEAKAGE RESILIENT OS ISOLATION

The purpose of this section is to prove OS isolation in settings where cache information is leaked to operating systems. We first review the state of the art in side-channels. Then, we introduce our model of leakage and define a notion of probing attack. Finally, we show that the flushing policy that is enforced in our model ensures cache-leakage resilient OS isolation, and conclude that probing attacks are not possible.

A. Side-channel attacks and leakage resilience

Side-channels, such as execution time and power consumption, provide surprisingly effective attack vectors for gaining privileged information that would otherwise remain protected by deployed security mechanisms. Prominent examples of side-channel attacks include Kocher's timing attack on RSA [28] and Kocher et al.'s differential power analysis of DES [29]. In this paper, we are concerned with cache-based attacks, i.e. attacks that exploit the latency between cache hits and cache misses to derive which computations were previously executed. The power of cache-based attacks is illustrated in the works of Bernstein [9], Tromer et al. [54], and Gullasch et al. [21], which show practical attacks that recover cryptographic keys of AES implementations. At about the same time, Cohen [11] reported on cache-based attacks in the Microsoft hypervisor. More recently, Ristenpart et al. [44] showed that cache-based attacks are not confined to closed systems, and can be realized in cloud architectures based on virtualization. Finally, cold-boot attacks [22] provide another example of attacks that exploit cache behavior.

The proliferation of side-channel attacks has spurred security researchers, and in particular cryptographers, to integrate side-channels in their models and analyses. Physically observable cryptography [39] is one example of a model that aims at providing a realistic framework for cryptographic proofs, by giving adversaries the ability to draw observations about the physical execution of programs; related models include the Bounded Storage Model [12] and leakage-resilient cryptography; see [13] for a recent account of the field. One essential component of physically observable cryptography is a leakage oracle that can be called by the adversary: more precisely, the adversary is allowed to call the oracle with an observation function f from states to some observation domain O, and obtains as a response the value f(s), where s is the current state of the system. The leakage function captures at an abstract level and in a uniform manner different side-channels (such as timing or power consumption) and yields a tractable model in which to reason about the security of cryptographic constructions, in the style of provable security. In the next paragraphs, we embed ideas from physically observable and leakage-resilient cryptography in our model.

B. Modeling leakage

Our model focuses on access-driven attacks. For concreteness, we focus on the class of probing attacks, where a malicious operating system osA (the attacker) adaptively probes the state to retrieve information about previous execution steps, and then launches an attack to learn information otherwise private to some operating system osV (the victim). The specifics of probing are determined by the adversarial setting and the side-channel the adversary exploits.

Probing attacks can be captured in an extension of our model with observations. Let Obs be a set of observations, and F be a set of observation functions that map states to observations. Informally, two traces t1 and t2 characterize a probing attack if, for some "matching points" i and j, either t1[i] ≢osA t2[j], or f(t1[i]) ≠ f(t2[j]) for some observation function f. The notion of "matching points" can be made precise by introducing a notion of execution and observation trace, and by lifting the definitions of ≈act_osi and ≈osi to them. We proceed as follows. First, we define a notion of execution and observation step between pairs of states and observations. Formally, ⟨s, o⟩ −a,f→ ⟨s', o'⟩ is defined by the clause:

   a valid state(s) ∧ s − → s ∧ f (s ) = o

Second, we define an observation-and-execution trace, or otrace, as an object of the form a0 ,f0 a1 ,f1 a2 ,f2 s0 , o0 − −−→ s1 , o1 −−−→ s2 , o2 −−−→ s3 , o3 . . . ai ,fi such that every execution step si , oi − −−→ si+1 , oi+1 is valid, and o0 = no obs, where no obs is a distinguished observation that yields no information. All notations for traces naturally extend to o-traces. Third, we define counterparts for ≈act osi and ≈osi for o-traces; see Figure 4. In comparison with the previous section, the definitions add an additional premise in the rule for lockstep execution. Specifically, ≈act osi requires that the observation functions coincide, and ≈osi requires that the outcome of the observations coincide. A probing attack from the operating system osA is a pair of traces t1 and t2 such that: • t1 [0].st ≡osA t2 [0].st; act • t1 ≈osA t2 ; • t1 ≈osA t2 ; where t[i].st denotes the state component of t[i]. The notion of probing attack is very general, and can be specialized to many settings. For the purpose of this paper, we focus on cache-based attacks—TLB-based attacks could be modelled similarly—and let Obs be the set of functions of operating systems to sets of virtual addresses. We consider a single observation function cache obs such that for every state s and operating system osi that is not active at state s , cache obs(s , osi) is the set of virtual addresses owned by osi that are present in the cache c—we define cache obs(s , s · active os) as the empty set.
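The execution-and-observation step and the observation function cache_obs can be illustrated with a minimal Python sketch. The names (State, cache_obs, step) and the flat cache representation are ours, chosen for illustration; they are not the definitions of the Coq development:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    active_os: str
    # cache maps virtual addresses to (owner, value) pairs
    cache: dict = field(default_factory=dict)
    memory: dict = field(default_factory=dict)

def cache_obs(s: State, osi: str):
    """Virtual addresses owned by osi present in the cache;
    empty by definition when osi is the active OS."""
    if osi == s.active_os:
        return set()
    return {va for va, (owner, _) in s.cache.items() if owner == osi}

def step(s: State, o, action, f):
    """One execution-and-observation step: perform the action on s,
    then apply the observation function f to the successor state."""
    s2 = action(s)
    return s2, f(s2)

# Example: a context switch that does NOT flush the cache makes the
# previously active OS's cached addresses observable.
s0 = State("osV", {0x10: ("osV", 7)})
to_osA = lambda st: State("osA", dict(st.cache), dict(st.memory))
s1, o1 = step(s0, None, to_osA, lambda st: cache_obs(st, "osV"))
assert cache_obs(s0, "osV") == set()   # the active OS observes nothing
assert o1 == {0x10}                    # after the switch, osV's line is visible
```

The final assertion shows exactly the leakage that the flushing policy of the model is designed to eliminate.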


        ¬os_action(s, a, osi)    t1 ≈act_osi t2
  -----------------------------------------------
          ((s, o) −a,f→ t1) ≈act_osi t2

        ¬os_action(s, a, osi)    t1 ≈act_osi t2
  -----------------------------------------------
          t1 ≈act_osi ((s, o) −a,f→ t2)

      os_action({s1, s2}, a, osi)    t1 ≈act_osi t2
  ----------------------------------------------------
   ((s1, o1) −a,f→ t1) ≈act_osi ((s2, o2) −a,f→ t2)

        ¬os_action(s, a, osi)    t1 ≈osi t2
  --------------------------------------------
          ((s, o) −a,f→ t1) ≈osi t2

        ¬os_action(s, a, osi)    t1 ≈osi t2
  --------------------------------------------
          t1 ≈osi ((s, o) −a,f→ t2)

   os_action({s1, s2}, a, osi)    s1 ≡osi s2    f(t1[0]) = f(t2[0])    t1 ≈osi t2
  --------------------------------------------------------------------------------
              ((s1, o1) −a,f→ t1) ≈osi ((s2, o2) −a,f→ t2)

Fig. 4. Action-equivalence and state-observation equivalence of traces
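For finite traces, the skip and lockstep rules of Figure 4 amount to projecting out the steps that do not belong to osi. The sketch below approximates the coinductive rules by such a projection; the step representation and helper names are illustrative, not taken from the Coq development (in particular, the s1 ≡osi s2 premise of the lockstep rule is elided):

```python
# Finite-trace sketch of the Figure 4 equivalences, by projection.

def belongs_to(step, osi):
    # Stands in for os_action(s, a, osi): the step is performed by osi,
    # or by the hypervisor on behalf of osi.
    return step["owner"] == osi

def act_equiv(t1, t2, osi):
    """t1 ≈act_osi t2: skipping other OSs' steps, both traces perform
    the same osi actions with the same observation functions."""
    proj = lambda t: [(s["action"], s["f"]) for s in t if belongs_to(s, osi)]
    return proj(t1) == proj(t2)

def obs_equiv(t1, t2, osi):
    """t1 ≈osi t2: as act_equiv, but the observation outcomes on osi
    steps must coincide as well (the extra lockstep premise)."""
    proj = lambda t: [(s["action"], s["f"], s["obs"]) for s in t if belongs_to(s, osi)]
    return proj(t1) == proj(t2)

# A probing attack is then a pair of traces with equal attacker actions
# (act_equiv) but distinguishable observations (not obs_equiv):
t1 = [{"owner": "osV", "action": "read va",  "f": "cache_obs", "obs": frozenset()},
      {"owner": "osA", "action": "probe",    "f": "cache_obs", "obs": frozenset({"va"})}]
t2 = [{"owner": "osV", "action": "read va'", "f": "cache_obs", "obs": frozenset()},
      {"owner": "osA", "action": "probe",    "f": "cache_obs", "obs": frozenset({"va'"})}]
assert act_equiv(t1, t2, "osA") and not obs_equiv(t1, t2, "osA")
```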

Observe that the function cache_obs satisfies several of the axioms laid down by Micali and Reyzin for their model of physically observable computation [39], namely: 1) Only computation leaks information: the only unaccessed memory that may leak information is the memory that is moved to the cache and back; 2) Information leakage depends on the chosen measurement: not all the information that might leak is observed (notice, though, that our model admits adaptively and adversarially chosen measurements); 3) Leaked information is (efficiently) computable from the internal configuration of the observed device: the function cache_obs is computed from a portion of the state of the platform, namely the current cache and an identifier of a guest operating system.

C. OS isolation

OS isolation extends to the cache-leakage setting. It follows that probing attacks are impossible. Below we let oTrace be the set of o-traces.

Proposition 5 (Cache-leakage resilient OS isolation).
∀ (t1 t2 : oTrace) (osi : os_ident), t1 ≈act_osi t2 → t1[0].st ≡osi t2[0].st → t1 ≈osi t2

The proof of the proposition is based on the following lemma.

Lemma 6 (Empty observation lemma).
∀ (s : State) (osi : os_ident), valid_state(s) → cache_obs(s, osi) = ∅

The proof of this lemma critically depends on the fact that the cache is flushed at context switches. In fact, one can easily find probing attacks in a modified semantics that leaves the cache unmodified at context switches. For instance, consider the following two partial traces, which start in two osA-equivalent states:
• the victim operating system, which is currently active, performs a read action read va; then the hypervisor performs a switch and sets osA to be the active OS;
• the victim operating system, which is currently active, performs a read action read va′; then the hypervisor performs a switch and sets osA to be the active OS.
In some situations, the caches will differ after the read, so that the operating system osA can discriminate between the two traces.

D. Adaptive attackers

In order to model adaptive attackers, our machine-checked proof of Proposition 5 is cast in a slightly more general setting, where the operational semantics is defined relative to generalized actions, i.e. maps from observations to actions, and the observation functions take as input a valid transition step. For instance, the execution relation s, o −A,f→ s′, o′ is defined by the clause:

    valid_state(s) ∧ s −A(o)→ s′ ∧ f(s, s′, A(o)) = o′

The notion of valid trace and the statement of isolation remain unchanged, although the relations ≈act_osi and ≈osi need to be adapted to this new setting. The full details can be found in the accompanying formalization, and will be reported in the long version of the paper.

V. DISCUSSION

Our isolation theorems are established under specific cache policies, for a specific countermeasure against cache-timing attacks, and for a specific model of cache. In this section, we examine alternative settings, and summarize the main issues that would arise from proving isolation in these settings.

Strength of the leakage function: We have fixed the definition of cache_obs to prove a strong form of cache-leakage resilient isolation. When considering semantics that do not flush the cache upon context switch, it may be more fruitful to consider weaker observation functions that leak only the size of the portion of the cache that belongs to any given operating system.

Cache policies: Two essential ingredients of the cache policy are the replacement policy and the write policy. On the one hand, the replacement policy for the cache and the TLB is left abstract in our model, so any reasonable algorithm will preserve these properties (as embodied e.g. in the definition of cache_add in Section II). On the other hand, the semantics of the write action fixes a write-through policy. Large parts of the development are independent of the choice of a write policy; in particular, the various notions of equivalence do not commit to any policy. For instance, the definition of cache equivalence already provisions for possible inconsistencies between the cache and the memory, by performing the verifications on the values held by addresses in an appropriate order, and hence it remains correct for a write-back policy. In fact, the impact of the write policy is confined to the proofs of the unwinding lemmas, where
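The role of the flushing policy in Lemma 6, and the probing attack that arises without it, can be replayed in a small executable sketch. This is an illustration with names of our choosing, not the Coq model:

```python
# Why Lemma 6 needs flush-on-switch: with flushing, an inactive OS observes
# an empty cache after every context switch; without it, the two
# read-then-switch traces from the text become distinguishable.

def read(cache, owner, va):
    """The active OS reads va, loading it into the cache."""
    cache = dict(cache)
    cache[va] = owner
    return cache

def switch(cache, flush):
    """Hypervisor context switch, optionally flushing the cache."""
    return {} if flush else dict(cache)

def cache_obs(cache, osi):
    """Virtual addresses owned by osi that are present in the cache."""
    return {va for va, owner in cache.items() if owner == osi}

def trace(va, flush):
    """Victim osV reads va; the hypervisor then switches to attacker osA."""
    c = read({}, "osV", va)
    c = switch(c, flush)
    return cache_obs(c, "osV")

# With flushing, osA's observation is empty in both runs (Lemma 6):
assert trace("va", flush=True) == trace("va'", flush=True) == set()
# Without flushing, the observations differ: a probing attack.
assert trace("va", flush=False) != trace("va'", flush=False)
```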


we use the coherence between the cache and the memory. We nevertheless believe that both lemmas remain valid under suitable provisions on the write policy; for instance, we believe that the unwinding lemmas, and hence OS isolation, would still hold for a write-back policy, provided that all pending writes are performed prior to context switch. We leave it for future work to verify this claim formally.

Models of cache: Current versions of the ARM architecture (V6 and newer) use VIPT caches, in which the cache is indexed by the virtual address, but each entry is tagged with the physical address. This avoids the need for flushing the cache on every context switch, and therefore requires less software management. Adapting our model and the proof of OS isolation to such caches seems feasible, but it does not seem immediate to prove cache-leakage OS isolation. One interesting possibility would be to develop weaker, quantitative notions of isolation that measure the amount of information that is leaked throughout execution, together with weaker observation functions.

Countermeasures: Flushing the cache upon context switch is an effective measure for preventing cache-timing attacks. However, it incurs a significant performance penalty. Bernstein [9] and Tromer et al [54] provide a comprehensive analysis of cache-timing attacks on AES, and describe several attack models and countermeasures. Their suggested countermeasures include: flushing the cache upon context switch, as modelled in this paper; disabling the cache for critical computations; and preloading tables into the cache for critical computations. We have proved the effectiveness of the first countermeasure, and it would be even simpler to carry out a similar proof for the second one. However, it is often the third countermeasure that is favoured for efficiency reasons. In such a scenario, isolation is necessarily conditioned on the victim operating system making appropriate use of preloading. One could prove a counterpart to Proposition 5 for traces that perform preloading correctly.

Wang and Lee [55] independently propose two cache designs that prevent side-channel attacks with minimal performance overhead: the Partition-Locked cache, in which cache partitioning is managed via a locking mechanism, and the Random-Permutation cache, in which the mappings of the cache are randomly permuted. Modeling these designs and formally verifying their security is an exciting avenue for future work.
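The intuition behind the preloading countermeasure can be made concrete with a toy sketch (illustrative only, not the paper's model): if the victim touches its entire lookup table before any secret-dependent access, the cache contents after the critical computation no longer depend on the secret.

```python
# Toy illustration of preloading as a countermeasure: the set of cached
# table lines after a secret-indexed lookup is secret-independent iff
# the whole table was preloaded. Names are ours, for illustration.

TABLE = [f"T[{i}]" for i in range(4)]  # a toy 4-entry lookup table

def compute(secret, preload):
    """Return the set of table lines cached after a secret-indexed lookup."""
    cache = set()
    if preload:
        cache.update(TABLE)           # touch every line first
    cache.add(TABLE[secret % 4])      # the secret-dependent access
    return cache

# Without preloading, the cache reveals which line the secret selected:
assert compute(0, preload=False) != compute(1, preload=False)
# With preloading, every secret yields the same cache contents:
assert compute(0, preload=True) == compute(1, preload=True)
```

This is exactly why, for this countermeasure, isolation is conditioned on the victim performing the preload: a counterpart to Proposition 5 would quantify only over traces that preload correctly.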

VI. TRANSPARENCY

Functional correctness of a hypervisor is conveniently formalized as a transparency property, stating that, under some specific conditions, a guest operating system cannot distinguish whether it runs on the virtualization platform or operates directly on hardware. In this section, we prove a variant of this property suitable for paravirtualization: namely, a guest OS is unable to distinguish between executing together with other OSs and executing alone on the platform. More precisely, the proved properties establish that, given a guest OS osi, the (concurrent) execution of other guest OSs does not interfere with the execution of osi.

Intuitively, our formulation of transparency states that, for any operating system osi, any execution trace is osi-equivalent to its osi-erased trace, in which all state components not related to osi have been removed, and all actions not performed by osi, or by the hypervisor on behalf of osi, have been turned into silent actions. This approach is similar to, but somewhat stronger than, OS isolation, as it requires that after erasing all non-osi data the execution is still valid, i.e. that state validity and action semantics are maintained.

More precisely, the erasure s\osi of a state s is obtained by removing all state components that do not belong to osi, and it satisfies the following properties: i) osi is the only OS in the erased environment; ii) if osi was not active in s, the activity of the erased state is waiting, otherwise it is unchanged; iii) hypervisor mappings in the erased state are only defined for osi and coincide with those in s; iv) all memory pages whose owner is not osi (including hypervisor-owned ones) are freed; v) osi RW pages remain unchanged; vi) non-accessible virtual addresses (i.e. those owned by the hypervisor) are removed from all osi PT pages; vii) if osi is active in s, all non-accessible virtual addresses are removed from the cache and the TLB; viii) if osi is not active in s, the cache and the TLB are flushed.

Note that if osi is not active, no cache (or TLB) entry will belong to osi, so we erase all of the entries, flushing the cache. If osi is active, cache entries will be owned either by osi or by the hypervisor; in the second case, we erase them by removing all entries corresponding to non-accessible virtual addresses. The result of these transformations is that, in all cases, only data from osi remains in the erased cache.

The erasure of a valid state is also valid:

Lemma 7 (Valid state erasure).
∀ (s : State), valid_state(s) → valid_state(s\osi)

Moreover, the erasure of a state is equivalent to the original state. Since osi is always the active OS in s\osi, we must consider a weaker notion, denoted ≡w_osi, that is identical to ≡osi except that it does not require that the two states have the same active OS.

Lemma 8 (Equivalent state erasure).
∀ (s : State), valid_state(s) → s ≡w_osi s\osi

Next, erasure must be extended to actions. We silence all actions that are executed by OSs different from osi, and by the hypervisor for itself or for one of those other OSs. All other actions (those executed directly by osi or by the hypervisor on behalf of osi) remain unchanged. We use the notation a\osi for the erasure of an action. Erasure preserves valid step executions.

Lemma 9 (Valid step execution erasure).
∀ (s s′ : State) (a : Action), s −a→ s′ → s\osi −a\osi→ s′\osi

The proof of the lemma hinges upon the "loose" semantics of the silent action. Informally, the semantics allows the cache and the TLB to be different in the pre and post states, only requiring that both are still valid, according to the explanation provided in Section II-E. The formal axiomatic semantics is presented in Figure 5; for the sake of readability, we use the nondeterministic field update r.[f := _] in the rule; the Coq formalization introduces existential quantification instead.

    Pre s silent := True
    Post s silent s′ := s′ = s.[cache := _, tlb := _] ∧ valid_cache(s′) ∧ valid_tlb(s′)

Fig. 5. Formal specification of the action silent

As the erasure of the cache removes every hypervisor entry, a full cache in s will not necessarily be full in s\osi. Moreover, a silenced action that adds an entry to the cache in the original execution (a read_hyper action, for instance) will evict another entry to make space. In the erased execution, the initial state will not be full, but after the effects of silent, the resulting state may have fewer entries in the cache. This is not an issue, however, because the valid state invariant and Lemma 7 imply that the caches in both states will be consistent: all memory accesses by osi will obtain the same data, either from the cache or from main memory.

Lemmas 7 and 9 entail that the erasure of a trace is another trace. The desired transparency property can then be expressed as: given an osi and any valid trace of the system, the erasure of the trace is another trace, osi-equivalent to the original one. Formally, weak equivalence ≈w_osi on traces is defined as a straightforward generalization of weak equivalence ≡w_osi on states.

Proposition 10 (Transparency).
∀ (osi : os_ident) (t : Trace), t ≈w_osi t\osi

VII. RELATED WORK

A. OS verification

OS verification has a long and established tradition; more recently, machine-checked OS verification has gained considerable momentum, under the flagship of the L4.verified [26] and Hyper-V [11], [36] projects. We refer the reader to [27], [49] for an account of prior work and a discussion of the role of machine-checked verification. In this section, we provide a brief account of some closely related work; further references are discussed in [7]. For completeness, we also review some related foundational work on side-channels.

Models: Our work (like most works on OS verification) considers an idealized memory model that makes simplifying assumptions, and abstracts away from some components. Idealized models are convenient, because they abstract away irrelevant details; on the other hand, it is important to minimize the gap between idealized models and implementations. A first step is to build realistic models of all OS components; it is not possible to survey all efforts here, so we limit ourselves to representative works. Our model makes a simplified treatment of page tables. More elaborate models, which account for multi-level page tables, are considered in the L4.verified project [30] and in the ToSS project [16], [15]; interestingly, Franklin et al [16], [15] provide small model theorems that allow reducing the verification of a certain class of properties from models that consider nested page tables to models with page tables of depth 1. Shadow page table algorithms are also considered in [2], where the authors give a formal model of the TLB. Other (not yet implemented) proposals for modelling the TLB and the cache appear in [52], [30]. Tews et al [52] develop a stack of memory models, prove formally that the models satisfy a form of integrity property, called the plain memory property, and discuss an extension to the TLB. Likewise, Kolanski [30] outlines a possible approach to account for the TLB and cache in the seL4 model. Other OS components and mechanisms that have been studied in the literature include device drivers, schedulers, and interrupts; see e.g. [3], [14], [19].

OS safety and security proofs: A significant number of OS verification projects have their primary focus on security. Our work is most closely related to von Oheimb et al's proof of nonleakage for the SLE88 smart card [41], [42], and to isolation proofs for separation kernels [23], [20], [38]. Other related work includes Sewell et al's [48] proof of integrity for seL4. Recently, Baumann et al [8] established OS isolation for an implementation of a separation kernel; however, the isolation property is derived by a pen-and-paper proof from the formal verification in VCC. An alternative to proving the safety and security of legacy operating systems is to design provably safe and secure operating systems. Singularity [24], HiStar [57] and Flume [35] are prime examples of operating systems that were designed with provable safety and security in mind. In a recent article, Krohn and Tromer [34] provide a CSP model of the Flume system [35] and show noninterference of the idealized model; they also discuss covert channels, but do not establish noninterference in the presence of leakage. As for legacy systems, machine-checked verification provides stronger confidence in the guarantees achieved by a particular architecture. Tiwari et al [53] and Yang and Hawblitzel [56] use static analysis and type systems to prove security and safety properties of their implementations.

B. Information flow

Nonleakage: Noninterference originates from the seminal work of Goguen and Meseguer [17], [18], and has been generalized in numerous dimensions. Mantel [37] provides a thorough review of a large body of prior work, and elaborates a general framework to formally specify and verify information flow policies. Building upon [37], von Oheimb [41] revisits and generalizes the classical notion of noninterference for state-based systems presented by Rushby [45]. In particular, he introduces a notion of nonleakage, which focuses on information flow during system runs rather than on the observability of actions, and guarantees that secret information present in the initial state of the system does not leak out of the domain(s) it is intended to be confined to. Nonleakage combined with noninterference gives rise to the notion of noninfluence. One minor shortcoming of noninfluence is that it only considers traces that are generated by the same sequence of actions, whereas it is often desirable to prove a refined and stronger property which also considers traces that coincide only in the actions performed by the observing domain. Our isolation theorems in Sections III and IV are stated as nonleakage theorems, but avoid the aforementioned shortcoming.

Language-based security: One prime goal of language-based security is to prove that programs adhere to information flow policies, including noninterference or more expressive policies. We refer to [46], [47] for an account of the field.

Side channels: Agat [1] proposes a static program transformation that eliminates timing leaks by inserting dummy instructions that balance the execution time of branching statements; alternative methods that achieve a similar effect include [40]. Köpf et al [31], [32] develop an abstract model and algorithms to quantify the information leaked by side-channels, and to analyze the effects of specific countermeasures such as randomization and timing. One strength of their approach is the use of information-theoretic tools, most notably entropy measures, to determine precise bounds for leakage. Recently, Köpf et al [33] proposed an abstract-interpretation-based analysis for cache-based attacks. Their analysis computes an upper bound on the amount of information that can be retrieved by synchronous access-driven cache attacks. Establishing connections between their analysis and our model is left for future work.

VIII. CONCLUSION

We have developed an idealized model of virtualization that accommodates the cache and the TLB, and captures at an abstract level cache-based leakage in the spirit of leakage-resilient cryptography. We have observed that, in the absence of any specific countermeasure, trivial cache-based side-channel attacks are possible, and verified formally that one can enforce strong isolation properties between operating systems by flushing the cache and TLB upon context switch. Moreover, we have proved that in our idealized model the virtualization platform is transparent for guest operating systems, in the sense that they cannot tell whether they execute alone or together with other systems on the platform. The formal development is about 34 kLOC of Coq, adding around 14 kLOC to the development reported in [7].

One main direction for future work is to extend our isolation and transparency proofs to other settings. In particular, we would like to generalize these results beyond the write-through policy considered in this paper, and to other countermeasures such as preloading. It would also be interesting to consider other cache and TLB mechanisms, in which entries are tagged with the owning operating system.

Another direction for future work is to enrich our model with devices and interrupts, and to give sufficient conditions for isolation properties to remain valid in this extended setting. Finally, we have started to prove isolation properties for a toy implementation of the hypervisor. Since isolation properties are relational properties rather than safety properties, we rely on self-composition [6] and product programs [5] for reducing the verification of isolation properties to standard verification tasks that can be delegated to automatic tools based on verification condition generators and SMT solvers.

ACKNOWLEDGMENTS

The authors want to thank the CSF reviewers for helpful feedback on the paper. The work of Gilles Barthe has been partially funded by European Project FP7 256980 NESSoS, Spanish project TIN2009-14599 DESAFIOS 10, and Madrid Regional project S2009TIC-1465 PROMETIDOS; the work of Gustavo Betarte, Juan Diego Campo and Carlos Luna by Uruguayan project ANII-Clemente Estable PR-FCE-2009-1-2568 VirtualCert.

REFERENCES

[1] Johan Agat. Transforming out Timing Leaks. In Proc. 27th ACM Symposium on Principles of Programming Languages (POPL 2000), pages 40–53. ACM, 2000.
[2] E. Alkassar, E. Cohen, M. Hillebrand, M. Kovalev, and W. Paul. Verifying shadow page table algorithms. In R. Bloem and N. Sharygina, editors, Formal Methods in Computer-Aided Design, 10th International Conference (FMCAD'10), Switzerland, 2010. IEEE CS.
[3] Thomas Ball, Ella Bounimova, Byron Cook, Vladimir Levin, Jakob Lichtenberg, Con McGarvey, Bohus Ondrusek, Sriram K. Rajamani, and Abdullah Ustuner. Thorough static analysis of device drivers. In Yolande Berbers and Willy Zwaenepoel, editors, EuroSys, pages 73–85. ACM, 2006.
[4] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield. Xen and the art of virtualization. In SOSP '03: Proceedings of the nineteenth ACM symposium on Operating systems principles, pages 164–177, New York, NY, USA, 2003. ACM Press.
[5] G. Barthe, J. M. Crespo, and C. Kunz. Relational verification using product programs. In Michael Butler and Wolfram Schulte, editors, Formal Methods, volume 6664 of LNCS. Springer-Verlag, 2011.
[6] G. Barthe, P. D'Argenio, and T. Rezk. Secure Information Flow by Self-Composition. In R. Foccardi, editor, Computer Security Foundations, pages 100–114. IEEE Press, 2004.
[7] Gilles Barthe, Gustavo Betarte, Juan Diego Campo, and Carlos Luna. Formally verifying isolation and availability in an idealized model of virtualization. In Michael Butler and Wolfram Schulte, editors, Formal Methods, volume 6664 of LNCS. Springer-Verlag, 2011.
[8] C. Baumann, H. Blasum, T. Bormer, and S. Tverdyshev. Proving memory separation in a microkernel by code level verification. In Wilfried Steiner and Roman Obermaisser, editors, 1st International Workshop on Architectures and Applications for Mixed-Criticality Systems (AMICS 2011), Newport Beach, CA, USA, 2011. IEEE Computer Society.
[9] Daniel J. Bernstein. Cache-timing attacks on AES, 2005. Available from the author's webpage.
[10] M. R. Clarkson and F. B. Schneider. Hyperproperties. Journal of Computer Security, 18(6):1157–1210, 2010.
[11] E. Cohen. Validating the Microsoft hypervisor. In J. Misra, T. Nipkow, and E. Sekerinski, editors, FM '06, volume 4085 of LNCS, pages 81–81. Springer, 2006.
[12] S. Dziembowski and U. Maurer. Optimal randomizer efficiency in the bounded-storage model. Journal of Cryptology, 17(1):5–26, 2004.
[13] Stefan Dziembowski and Krzysztof Pietrzak. Leakage-resilient cryptography. In FOCS, pages 293–302. IEEE Computer Society, 2008.

[35] Maxwell N. Krohn, Alexander Yip, Micah Z. Brodsky, Natan Cliffer, M. Frans Kaashoek, Eddie Kohler, and Robert Morris. Information flow control for standard os abstractions. In Thomas C. Bressoud and M. Frans Kaashoek, editors, SOSP, pages 321–334. ACM, 2007. [36] D. Leinenbach and T. Santen. Verifying the microsoft hyper-v hypervisor with vcc. In A. Cavalcanti and D. Dams, editors, FM 2009, volume 5850 of LNCS, pages 806–809. Springer, 2009. [37] Heiko Mantel. A Uniform Framework for the Formal Specification and Verification of Information Flow Security. PhD thesis, Univ. des Saarlandes, 2003. [38] W. Martin, P. White, F.S. Taylor, and A. Goldberg. Formal construction of the mathematically analyzed separation kernel. In The Fifteenth IEEE International Conference on Automated Software Engineering, 2000. [39] Silvio Micali and Leonid Reyzin. Physically observable cryptography (extended abstract). In Moni Naor, editor, TCC, volume 2951 of Lecture Notes in Computer Science, pages 278–296. Springer, 2004. [40] David Molnar, Matt Piotrowski, David Schultz, and David Wagner. The program counter security model: Automatic detection and removal of control-flow side channel attacks. In Dongho Won and Seungjoo Kim, editors, ICISC, volume 3935 of Lecture Notes in Computer Science, pages 156–168. Springer, 2005. [41] David von Oheimb. Information flow control revisited: Noninfluence = Noninterference + Nonleakage. In P. Samarati, P. Ryan, D. Gollmann, and R. Molva, editors, Computer Security – ESORICS 2004, volume 3193 of LNCS, pages 225–243. Springer, 2004. [42] David von Oheimb, Volkmar Lotz, and Georg Walter. Analyzing SLE 88 memory management security using Interacting State Machines. International Journal of Information Security, 4(3):155–171, 2005. [43] The VirtualCert project. Supporting Coq formalization. See www.fing.edu.uy/inco/grupos/gsi/proyectos/virtualcert.php. [44] Thomas Ristenpart, Eran Tromer, Hovav Shacham, and Stefan Savage. 
Hey, you, get off of my cloud! Exploring information leakage in thirdparty compute clouds. In Somesh Jha and Angelos Keromytis, editors, Proceedings of CCS 2009, pages 199–212. ACM Press, November 2009. [45] J. M. Rushby. Noninterference, Transitivity, and Channel-Control Security Policies. Technical Report CSL-92-02, SRI International, 1992. [46] A. Sabelfeld and A. Myers. Language-based information-flow security. Selected Areas in Communication, 21:5–19, 2003. [47] A. Sabelfeld and D. Sands. Declassification: Dimensions and principles. Journal of Computer Security (JCS), 2007. [48] Thomas Sewell, Simon Winwood, Peter Gammie, Toby Murray, June Andronick, and Gerwin Klein. seL4 enforces integrity. In 2nd Conference on Interactive Theorem Proving, Nijmegen, The Netherlands, Aug 2011. [49] Zhong Shao. Certified software. Commun. ACM, 53(12):56–66, 2010. [50] The Coq Development Team. The Coq Proof Assistant Reference Manual – Version V8.3, 2010. [51] T. Terauchi and A. Aiken. Secure information flow as a safety problem. In C. Hankin and I. Siveroni, editors, Proceedings of SAS’05, volume 3672 of Lecture Notes in Computer Science, pages 352–367. SpringerVerlag, 2005. [52] Hendrik Tews, Marcus V¨olp, and Tjark Weber. Formal memory models for the verification of low-level operating-system code. J. Autom. Reasoning, 42(2-4):189–227, 2009. [53] Mohit Tiwari, Jason Oberg, Xun Li, Jonathan Valamehr, Timothy E. Levin, Ben Hardekopf, Ryan Kastner, Frederic T. Chong, and Timothy Sherwood. Crafting a usable microkernel, processor, and i/o system with strict and provable information flow security. In Ravi Iyer, Qing Yang, and Antonio Gonz´alez, editors, ISCA, pages 189–200. ACM, 2011. [54] Eran Tromer, Dag Arne Osvik, and Adi Shamir. Efficient cache attacks on aes, and countermeasures. J. Cryptology, 23(1):37–71, 2010. [55] Zhenghong Wang and Ruby B. Lee. New cache designs for thwarting software cache-based side channel attacks. 
In 34th International Symposium on Computer Architecture (ISCA 2007), pages 494–505. ACM, 2007. [56] Jean Yang and Chris Hawblitzel. Safe to the last instruction: automated verification of a type-safe operating system. In Proceedings of PLDI’10, pages 99–110. ACM, 2010. [57] Nickolai Zeldovich, Silas Boyd-Wickizer, Eddie Kohler, and David Mazi`eres. Making information flow explicit in histar. In OSDI, pages 263–278. USENIX Association, 2006.

[14] Xinyu Feng, Zhong Shao, Yu Guo, and Yuan Dong. Certifying low-level programs with hardware interrupts and preemptive threads. J. Autom. Reasoning, 42(2-4):301–347, 2009.
[15] Jason Franklin, Sagar Chaki, Anupam Datta, Jonathan McCune, and Amit Vasudevan. Parametric verification of address space separation. In P. Degano and J. Guttman, editors, Proceedings of POST'12, volume 7215 of Lecture Notes in Computer Science, 2012.
[16] Jason Franklin, Sagar Chaki, Anupam Datta, and Arvind Seshadri. Scalable parametric verification of secure systems: How to verify reference monitors without worrying about data structure size. In IEEE Symposium on Security and Privacy, pages 365–379. IEEE Computer Society, 2010.
[17] Joseph A. Goguen and José Meseguer. Security policies and security models. In IEEE Symposium on Security and Privacy, pages 11–20, 1982.
[18] Joseph A. Goguen and José Meseguer. Unwinding and inference control. In IEEE Symposium on Security and Privacy, pages 75–87, 1984.
[19] Alexey Gotsman and Hongseok Yang. Modular verification of preemptive OS kernels. In Manuel M. T. Chakravarty, Zhenjiang Hu, and Olivier Danvy, editors, ICFP, pages 404–417. ACM, 2011.
[20] David Greve, Matthew Wilding, and W. Mark Vanfleet. A separation kernel formal security policy. In Proc. Fourth International Workshop on the ACL2 Theorem Prover and Its Applications, 2003.
[21] David Gullasch, Endre Bangerter, and Stephan Krenn. Cache games – bringing access-based cache attacks on AES to practice. In Proc. 2011 IEEE Symposium on Security and Privacy (Oakland 2011), pages 490–505. IEEE Computer Society, 2011.
[22] J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, and Edward W. Felten. Lest we remember: Cold boot attacks on encryption keys. In Paul C. van Oorschot, editor, USENIX Security Symposium, pages 45–60. USENIX Association, 2008.
[23] Constance L. Heitmeyer, Myla Archer, Elizabeth I. Leonard, and John McLean. Formal specification and verification of data separation in a separation kernel for an embedded system. In Proceedings of the 13th ACM Conference on Computer and Communications Security, CCS '06, pages 346–355, NY, USA, 2006. ACM.
[24] Galen C. Hunt and James R. Larus. Singularity: rethinking the software stack. SIGOPS Oper. Syst. Rev., 41(2):37–49, April 2007.
[25] J-Y Hwang, S-B Suh, S-K Heo, C-J Park, J-M Ryu, S-Y Park, and C-R Kim. Xen on ARM: System virtualization using Xen hypervisor for ARM-based secure mobile phones. In 5th IEEE Consumer Communications and Networking Conference, 2008.
[26] G. Klein, J. Andronick, K. Elphinstone, G. Heiser, D. Cock, P. Derrin, D. Elkaduwe, K. Engelhardt, R. Kolanski, M. Norrish, T. Sewell, H. Tuch, and S. Winwood. seL4: Formal verification of an OS kernel. Communications of the ACM (CACM), 53(6):107–115, June 2010.
[27] Gerwin Klein. Operating system verification – an overview. Sādhanā, 34(1):27–69, February 2009.
[28] Paul Kocher. Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems. In Proc. Advances in Cryptology (CRYPTO 1996), volume 1109 of Lecture Notes in Computer Science, pages 104–113. Springer, 1996.
[29] Paul Kocher, Joshua Jaffe, and Benjamin Jun. Differential power analysis. In Proc. Advances in Cryptology (CRYPTO 1999), volume 1666 of Lecture Notes in Computer Science, pages 388–397. Springer, 1999.
[30] Rafal Kolanski. Verification of Programs in Virtual Memory Using Separation Logic. PhD thesis, University of New South Wales, 2011.
[31] Boris Köpf and David Basin. An information-theoretic model for adaptive side-channel attacks. In Proc. 14th ACM Conference on Computer and Communications Security (CCS 2007), pages 286–296. ACM, 2007.
[32] Boris Köpf and Markus Dürmuth. A provably secure and efficient countermeasure against timing attacks. In Proc. 22nd IEEE Computer Security Foundations Symposium (CSF 2009), pages 324–335. IEEE Computer Society, 2009.
[33] Boris Köpf, Laurent Mauborgne, and Martín Ochoa. Automatic quantification of cache side-channels. In Proceedings of CAV'12, Lecture Notes in Computer Science. Springer-Verlag, 2012. Also appears as Cryptology ePrint Archive, Report 2012/034.
[34] Maxwell N. Krohn and Eran Tromer. Noninterference for a practical DIFC-based operating system. In IEEE Symposium on Security and Privacy, pages 61–76. IEEE Computer Society, 2009.
