The Inherently Distributed Adaptable Off kernel
Francisco J. Ballesteros and Luis L. Fernandez
[email protected]
Systems and Communications Group (GSyC), Computer Science Department, Carlos III University of Madrid
Resumen Microkernel-based systems are the most popular approach to building distributed operating systems. But to build systems that are both distributed and adaptable these approaches are not adequate, since they create an artificial barrier to system distribution, harming its transparency and adaptability. This document describes the Off operating system, where the microkernel and the abstractions it supplies are distributed and adaptable. This report shows how it is possible to build a system interface close to the hardware without restricting it to a single node. Contrary to what today's microkernel-based "distributed" systems do, Off exports and multiplexes the hardware present in the network, and not that of a single node. In this way, we eliminate the artificial view of nodes as isolated entities, allowing the system to inherently maintain its adaptability and other properties needed for its distribution.
Abstract To build a distributed operating system the microkernel approach is the most popular. To build an adaptable operating system a minimal microkernel is preferred. But for an adaptable and flexible distributed operating system the previous approaches are not enough, because they create an artificial barrier to OS distribution and harm system transparency and adaptability. This paper describes the implementation of the Off system, where the microkernel itself and its abstractions are both distributed and adaptable. We show how the system interface is close to the hardware but not restricted to a local node. The whole network is the exported and multiplexed hardware, in contrast to what is done by today's microkernels in "distributed" OSes. In this way the artificial view of nodes as isolated entities can be eliminated, and the system can maintain both adaptability and other good properties for system distribution as inherent features.

Keywords: Adaptability, flexibility, distributed operating system, kernel.
UC3M-TR-CS-1997-01
1 Introduction
Today distributed operating systems are based on microkernels which essentially multiplex just local resources. Obviously system services can be later distributed when using a (centralized) microkernel. Indeed, that can be done even when using a monolithic system [Pike et al., 1990]. But this will not solve the actual problem: the system (i.e. the kernel) is not being actually distributed and is not multiplexing transparently both local and remote resources.

We argued elsewhere [Ballesteros and Fernandez, 1996c] that the best way to distribute the system while retaining adaptability and extensibility is to distribute the microkernel itself and to make its interface close to the distributed hardware. The baseline was that when the system is to be distributed and resources are to be exported transparently no matter their location, the system should not be built on the opposite assumption. In this paper the Off system, which was built following that intuition, is described.

The guidelines for the Off design try to obtain distributable hardware abstractions, while in current operating systems the trend is either to provide high-level portable and general purpose abstractions [Bershad et al., 1995, Mullender et al., 1990] or to multiplex, in a non-portable way, raw hardware [Engler et al., 1994]. We bet for an intermediate approach [Ballesteros and Fernandez, 1996c]:

1. Start with hardware-level abstractions (page tables, TLBs (translation lookaside buffers), processor registers, interrupts, traps, etc.).

2. Distribute them just to the minimum level where they could be used from remote nodes; i.e. they could be named, accessed, invoked, etc.

When every hardware resource has been made network-wide available we stop the abstraction process and retain the resulting distributed hardware abstractions as basic system services to be used by applications as building blocks. Network hardware resources to be considered include:

Processors. Both the processor time and the processor status.

Events. Including interrupts, traps, exceptions and I/O ports.

Memory. Not only physical memory; we also deal with virtual memory hardware and secondary (i.e. disk) memory.

Although other resources remain (e.g. device machinery), we have not included them as basic resources. They can always be accessed by means of the previous ones (e.g. a disk controller is accessed by means of memory transfers, interrupts and I/O operations).

To make a long story short, we end up with three basic abstractions which Off implements:

Shuttles. They are extensible hardware contexts which can be loaded at any available processor to make them run. Initially, they are made of a program counter and a stack pointer, but other properties can be dynamically added. The processor time is quantized and any Shuttle can request any available quantum.

Portals. They support a distributed interrupt service. A portal is an identifier with an associated handler. It can be invoked to make the handler perform some action. Interrupts, traps, exceptions and I/O ports are modeled by portals.

Distributed software TLBs (dtlbs for short). They can also be considered a distributed software Memory Management Unit (MMU). They perform as a hardware TLB but for the ability to install translations to remote memory and, of course, for the fact that they are implemented by software and not hardware. Physical memory
is divided into page frames, which are allocated to applications on demand.

The key point in all of them is that they can be used from any node in the network. Thus a portal can be invoked anywhere and its handler (the associated Shuttle) can reside at any node. Page frame numbers can be referred from any dtlb too. In this way every frame of physical memory is available to every node.

In few words, these three abstractions model the hardware in the network in such a way that they are both extensible and independent of their network location. This eases the implementation of distributed adaptable OSes.

Before describing in more detail the three Off abstractions, we will devote section 2 to a common issue for all of them: naming and protection. In sections 3.2 to 3.4 we will take a closer look at Shuttles, Portals and dtlbs in turn. Section 4 will discuss related work. To conclude, in section 5 some conclusions will be made and future work will be described.

More information about the Off kernel can be found in [Ballesteros and Fernandez, 1996f, Ballesteros and Fernandez, 1996c, Ballesteros and Fernandez, 1996e, Ballesteros, 1996, Ballesteros and Fernandez, 1996d, Fernandez and Ballesteros, 1996, Ballesteros and Fernandez, 1996g].

2 Naming and protection

As it happens with other OS abstractions, there are two trends to provide names in operating systems: use high-level transparent, opaque names (e.g. [Mullender et al., 1990, Accetta et al., 1986]) or use just hardware names (e.g. [Engler et al., 1994]). The first ones are better to distribute the system because they can be used in a network-wide fashion. However, they degrade system adaptability because applications can not directly obtain the information contained by the name itself (resource position, etc.). The last ones are better to ensure system adaptability [Engler, 1995], but they are not good for system distribution and can not be used outside their nodes in the network.

Off resources are named using identifiers which provide both benefits at a low cost while they permit efficient resource location whenever it is needed.

Identifiers are made of a public part and a private part (see figure 1). For each kind of resource using identifiers, the first part is unique in the whole system. The second part only has sense for the owner of the named object, which uses it to locate the object implementation quickly upon request.

Figure 1: Identifiers in Off (public part: node and sequence fields; private part: slot field)

Looking into the public part of the identifier we can see two fields: the node field, which identifies the node where the named object was created, and the sequence field, which makes the name unique.

In this way, identifiers aid system distribution and named object implementors in the following ways:

The node field of the public part may serve as a hint for the node where the object is located. Location algorithms may use it in this way.

The private part of the identifier will serve the named object implementor as an index. On object creation a reference to the object may be placed on an array and the index stored in the slot field at the private part. If the object is ever migrated a new index can be placed in the identifier, and the requesting application notified of the change.
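Concretely, an identifier with that shape might be sketched in C as follows. This is only an illustration: the field names follow figure 1, but the widths and types are our assumptions, not the actual Off layout.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical layout of an Off identifier: a public part
 * (creation node + sequence number, unique system-wide) and a
 * private part (slot index, meaningful only to the owner). */
typedef struct {
    uint32_t node;     /* public: node where the object was created */
    uint32_t sequence; /* public: makes the name unique             */
    uint32_t slot;     /* private: index into the owner's array     */
} off_id_t;

/* Equality compares only the public part, because the private
 * slot may change when the object migrates. */
bool off_id_equal(off_id_t a, off_id_t b)
{
    return a.node == b.node && a.sequence == b.sequence;
}
```

Note how the slot field does not take part in equality: two names for the same object may carry different private parts after a migration.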
As the private part may change, identifier equality tests must compare only the public part.

Portals, Shuttles and dtlbs are identified in this way, so this section comes before their description. Page frames are named with a portal identifier and the frame address. Of course, Off does not impose this model on user applications, though we think it is a good model to be used at higher levels.

Protection is accomplished using capabilities [Tanenbaum et al., 1986]. Although each application may implement its own protection scheme, Off generates a long random number for each resource provided and tells it to the application at resource allocation time. Further requests must present such capability to the system.

Actually, there is more than one capability per resource. For instance, page frames have read, write, exec and release capabilities. Processor slots have only write capabilities, and Portals have only exec (invoke) and write (install-handler) capabilities.

We believe that this protection model is both simple and powerful and, in any case, any other model can be incorporated into the system by means of system adaptation (e.g. by changing the private part of the identifiers to express protection).

2.1 Discussion

Naming and protection are not different from those of other systems [Mullender et al., 1990, Engler, 1995]. The difference is that we allow part of the identifier to change. This can be used by named object owners to implement a distributed data structure so that objects can be located quickly by their implementors. Other systems use hash tables for the same purpose [Valencia, Accetta et al., 1986].

In Aegis [Engler, 1995], application specific capabilities are used for protection. It uses guards to permit application specific checks. In Off, when applications want to use their own checks for system provided objects they should retain their rights and act as proxies. In the future we could incorporate the Aegis protection scheme, but we feel that our current scheme suffices to maintain system integrity and does not introduce complexity.

3 The Off kernel

There are three basic system servers in Off: the shuttle server, the portal server and the distributed memory manager. They implement the provided abstractions: shuttles, portals and dtlbs respectively. Before showing each one in turn we will discuss some aspects shared by all of them.

Figure 2: System model (non-privileged applications invoke, through portals, the privileged Portal server, Shuttle server and DMM, which stand between the protection domains and the hardware)

3.1 Common features

As common features we can find the system model used to implement them and the way they are handled by the servers. They will be seen below.
3.1.1 System model

Each server is implemented as a privileged system server and runs in processor kernel mode (see figure 2). Although in the current implementation they are co-located in a single address space, they could be loaded separately on different domains. Any other system service can run in kernel mode or in user mode at will.

Portals are used to access system services. Each server will create a portal to accept user requests. So, there is no difference between user-level and system-level service invocation. A benefit of this is the ability to interpose or redefine system requests on a per-application basis.

So, to redefine a particular service a user can change the portal identifier to be used. As such identifiers will usually be stored in a user-level library, this can be done easily. The net effect is that future invocations of such a service will reach a different service provider.
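The per-application redefinition described above can be sketched as a tiny user-level binding table. The service names and functions below are illustrative inventions, not the real Off library API; the portal send is replaced by a recording stub.

```c
#include <stdint.h>

/* Hypothetical sketch: a user-level library table mapping well-known
 * services to the portal that currently implements them. */
enum { SVC_MEMORY, SVC_SHUTTLE, SVC_COUNT };

static uint64_t svc_portal[SVC_COUNT];  /* portal identifier per service  */
static uint64_t last_portal_invoked;    /* stand-in for a real portal send */

/* Redefining a service is just overwriting its table entry;
 * no kernel involvement is needed to interpose on it. */
void svc_bind(int svc, uint64_t portal_id) { svc_portal[svc] = portal_id; }

/* Every invocation goes through whatever portal is currently bound. */
void svc_invoke(int svc) { last_portal_invoked = svc_portal[svc]; }
```

After `svc_bind(SVC_MEMORY, new_portal)`, later `svc_invoke(SVC_MEMORY)` calls reach the new provider, which is the net effect the text describes.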
3.1.2 Using system objects

Another thing in common for the three system servers is that they use slab allocators to store the system objects implementing the abstractions. An Off Slab is just a dynamically growable array. Initially, when objects are neither moved nor migrated from one node to another, the private part of their identifiers will keep their indexes in the Slab array. When a request is made to the server, it will extract the private part of the identifier involved and use it as an index in the Slab. Most of the times the object will be there and will thus be located in a fast way.

When a reference is made to an object not present in the Slab, an upcall will be raised so that the application can choose what location and transport protocols should be used to reach the remote object.

Although it is up to the application level, we provide handle allocators as a default location protocol implementation. A handle allocator is
a combination of a hash table and the information provided by the system Slab. Obviously, this information is kept by the privileged server, though it is readable by any application.

The hash table of the handle allocator is used to keep track of migrated system objects. When an object migrates to another node, its old slot in the source node Slab will be marked as free and reused in the future. A new slot will also be allocated in its new node. When a request for it arrives at its old node the next things happen:
1. The slot will not match the public part of the object identifier and the missing upcall will be raised.

2. Then, the hash table of the Handle will be consulted for the new object address.

3. If an entry is found in the hash table, the request will be forwarded to the associated address. In the other case we will assume that the object lives in its creation node (obtained from the identifier public part) and send the request there.
In any case, when a forwarded request arrives at a target node it is processed there if the object is there. When a forwarded request is received and the object is not found in the Handle allocator (i.e. it has no entry in the hash table and is not found in the system Slab), a broadcast is made to locate it and the new location is sent to the sender.
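The lookup path just described (fast Slab hit, then migration hash table, then fall back to the creation node) might be sketched as follows. All types and sizes here are illustrative assumptions, not the real Off data structures.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t node, sequence, slot; } off_id_t;
typedef struct { uint32_t node, sequence; void *obj; } slab_entry_t;

enum where { FOUND_LOCAL, FORWARD_BY_HASH, FORWARD_TO_HOME };

/* Toy migration table: maps a sequence number to the node the
 * object moved to (0 means "no entry"). */
#define NHASH 64
static uint32_t migrated_to[NHASH];

enum where locate(const slab_entry_t *slab, size_t nslots, off_id_t id,
                  uint32_t *target_node)
{
    /* Fast path: the slot still holds an object with a matching
     * public part, so the object is local. */
    if (id.slot < nslots &&
        slab[id.slot].node == id.node &&
        slab[id.slot].sequence == id.sequence)
        return FOUND_LOCAL;

    /* Miss: consult the migration hash table. */
    uint32_t hint = migrated_to[id.sequence % NHASH];
    if (hint != 0) {
        *target_node = hint;          /* forward to the recorded node */
        return FORWARD_BY_HASH;
    }

    /* No entry: assume the object lives in its creation node,
     * which the public part of the identifier records. */
    *target_node = id.node;
    return FORWARD_TO_HOME;
}
```

The broadcast fallback for objects missing everywhere is omitted; it would only trigger on the receiving node of a forwarded request.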
3.2 Shuttles

Process services are very simple in the Off distributed adaptable microkernel if compared with any other OS. The only abstraction provided is the Shuttle. A shuttle is essentially an extensible hardware context. Initially, it consists only of a couple of general purpose machine registers, the program counter and a stack pointer, though it can be safely extended later on to include other
pieces of context such as address spaces, privilege levels, etc.

Of course, a Shuttle needs some extra resources apart from the machine registers so it can execute (an address space, a privilege level, etc.). Such resources are not contained in another "container" abstraction, avoiding the need to be inside something like a task. This said, the following question arises: how does the Shuttle get those resources (i.e., its full context) needed to execute? The answer is straightforward: it simply associates itself with them.

The main difference between Off Shuttles and traditional task-like and thread-like abstractions can be appreciated in figures 3 and 4. In Off, Shuttles only know what resources they need to execute. In Mach and other systems, threads know in which task they live and the task in turn knows what the used resources are.
Figure 3: Off Shuttle abstraction (a Shuttle references the resource data structures of the system resources it needs)

Figure 4: Traditional Tasks and Threads (threads belong to a task, and the task owns the resource data structures)
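The contrast drawn in figures 3 and 4 can be sketched with two toy type layouts. These structs are our illustration, not published Off (or Mach) declarations.

```c
#include <stddef.h>

typedef struct { int id; } resource_t;  /* e.g. an address space */

/* Figure 3: an Off Shuttle is a bare, extensible context that
 * merely *references* the resources it needs; nothing contains it. */
typedef struct {
    void *pc, *sp;          /* initial hardware context            */
    resource_t *needs[8];   /* associated resources, added at will */
    int nneeds;
} shuttle_t;

/* Associating a resource extends the Shuttle's context. */
void shuttle_associate(shuttle_t *s, resource_t *r)
{
    if (s->nneeds < 8)
        s->needs[s->nneeds++] = r;
}

/* Figure 4: a traditional task instead *owns* both the resources
 * and the threads; a thread reaches resources only via its task. */
typedef struct task task_t;
typedef struct { void *pc, *sp; task_t *owner; } thread_t;
struct task {
    resource_t *owned[8];
    thread_t   *threads[8];
};
```

The inversion is the whole point: in the Shuttle model the context grows outward from the running entity, so there is no mandatory container to distribute along with it.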
3.2.1 Discussion

In today's distributed operating systems the basic process abstractions are tasks or processes, and threads [GoodHeart and Cox, 1994]. Tasks (we will adopt the term "task" to refer both to tasks and processes) contain one or more threads along with system resources.

One problem with tasks and threads is that they are not adaptable. They must be reimplemented, or the operating system changed, when any application needs something different. As it is said in [Engler et al., 1994], this lack of adaptability and the complexity of the OS abstractions lead to inefficiency and unreliability. Some adaptable systems (like Aegis [Engler, 1995]) just export the processor hardware; the problems here are that there is no way (without explicit application intervention) to use remote processors, and that the process abstraction is not portable.

To execute, Shuttles can be put to run on any available processor quantum. In case the resources used by a Shuttle are not actually present in the local node, kernel transparency will solve the problem: remote address spaces, remote memory and remote portals can be used.

3.3 Portals

The basic interprocess communication mechanism provided by Off is the Portal. It can be viewed as a "distributed interrupt". Portals share the basic utility of interrupts, i.e. they
can be addressed transparently to invoke some action on the interrupt receiver.

A portal has a system wide unique identifier (see [Ballesteros and Fernandez, 1996e] or section 2) so that it can be invoked from any node attached to the network. Although user provided transport and location protocols are used to extend the portal mechanism transparently over the network, default implementations for both are provided so that centralized applications can also use the portal mechanism.

With respect to their semantics, portals are pretty simple. They just set the invocation receiver's program counter (known by the portal) to a prespecified value (stored in the portal) and transfer a specified number of machine words (may be none) from the caller to the callee stack. The sender can also specify its desire to get atomically blocked when the message is sent. Blocking and non-blocking message semantics can then be achieved. In case of a blocking invocation, the receiver is free to unblock the sender whenever it finds it convenient: on invocation receipt, on invocation processing completion, etc.

They do not implement buffering, nor are they attached to any particular node or process. What is more, they do not specify whether synchronous messages, asynchronous messages or RPCs are to be used.

As portals are not attached to a node nor to a process, they are free to move through the network. This permits other semantically heavier inter-process communication (IPC) mechanisms to be built on top of them. Any application can adapt the portal mechanism and still benefit from its distribution properties.

In addition to this, messages are neither prespecified nor typed and not every register is saved. So, fast IPC abstractions [Hsieh et al., 1993, Takuro Kitayama, 1993] can be implemented using portals too.

To send a message through a portal (i.e. to invoke a portal) the only thing needed is its identifier. Once such identifier has been obtained the message can be sent. Portals are implemented by the portal server. The whole architecture can be seen in figure 5. The elements involved are:

prtlsrv The portal server. It implements the portal system calls as a passive privileged object (the portal server) providing safe inter-domain portal invocations. The caller and callee applications can be in different protection domains so that they don't need to trust each other. Strictly speaking, this is the actual portal implementation. Roughly speaking, we will refer to the whole architecture as the portal system. A description of the implementation of this component can be found in [Ballesteros, 1996].

prtllib The portal library. It is linked to every application using portals. Though it is not strictly needed, it provides convenient entry points and permits each application to customize or replace the portal interface.

Figure 5: Portal architecture (within one node, user applications link libprtl and reach the privileged portal server through the prtl_ system calls; a transport service extends invocations over the network)

To concrete the things said so far, the next section further specifies Off Portals behavior.

3.3.1 Portal behavior

Portals can have a handler attached, specifying a particular value for the program counter which will be invoked (on the shuttle [Ballesteros and Fernandez, 1996g] who installed it) on message receipt.
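The invocation semantics described above (set the receiver's program counter, push a few machine words onto the callee stack, refuse if no handler is attached) can be sketched as follows. This is a single-node toy with hypothetical names; the real portal server does this safely across protection domains and nodes.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uintptr_t handler_pc;   /* prespecified program counter, or 0  */
    uintptr_t *callee_sp;   /* top of the receiver's stack         */
} portal_t;

/* Returns 0 on success, -1 if the portal has no handler attached. */
int portal_invoke(portal_t *p, uintptr_t *receiver_pc,
                  const uintptr_t *words, size_t nwords, int block_sender)
{
    if (p->handler_pc == 0)
        return -1;                   /* no handler: refuse invocation  */

    *receiver_pc = p->handler_pc;    /* redirect receiver's control    */
    for (size_t i = 0; i < nwords; i++)
        *--p->callee_sp = words[i];  /* push message onto callee stack */

    (void)block_sender;              /* in Off the kernel would block
                                      * the sender atomically here     */
    return 0;
}
```

Note that nothing here names the sender, types the message, or buffers it: those properties are exactly what the text says portals leave to higher levels.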
If a portal has no handler attached it will not accept invocations.

To send a message to a portal, its identifier is the only requirement. No source or sender identifier is ever required. Also, if the protocol used through the network is reliable, portals are reliable too. Thus, portals are reliable and connectionless.

The implementation supports both synchronous and asynchronous message modes. In synchronous mode the sender is automatically blocked until the portal handler further decides to unblock it. In non-blocking mode the sender will not wait until the portal is ready for accepting messages, and an error code may be returned instead.

Both inter-domain and intra-domain calls are feasible too. As an optimization, intra-domain calls could also be handled in the portal library in the same way they are in the portal server. Inter-domain calls are always supported by the portal server via system calls.

The problem of authentication is entirely up to the application owning the portal and should be done by the portal handler.

Finally, portals are neither attached to a node nor to an application. They can migrate both from one application to another and from one node to another. They use system-wide unique identifiers for this purpose.

3.3.2 Discussion

Even though other adaptable IPC systems [Veitch and Hutchinson, 1996] do exist, and many network-wide IPC systems [Liedtke, 1993] do exist too, both things are hardly supported by the same mechanism. Like dtlbs and shuttles, portals keep both adaptability and good properties for system distribution.

3.4 Distributed Software TLBs

The only thing done by Off is to export the network hardware, and this also holds for memory management. The Off distributed memory manager (dmm) implements a Distributed Software TLB (dtlb). Its structure is similar at first sight to that of [Engler et al., 1995]: the user can establish translations from virtual to physical memory addresses and the dtlb is safely multiplexed by Off among the competing applications (see figure 6). The difference is that in the dtlb physical memory addresses can refer not only to local memory but also to remote memory.

Figure 6: DTLBs. The user uses them to translate from vaddrs to paddrs (each user sees its own software TLB, which the kernel multiplexes onto the hardware TLB).

Thus, this distributed TLB can be safely used at every node in the network. The way local memory is used to cache remote one is defined by the user, as the dmm only knows how to multiplex this dtlb and nothing more.

A complete user-level distributed virtual memory system has been designed for Off [Ballesteros and Fernandez, 1996b]. It has been designed to take advantage of the dmm capabilities. In this way, different Distributed Shared Memory and Distributed Virtual Memory (DSM and DVM) systems can coexist in Off, and even a more complex system like a distributed object manager can be easily implemented on it [Ballesteros and Fernandez, 1996a]. They can be adapted because they run in user space and can also be replaced when needed. But even when replaced, the memory services continue to be essentially distributed, as the basic abstraction, the dtlb, allows that.

The dmm introduces three different types of memory addresses. The relationship between
them is as follows:

vaddr_t Plain virtual addresses used by the programs at application level.

paddr_t Distributed physical addresses understood by the software TLB. They are composed of a port_t (identifying the dmm containing the page frame) and a maddr_t.

maddr_t Local physical addresses. They are used by the hardware address translation machinery.

There are two ways to implement a dtlb: first, using a single multiplexed software dtlb where user code can include translations, and second, implementing multiple dtlbs including some means to switch them on the hardware. We have adopted the latter one. This is because the first one, the single dtlb, can also be implemented in this way, and because it fits better on architectures with page tables.
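The three address types above might be sketched in C as follows; the concrete widths are our assumptions, and only the decomposition of a paddr_t into a dmm portal plus a local address is taken from the text.

```c
#include <stdint.h>

typedef uintptr_t vaddr_t;   /* virtual address, application level */
typedef uintptr_t maddr_t;   /* local physical address (hardware)  */
typedef uint64_t  port_t;    /* portal identifying the owning dmm  */

/* A distributed physical address names memory anywhere in the
 * network: the dmm holding the frame plus the frame's address
 * local to that node. */
typedef struct {
    port_t  dmm;
    maddr_t maddr;
} paddr_t;

/* A dtlb entry is a user-installed translation vaddr -> paddr;
 * the paddr may point at remote memory. */
typedef struct { vaddr_t va; paddr_t pa; } dtlb_entry_t;

/* Application code may extract the local maddr for a given paddr. */
maddr_t paddr_to_maddr(paddr_t p) { return p.maddr; }
```

Because a paddr_t carries the owning dmm's portal, installing a translation to remote memory is structurally no different from installing a local one.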
Software at application level uses only vaddrs and paddrs (which can refer to local and remote memory), though it can extract the maddr for a given paddr.

Whenever a distributed physical address is first referenced, the dmm will use local memory to cache the remote page frame. In this way remote memory can be used transparently. Coherency is dealt with at user level, as we say in [Ballesteros and Fernandez, 1996b].

There are three parts in the dmm implementation:

1. The hardware dependent part (more or less equivalent to the HAT (Hardware Address Translation) layer in conventional UNIX systems). It deals with the address translation hardware, be it a single TLB or a full MMU, and also splits the physical memory into page frames. Memory allocation is thus handled by this part of the dmm. We have labeled it as "hardware dependent" because the dmm can also be used to deal with on-disk memory (see [Fernandez and Ballesteros, 1997]).

2. The communications subsystem. The part of the dmm which deals with any other dmm out there.

3. The core of the dmm. This part is the glue that provides the dmm interface relying on the previous ones.

3.4.1 Discussion

The dtlb is the first abstraction we know of which permits physical memory multiplexing, including remote memory, while supporting user-level virtual memory management. AVM [Engler et al., 1995] is another system which permits user-level virtual memory systems, but it is not usable on page-table architectures (i.e. with MMU). The abstraction provided by Off works well on both RISC TLB based and CISC MMU based address translation hardware.

4 Related work

There are many adaptable and distributed OSes. The key difference between them and Off is how adaptability and system distribution are combined and which grain of abstraction must be provided by the system kernel.

We will now describe those related approaches for adaptable or extensible OSes. All of them have only some of the problems stated above and use different strategies to solve them. Although this work has inspired part of our work, our approach is radically different.

The Cache Kernel [Cheriton and Duda, 1994] achieves extensibility by considering a resource caching model for the OS. Although it could be extended to consider a network-wide resource caching model, it has not been conceived in that way. In contrast, Off will not impose obstacles to system distribution by modeling only local hardware resources (as it happens in the following systems too). Another crucial difference is that it imposes a protection model where downloading code into the kernel is forbidden. In Off, the kernel simplicity avoids the need for code downloading.

Another adaptable model is the exokernel based Aegis system [Engler, 1995]. One more time, this system is only modeling local hardware resources, while Off is aware of and can handle local and remote resources. In our system, centralized applications can still benefit from Off system distribution (they can migrate, use remote resources, communicate transparently over the network, etc.). In both models there is little or no reason to extend the kernel, because nearly all functionality is under the control of the application.

In Spin [Bershad et al., 1995], the system can be extended by downloading spindles, i.e. user code, into the kernel. In spite of this, the degree of adaptability is constrained and it would be hard both to distribute system services and to make centralized applications benefit from previous system distribution, not to mention that system extensions must be written in a particular language. Also, the system complexity increases to permit user code downloading.

Paramecium [van Doorn et al., 1995] is also an extensible OS, but it tries to explore system distribution at a higher level and is tied to the Orca system. In this approach the system kernel is considered to be just a local-resource multiplexor. The Paramecium model can be implemented on top of Off if desired.

More information about other related extensible and adaptable OSes can be found in [Small and Seltzer, 1996].

5 Conclusions

We have shown the general design of the Off system. We have also stated how the Off approach departs from other OS architectures and how its abstractions can be used as basic building blocks to adapt and specialize any application level software.

In future technical reports the design and the implementation of the Off servers will be detailed in turn. The system has been implemented natively, using literate programming with LaTeX as the documentation language and C as the implementation language [Knuth, 1992]. The pretty-printed version of the system source code will also be made available in future reports.

As future work we plan to implement user-level distributed operating systems and distributed object systems to run experiments to stress the strengths and weaknesses of the architecture being proposed.

References

[Accetta et al., 1986] Mike Accetta, W. Bolosky, D. Golub, R. Rashid, A. Tevanian, and M. Young. Mach: A new kernel foundation for unix development. In USENIX Summer Conference, Jul 1986.

[Ballesteros and Fernandez, 1996a] Francisco J. Ballesteros and Luis L. Fernandez. An adaptable and extensible framework for distributed object management. In Special Issues in Object Oriented Programming. Workshop reader of the 10th European Conference on Object-Oriented Programming, ECOOP'96, Linz (Au), July 1996.

[Ballesteros and Fernandez, 1996b] Francisco J. Ballesteros and Luis L. Fernandez. Advice: An adaptable and extensible distributed virtual memory architecture. In Parallel and Distributed Computing Systems, 1996.

[Ballesteros and Fernandez, 1996c] Francisco J. Ballesteros and Luis L. Fernandez. The network is the operating system hardware. Submitted for publication, 1996.

[Ballesteros and Fernandez, 1996d] Francisco J. Ballesteros and Luis L. Fernandez. The Off slab and handle allocators. Literate source code, 1996.

[Ballesteros and Fernandez, 1996e] Francisco J. Ballesteros and Luis L. Fernandez. The Off
system services library. Literate source code, 1996.

[Ballesteros and Fernandez, 1996f] Francisco J. Ballesteros and Luis L. Fernandez. Off web page. http://www.gsyc.inf.uc3m.es/~nemo/o.html, 1996.

[Ballesteros and Fernandez, 1996g] Francisco J. Ballesteros and Luis L. Fernandez. Shuttles: Off process services. Literate source code, 1996.

[Ballesteros, 1996] Francisco J. Ballesteros. Portals: An implementation for Off inter-process communication. Literate source code, 1996.

[Bershad et al., 1995] B.N. Bershad, S. Savage, P. Pardyak, E.G. Sirer, M. Fiuczynski, D. Becker, S. Eggers, and C. Chambers. Extensibility, safety and performance in the SPIN operating system. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles. ACM, December 1995.

[Cheriton and Duda, 1994] D. Cheriton and K. Duda. A caching model of operating system kernel functionality. In Proceedings of the First Symposium on Operating Systems Design and Implementation, pages 179-193, November 1994.

[Engler et al., 1994] D. Engler, M. F. Kaashoek, and J. O'Toole. The operating systems kernel as a secure programmable machine. In Proc. of the 6th SIGOPS European Workshop, pages 62-67, Wadern, Germany, Sept 1994. ACM SIGOPS.

[Engler et al., 1995] Dawson R. Engler, Sandeep K. Gupta, and M. F. Kaashoek. AVM: Application-level virtual memory. In Hot Topics in Operating Systems (HotOS-V), 1995.

[Engler, 1995] Dawson R. Engler. The Design and Implementation of a Prototype Exokernel Operating System. PhD thesis, Massachusetts Institute of Technology, January 1995.
[Fernandez and Ballesteros, 1996] Luis L. Fernandez and Francisco J. Ballesteros. Advice: Adaptable virtual core engine. Notes on design and implementation. Literate source code, 1996.

[Fernandez and Ballesteros, 1997] Luis L. Fernandez and Francisco J. Ballesteros. Advice: An adaptable and extensible distributed virtual memory architecture. Technical report, Carlos III University of Madrid, 1997. Submitted for approbation.

[GoodHeart and Cox, 1994] GoodHeart and Cox. The Magic Garden Explained. Prentice-Hall, 1994.

[Hsieh et al., 1993] W.C. Hsieh, M.F. Kaashoek, and W.E. Weihl. The persistent relevance of IPC performance: New techniques for reducing the IPC penalty. In 4th Workshop on Workstation Operating Systems, pages 186-190, October 1993.

[Knuth, 1992] Donald E. Knuth. Literate Programming. Center for the Study of Language and Information, Stanford University, 1992.

[Liedtke, 1993] J. Liedtke. Improving IPC by kernel design. In 14th ACM Symposium on Operating System Principles (SOSP), pages 175-188, Asheville (NC), 1993.

[Mullender et al., 1990] S.J. Mullender, G. Van Rossum, A.S. Tanenbaum, R. Van Renesse, and H. Van Staveren. Amoeba: A distributed operating system for the 1990s. IEEE Computer, 14:365-368, May 1990.

[Pike et al., 1990] R. Pike, D. Presotto, K. Thompson, and H. Trickey. Plan 9 from Bell Labs. In UKUUG Proceedings of the Summer 1990 Conference, London (England), July 1990.

[Small and Seltzer, 1996] C. Small and M. Seltzer. A comparison of OS extension technologies. In Proceedings of the 1996 USENIX Technical Conference, San Diego, CA, January 1996.
[Takuro Kitayama, 1993] Takuro Kitayama, Tatsuo Nakajima, and Hideyuki Tokuda. RT-IPC: An IPC extension for Real-Time Mach. In Proceedings of the 2nd Microkernel and Other Kernel Architectures. USENIX, 1993.

[Tanenbaum et al., 1986] A.S. Tanenbaum, S.J. Mullender, and R. Van Renesse. Using sparse capabilities in a distributed operating system. In Proceedings of the Sixth International Conference on Distributed Computing Systems, pages 558-563. IEEE, 1986.

[Valencia] Andy Valencia. An overview of the VSTa microkernel. http://www.igcom.net/~jeske/VSTa/.

[van Doorn et al., 1995] L. van Doorn, P. Homburg, and A.S. Tanenbaum. Paramecium: An extensible object-based kernel. In Proceedings of the 5th Hot Topics in Operating Systems (HotOS) Workshop, pages 86-89, Orcas Island, May 1995.

[Veitch and Hutchinson, 1996] Alistair C. Veitch and Norman C. Hutchinson. Kea: A dynamically extensible and configurable operating system kernel. In Proceedings of the Third Conference on Configurable Distributed Systems (ICCDS'96), 1996.