Operating System Level Support for Resource Sharing Across Multiple Domains

Hao Wang, Wei Li, Donghua Liu, Li Cha, Haiyan Yu, Zhiwei Xu
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
{wanghao, liwei, dliu, char, yuhaiyan, zxu}@ict.ac.cn

Abstract

On-demand resource sharing is a fundamental requirement for distributed systems. However, the existence of multiple administrative domains makes it hard to share resources among these domains dynamically. We believe an important reason is the domain-dependent user management in traditional operating systems, and previous middleware approaches cannot solve the problem thoroughly. In this paper, we propose the Cross-Domain Operating System, which implements domain-independent user management at the operating system level. The following methods are adopted to achieve this goal: 1) using global user identifiers to replace traditional user identifiers; 2) implanting global user identifiers into kernel entities such as processes and files; 3) adopting an access control mechanism based on global user identifiers. The most important feature of our approach is that we implement it by introducing innovative elements into operating system kernels. Compared with middleware approaches, our method can provide better security, performance and compatibility.
1. Introduction

Efficient sharing of computational resources is a principal challenge for operating system (OS) technologies, and resource sharing across multiple administrative domains is a long-standing problem in the design of distributed systems such as Grids [5]. In particular, the goal of Grids is "coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations" [7]. That is, a virtual organization (VO) should contain sharable resources managed by different institutions or administrative domains. To realize such a VO, a necessary condition is support for dynamic resource sharing among a huge number of grid users coming from many domains. However, user management in traditional OS presents a great obstacle to flexible and dynamic resource sharing in wide-area environments. Centralized user management and access control mechanisms in single-domain systems (such as a single machine or a cluster) place a great burden on administrators who must support dynamic sharing requirements. For a publicly accessible domain in a Grid, it is very hard to predetermine who will access the resources of this domain, or when,
because the sharing behaviors may be highly dynamic and uncertain. Resources may be accessed by all users or by only a few users; they may be utilized occasionally or frequently. We believe the reason for this situation is the use of domain-dependent user management in traditional OS. In single-domain systems, domain-dependent local user accounts are provided, and access control policies are also based on these local accounts. The domain-dependent information is stored in entities such as processes and files, and it is invalid in other domains. This pattern of user management makes resource sharing among multiple domains very difficult. On the one hand, it is hard to move users among domains without the administrator's intervention at the destination domain. On the other hand, it is also very difficult to move resources among domains without changing the resources' attributes at the destination domain. For example, when a file moves from one domain to another, the domain-specific information (e.g., UID, GID and access control information) in this file becomes invalid at the destination domain. In [15], the above problems are also described as "transient (user) collaboration with little or no intervention from resource administrators". The problems caused by domain-dependent user management stem from the early design of traditional OS, which mainly focused on single-domain issues. At that time, a common scenario was many users sharing resources within a single domain (e.g., a mainframe). A practical solution was to manage user accounts and access control policies centrally, without considering interactions with other domains. In addition, several kernel technologies such as processes and files were implemented in a domain-dependent way. All of these couple users and resources tightly with administrative domains and present a great obstacle to achieving the goal of dynamic resource integration in Grids. Many previous systems [2, 3, 9, 11, 18, 20] have implicitly or explicitly recognized the above problems and proposed various approaches to decouple user management from administrative domains. However, most of them did not point out the limitation in the design of traditional OS, and most of them use middleware approaches to solve the problem. Different from these systems, we enhance traditional OS with the ability of cross-domain sharing at the operating system level. In our approach, the OS kernel is extended to support
domain-independent user management. In this paper, such a system is called the Cross-Domain Operating System (CDOS for short). The rest of this paper is organized as follows. Section 2 introduces the background and design goals. Section 3 introduces the related work. Section 4 gives the design details. Section 5 discusses implementation issues, Section 6 evaluates our work, and Section 7 concludes the paper.
2. Background and Design Goals

Early OS designs mainly focus on system efficiency, system adaptability, generality of applications and system reliability [1]. They assume that all entities in an OS exist in a single administrative domain, and cross-domain requirements are not considered. From the view of administrators, user accounts and access control policies are relatively stable and determined. However, domain-dependent user management cannot support the dynamic resource sharing required in Grids. To implement domain-independent user management, we focus on three design goals:
1) Decoupling user identifiers from administrative domains. This goal seeks to use domain-independent global user identifiers (GUIDs) to replace current UIDs. In addition, user management should be fulfilled by a domain-independent system.
2) Decoupling the access control mechanism from administrative domains. Currently, access control information in one domain is invalid for users of other domains. In a multiple-domain environment, we should provide a GUID-based access control mechanism that is valid for all grid users wherever they are.
3) Decoupling OS kernel entities from administrative domains. Processes and files are the kernel entities that provide computational resources. However, domain-dependent user information and access control policies hinder users from accessing resources globally. Domain-independent entities should be provided to enable global access to resources.
The above three issues have also been identified by recent grid research. In the Globus Toolkit [11], GSI-based GUIDs [23] are used to uniquely represent grid users, grid services [5] are used to decouple system entities from traditional domains, and CAS [23] is used to decouple access control mechanisms from domains.
3. Related Work

Much research work has contributed to domain-independent user management, and it can be summarized in two classes. The first class uses a single global domain to manage multiple domains centrally. In such a global domain, all machines share the same copy of user accounts, obey the same access control policies, and provide the same runtime environments. Examples include [21, 22, 4]. In particular, in cluster systems, people choose to integrate many machines into a single-system-image system [12] to avoid the cross-domain problem.
However, in an environment such as the Grids, multiple domains do exist, and it is impossible to integrate all computers into a single domain. The second class uses a middleware layer on top of multiple domains to provide domain-independent user management. Recent grid systems such as Globus and Legion both provide domain-independent user management via middleware. In Globus, each grid user has a globally unique X.509 certificate to represent his identity. Related tools such as Simple CA are used to manage user certificates [23]. Grid services, as the system entities of the Globus middleware, are designed to be loosely coupled with low-level OS domains. CAS [23] is proposed to support flexible access control among multiple domains. All these components can greatly ease the construction of virtual organizations. Other examples include [9, 20, 18, 2]. Keeping cross-domain sharing functions out of the OS leaves legacy systems (OS and applications) unaffected; however, such an approach brings security problems and performance loss. For security, as pointed out by [15], a middleware approach violates some fundamental security principles. One problem is that it does not support legacy applications and system services; another is that it lowers the overall security. For performance, an additional layer may bring extra cost for deploying and accessing resources. Our approach differs fundamentally from the above methods. In CDOS, we accept the existence of multiple administrative domains. In addition, compared with the middleware approach, we implement domain-independent user management at the OS level by extending OS kernels. A similar method has been used in our other work called VegaFS [14], which aims at file sharing across multiple domains. Our approach is also different from the work in [15], which mainly aims at providing fine-grained access control for file resources. They use extended file permissions to support collaboration in the Grids at the OS level, but they do not modify OS kernels and leave traditional user management unchanged. In contrast, we use GUIDs to replace traditional UIDs at the OS level; kernel entities (processes and files) and the access control mechanism are modified to support this feature, and our approach supports shared memory in addition to file resources. Our approach is also different from the security identifier (SID) in Windows NT [10]. At the NT kernel level, security identifiers are used to identify different entities including files, devices, users and domains. A SID is unique and centrally managed within a domain; in another domain, this SID is invalid, which is similar to the situation in UNIX-like OS.
4. Design

4.1 System Structure

In our design, CDOS comprises the following components (shown in Figure 1):
Figure 1. The structure of the proposed CDOS
- Global User Identifiers (GUIDs). GUIDs should satisfy the following requirements: 1) globally unique; 2) maintained in a decentralized way; 3) hard to forge. We choose the X.509-based Distinguished Name (DN) to meet these requirements. First, the Certificate Authority (CA) can guarantee the global uniqueness of an issued DN; second, the Public Key Infrastructure (PKI) [16] can manage certificates with good scalability; and third, the PKI can also guarantee that a GUID is hard to forge. Similar applications can be found in distributed systems such as [3, 8, 13, 17, 9, 11].
- Global User Management System (GUMS). In CDOS, we need a domain-independent GUMS to issue, identify and update GUIDs with good scalability. Due to the distributed and global properties of PKI, we adopt it as the GUMS for user registration, certificate issuing and other regulation tasks.
- CDOS Domains. A CDOS domain provides resources via kernel entities such as processes and files. It should guarantee the authenticity of users and the correctness of sharing behaviors. It should also protect user information stored in kernel entities. In addition, it should establish a trust relationship with the GUMS. When a user accesses a domain, the domain should verify whether he has a certificate issued by a trusted CA.
- CDOS Files. In traditional OS, hardware resources can be divided into several classes such as processors, memory, disks, and devices. The utilization of these resources depends on two basic kernel entities: processes and files. In CDOS, each file (normal files and device files) is marked by the owner's GUID
instead of the UID. Then, file resources can be identified uniquely wherever they are.
- CDOS Processes. In CDOS, we use a GUID to replace the UID of a process. When a CDOS process's static image is loaded to run, the owner's GUID is transferred and stored in this process's structure. When the process interacts with file resources or shared memory, the kernel uses the owner's GUID to check the access rights.
- CDOS Access Control Information Directory. In CDOS, traditional UID-based access control mechanisms become unavailable, so we provide a free space called the Access Control Information Directory (ACID) to store access control policies for resource providers. The kernel obtains information from the ACID to guarantee correct behaviors between providers and consumers. Access control can be divided into two classes: processes vs. processes and processes vs. files. The sharing between processes mainly refers to shared memory; the sharing between processes and files mainly refers to data resources or devices. The kernel guarantees that only resource owners can edit their access control information stored in the ACID; even an administrator cannot modify a provider's access control policies without permission. Simplified sketches of the GUID-carrying kernel structures are given after this list.
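As a concrete illustration of these components, the fragment below sketches, in simplified C, how a GUID (an X.509 DN string) could accompany the kernel entities just described. The structure names and all fields other than guid are abbreviated placeholders rather than actual Linux definitions; the guid field and its 128-byte limit follow the description given later in Section 5.3.

#define GUID_MAX 128  /* maximum DN length assumed in our prototype */

/* Simplified view of a CDOS process descriptor: the traditional UID is
 * kept for compatibility, and a GUID (X.509 DN) is added alongside it. */
struct cdos_task {
    unsigned short uid;          /* legacy, domain-dependent identifier */
    char guid[GUID_MAX];         /* domain-independent identifier (DN)  */
    /* ... other process state ... */
};

/* Simplified view of a CDOS file: the on-disk inode is paired with a
 * companion record holding the owner's DN (see Section 5.3). */
struct cdos_file {
    unsigned short uid;          /* legacy owner UID                    */
    char owner_guid[GUID_MAX];   /* owner's DN, valid across domains    */
    /* ... other inode fields ... */
};

/* Simplified view of a shared-memory segment descriptor. */
struct cdos_shm {
    char owner_guid[GUID_MAX];   /* owner's DN used for access checks   */
    /* ... size, attach count, etc. ... */
};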
4.2 Provider's Operations

For resource providers, there are four important steps (indicated by dashed lines in Figure 1):
1) A resource provider gets an X.509-based DN from a CA. That is, the provider obtains a certificate issued by this CA together with a public/private key pair.
2) The provider then logs in to a domain with his certificate. If the certificate is issued by a trusted CA, the domain requests a signature created with the provider's private key and then uses the provider's public key to verify it. If the authentication succeeds, the domain forks a new process (a shell process) and sets this provider as the owner of the process. This operation is implemented by a special program called UniLogin (discussed in Section 5.4), which calls fork() to create a new process and calls a new system call setguid() to write the provider's DN into the new process.
3) If login is successful, the provider can operate in his shell environment. For example, he can create a new data or device file, and provide a block of shared memory for other processes. In our implementation, we extend related system calls such as open() and shmget() to support these operations (a user-space sketch of this step follows this list).
4) A provider can set the sharing policies for his resources in the ACID, and the kernel performs access control actions according to these policies.
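To make the provider's steps concrete, the fragment below sketches step 3) from user space under our prototype's assumptions: the shell created by UniLogin already carries the provider's DN, so ordinary open() and shmget() calls produce GUID-marked kernel entities. The file name and segment size here are hypothetical.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

int main(void)
{
    /* Create a file; the extended open() writes the provider's DN into
     * the companion inode described in Section 5.3. */
    int fd = open("program1", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    /* Create a 4 KB shared-memory segment; the extended shmget() stores
     * the provider's DN in the segment descriptor. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); return 1; }

    printf("created file 'program1' and shared memory id %d\n", shmid);
    /* Step 4): the provider would now edit his policy file in /etc/acid
     * to grant consumers access to these resources. */
    return 0;
}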
4.3 Consumer’s Operations
For resource consumers, the important operations are (indicated by solid lines in Figure 1):
1) and 2) These are the same as for providers. A consumer should also get a certificate and a public/private key pair, log in to a domain, and start a shell process marked with his DN.
3) If login is successful, the consumer can access resources. For shared memory resources, the kernel gets the DNs of both the consumer and the provider and checks the ACID to determine whether the consumer has the access rights. If so, the consumer process can operate on the shared memory. These functions are implemented in an extension of the shmget() system call.
4) Similar to 3), when a consumer accesses file resources, the kernel gets the DNs of both users and checks the ACID. These functions are implemented in an extension of the open() system call (a user-space sketch of these consumer operations follows this list).
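The consumer side can be illustrated with a similarly small user-space fragment. The segment key and file path below are hypothetical; the point is that the consumer issues ordinary shmget() and open() calls, and the extended kernel consults the provider's ACID policies before granting access.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Attach to a provider's existing segment (key 0x1234 is hypothetical).
     * The extended shmget() compares our DN with the owner's ACID policy. */
    int shmid = shmget(0x1234, 0, 0);
    if (shmid < 0) { perror("shmget: access denied or no such segment"); return 1; }
    void *addr = shmat(shmid, NULL, SHM_RDONLY);
    if (addr == (void *)-1) { perror("shmat"); return 1; }

    /* Open a provider's file; permission_cdos() consults the ACID in the kernel. */
    int fd = open("/home/provider/program1", O_RDONLY);
    if (fd < 0) { perror("open: access denied"); return 1; }

    printf("granted access to shared memory and file\n");
    close(fd);
    shmdt(addr);
    return 0;
}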
4.4 Administrator's Operations

In essence, an administrator is a special resource provider that manages hardware resources. After a CDOS starts up, an administrator process runs as the gatekeeper of the domain. A user can enter this domain and obtain the needed resources after authentication via the UniLogin program, which is executed by the administrator. Only users who are authorized by the domain kernel are allowed to enter this domain, and the corresponding access control information is stored in the administrator's access policy file in the ACID. In addition, the certificate of an administrator process comes from the kernel, so a normal user can also authenticate the genuineness of the domain by this certificate. In CDOS, the main task of an administrator is to determine who is allowed to log in to this domain (i.e., which GUIDs are local users of this domain). The generation, issuing, and updating of user certificates are regulated by CAs.
4.5 Kernel's Operations

A CDOS kernel is a supervisor that guarantees the sharing behaviors between providers and consumers. Each kernel also has a certificate and a DN, which serves as the administrator's GUID; the DN is stored in a secure place invisible to all other users. The startup of a CDOS kernel is the same as that of a traditional OS. After startup, it starts the init process (in Linux) and implants the kernel's DN in it. Administrators use this process as a delegate of the kernel to manage hardware resources. In addition, the CDOS kernel knows about the ACID and performs access control actions according to the access policies stored there. Administrators can edit the domain's access control information in the ACID to determine who can use this domain's hardware resources.
5. Implementations

We have implemented a prototype of CDOS on Linux, a classical open-source UNIX-like OS.
5.1 Security Assumptions

For simplicity, we do not consider mutual authentication between the kernel and users. That is, a user must trust the domain (i.e., the UniLogin program) and be willing to provide his private key. We also assume that a user's private key is stored in a secure place invisible to other users except for the domain kernel. In the future, we can further improve our system and make private keys invisible even to kernels.
5.2 DNs and Certificate Authorities

Currently, the prototype of CDOS adopts OpenSSL [19] version 0.9.7a, a widely used open-source toolkit, as the authorization infrastructure. The X.509-based certificates we use are 128 bytes long, and we use the tools provided by OpenSSL to build CAs.
5.3 Kernel Extensions

We make the following extensions to the kernel and utilities:
- Modifications to the process structure. In the task_struct of the original kernel, there is a field uid holding a UID, whose type is unsigned short. This field cannot hold a DN (a character string), so we add a new field guid (a character string with a maximum length of 128 bytes) to task_struct to store the DN. When creating a process, the kernel sets this field. When a process operates on files or shared memory, the kernel gets the owner's DN from this field. The fields
such as euid, gid, and groups remain unchanged; currently, they are not supported by our implementation.
- Modifications to the inode structure. Each CDOS file should store a 128-byte DN string. In the original inode structure there is no room for this string, so we use an additional inode (of 128-byte size) for each file to store the DN, which is obtained from the guid field in task_struct. We use the field i_flags in the first inode to indicate the presence of the second inode, when it equals 0x00008000. The location of an inode's data on disk can be obtained through the i_sb and i_ino fields of the inode structure. When we create a file, we allocate two inodes and give them the same i_sb field, so i_sb can be obtained from the first inode. We store the i_ino of the second inode in the i_generation field of the first inode (we assume this field is not otherwise important). These functions are implemented by modifying the kernel function ext3_new_inode() and by a new kernel function ext_new_inode_cdos().
- Modifications to the shared memory structure. A block of shared memory is represented by the kernel structure shmid_kernel. Its field shm_perm (of type kern_ipc_perm) indicates the owner of the shared memory. As with processes, we add a new field guid to store a DN. We also modify the related kernel function newseg() to set the guid field when creating a shared memory segment. In addition, a new kernel function ipcperms_cdos() is provided to replace the original ipcperms() for access control; the difference is that the former gets the owner's access control information from the ACID to verify whether other processes can access this shared memory.
- GUID-based access control mechanisms. We mainly modify two access control mechanisms of the original kernel: permission(), which judges the access permission to a file, and ipcperms(), which judges the access permission to shared memory. We provide a function permission_cdos() to replace permission(). In permission_cdos(), the kernel first gets the owner's DN from the file via a new function ext3_get_inode_guid(); then it gets the DN of the process that accesses the file; next, it checks the ACID to see whether this process has the access right by calling a new function get_mode_from_acid(). For shared memory, we provide the kernel function ipcperms_cdos() as the substitute for ipcperms(). When a process accesses an existing shared memory segment, the kernel gets the DN from this process and the owner's DN of the shared memory from the variable shp, a pointer to the shmid_kernel structure; the kernel then calls ipcperms_cdos() to verify the access permissions.
- Modifications to system calls. The system call open() is modified to write a DN into an inode when creating a file, and permission_cdos() is executed when accessing an existing file. The system call shmget() is
modified to write a DN into the structure of a shared memory segment, and ipcperms_cdos() is executed when accessing an existing shared memory segment. For compatibility, the interfaces of these system calls remain unchanged.
- New system calls. The original setuid() takes an integer parameter, which cannot carry a character string. We provide a new system call setguid() to write a DN into the guid field of task_struct; its prototype is int setguid(const char *dn). A corresponding call is getguid(). Only administrator processes can execute setguid(), while normal processes can only execute getguid() (a minimal sketch of setguid() is given after this list).
- Modifications to the kernel startup process. In our implementation, the function init() is modified to store a DN in the guid field of the administrator process's task_struct when the kernel starts up.
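As referenced in the list above, the following is a minimal sketch of what the new setguid() system call could look like in a 2.4-series kernel. It is not the actual CDOS source: the administrator check via capable(CAP_SYS_ADMIN) is one possible realization of the rule that only administrator processes may call setguid(), and error handling is reduced to the essentials.

/* Hypothetical sketch of the setguid() system call described above.
 * Assumes task_struct has been extended with: char guid[128]; */
#include <linux/linkage.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <asm/uaccess.h>

#define GUID_MAX 128

asmlinkage long sys_setguid(const char *dn)
{
    char buf[GUID_MAX];
    long len;

    /* Only an administrator process may assign a DN; capable() is one
     * possible way to express that check. */
    if (!capable(CAP_SYS_ADMIN))
        return -EPERM;

    /* Copy the DN from user space; reject names that do not fit in 128 bytes. */
    len = strncpy_from_user(buf, dn, GUID_MAX);
    if (len < 0)
        return -EFAULT;
    if (len == GUID_MAX)
        return -EINVAL;

    memcpy(current->guid, buf, len + 1);   /* guid is the field added to task_struct */
    return 0;
}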
5.4 UniLogin

In traditional OS, the login program is used to create a new shell: it calls fork() to create a new process and setuid() to make it another user's process. In our implementation, a new program called UniLogin is provided to replace the original login. UniLogin is started by the first administrator process and waits for a user's input. After a user presents his certificate, UniLogin consults the administrator's access control information to see whether this user is allowed to log in. If so, UniLogin verifies whether the certificate is issued by a trusted CA. If it is, UniLogin prompts the user for his private key, uses the private key to encrypt a message and generate a signature, and then uses the user's public key to decrypt the signature and compare it with the original message. If they are the same, UniLogin executes fork() and setguid() to create a new shell for this user.
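A minimal sketch of UniLogin's last step is given below, assuming certificate checking and the signature challenge have already succeeded. The syscall number and the thin wrapper around the new setguid() call are placeholders; the flow of fork(), setguid() and exec of a shell follows the description above.

/* Simplified sketch of UniLogin's final step (verification already done).
 * __NR_setguid is a placeholder: the real number depends on how the new
 * call is registered in the kernel's syscall table. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef __NR_setguid
#define __NR_setguid 259            /* hypothetical syscall number */
#endif

static int setguid(const char *dn)
{
    return syscall(__NR_setguid, dn);
}

static void spawn_shell(const char *user_dn)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: take on the authenticated user's identity, then exec a shell. */
        if (setguid(user_dn) != 0) {
            perror("setguid");
            _exit(1);
        }
        execl("/bin/sh", "sh", "-l", (char *)NULL);
        _exit(1);                   /* reached only if exec fails */
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);      /* UniLogin waits for the session to end */
    }
}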
5.5 ACID and Access Control Schemes

In CDOS, each user has a unique file in /etc/acid that stores his access control policies. The file can be edited by its owner only, and the owner's DN is used as the file name (with '%' replacing '/'). There are two sections describing the access policies. The SHAMEM section indicates who can access the owner's shared memory with what access right; the format is "[user's DN] [access right]". The FILE section indicates who can access which file with what access right; the format is "[file name] [user's DN] [access right]". Figure 2 gives an example for a user "/C=CN/O=AC/OU=ICT/[email protected]" (user lin for short). It indicates that a user "/C=CN/O=AC/OU=ICT/[email protected]" can read user lin's shared memory and the file "program1", and a user "/C=CN/O=AC/OU=ICT/[email protected]" can read lin's shared memory and execute "program2".
# Access control policies edited
# by /C=CN/O=AC/OU=ICT/[email protected]

[SHAMEM]
/C=CN/O=AC/OU=ICT/[email protected]    read_only
/C=CN/O=AC/OU=ICT/[email protected]    read_write

[FILE]
program1    /C=CN/O=AC/OU=ICT/[email protected]    read_only
program2    /C=CN/O=AC/OU=ICT/[email protected]    exec_only

# For Administrator Only #
# /bin/shell    /C=CN/O=AC/OU=ICT/[email protected]    exec_only

Figure 2. An example policy file for the user "/C=CN/O=AC/OU=ICT/[email protected]". The file name is "%C=CN%O=AC%OU=ICT%[email protected]".
When the kernel executes functions such as permission_cdos() and ipcperms_cdos(), it first gets the DNs of both the provider and the consumer. It then reads the file in the ACID whose name corresponds to the provider's DN to check the permission. If the provider has granted the consumer the permission, the kernel executes the sharing actions. An administrator can use the [FILE] section to determine who can log in, by authorizing a user with the exec_only right on the program /usr/bin/shell. When a user logs in via UniLogin, the kernel checks this information and determines whether to start a new shell.
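The lookup performed by get_mode_from_acid() can be illustrated with a user-space sketch that parses a policy file in the Figure 2 format and returns the right granted to a consumer DN for a given file. The in-kernel version works on a file buffer rather than stdio, and the function name and buffer sizes below are illustrative assumptions.

#include <stdio.h>
#include <string.h>

/* Illustrative lookup over a Figure-2 style policy file: find the access
 * right granted to consumer_dn for file `name` in the [FILE] section.
 * Returns 1 and fills `mode` on a match, 0 otherwise. */
static int lookup_file_right(const char *policy_path, const char *name,
                             const char *consumer_dn, char *mode, size_t modelen)
{
    FILE *fp = fopen(policy_path, "r");
    char line[512], fname[128], dn[256], right[32];
    int in_file_section = 0;

    if (!fp)
        return 0;
    while (fgets(line, sizeof(line), fp)) {
        if (line[0] == '#')
            continue;                       /* comment line */
        if (strncmp(line, "[FILE]", 6) == 0) { in_file_section = 1; continue; }
        if (line[0] == '[') { in_file_section = 0; continue; }
        if (in_file_section &&
            sscanf(line, "%127s %255s %31s", fname, dn, right) == 3 &&
            strcmp(fname, name) == 0 && strcmp(dn, consumer_dn) == 0) {
            snprintf(mode, modelen, "%s", right);
            fclose(fp);
            return 1;                       /* e.g. "read_only" or "exec_only" */
        }
    }
    fclose(fp);
    return 0;
}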
5.6 Compatibility Issues

One important factor to be considered is compatibility with legacy applications. Currently, we add only two system calls, getguid() and setguid(), to support GUIDs, and the interfaces of the other modified system calls remain unchanged. We also retain the original getuid()/setuid() to support legacy applications. Utilities such as the passwd file, the shadow file, and the passwd command are also retained. For legacy applications, little or no modification is needed to migrate to CDOS and obtain the security and cross-domain benefits of our design.
6. Evaluations

We implemented our design in the linux-2.4.20.8 kernel. All the cases described previously run successfully. The C code added to the kernel is about 2000 lines. We used an Intel PC with the following characteristics: 1) a Celeron processor at 1.5 GHz; 2) 256 MB RAM; 3) a 40 GB Seagate IDE disk. We tested all modified system calls and compared them with the original system calls. Each test was repeated 20 times, and we use the average value as the final result.
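For reference, measurements of this kind can be reproduced with a small timing harness along the following lines. This is only a sketch of the methodology (20 repetitions, averaged, microsecond timing via gettimeofday()), not our actual test program.

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define REPEAT 20

/* Time repeated calls to getuid() and report the average latency in
 * microseconds; the other (modified) system calls are measured the same way. */
int main(void)
{
    struct timeval start, end;
    double total = 0.0;
    int i;

    for (i = 0; i < REPEAT; i++) {
        gettimeofday(&start, NULL);
        (void)getuid();
        gettimeofday(&end, NULL);
        total += (end.tv_sec - start.tv_sec) * 1e6 +
                 (end.tv_usec - start.tv_usec);
    }
    printf("getuid: average %.1f microseconds over %d runs\n",
           total / REPEAT, REPEAT);
    return 0;
}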
6.1 Performance Analysis

In this section, we give the test results and analyze the performance impact (all times are in microseconds).
Table 1. getuid/getguid and setuid/setguid
System Call:   getuid   setuid   getguid   setguid
Result (us):   7        11       12        17

In Table 1, the execution of setuid() is a bit slower than getuid(). This is because setuid() checks the setuid permission of the executable file, which brings extra cost compared to getuid(). The execution of getguid() is slower than getuid() because it copies a string rather than an integer from the process to a buffer, which takes more time. A similar difference can be observed between setguid() and setuid().

Table 2. Creating a new file
System Call:   open (original)   open (modified)
Result (us):   95                193
In Table 2, the modified open() takes approximately twice as long as the unmodified one. The reason is that when creating a new file, the modified open() creates another inode to store the DN; the increased time mainly comes from creating and initializing this inode. In addition, because we only test open() with the O_CREAT flag, the time-consuming I/O operation does not actually occur, which is why it is much quicker than opening an existing file (Table 3).

Table 3. Opening another user's files
System Call:   open (original)   open (modified)
Result (us):   15459             17940

In Table 3, the cost of the modified open() increases slightly. Two operations affect the result. The first is opening the owner's file and getting the DN, which accesses two inodes and costs a bit more time than the original. The second is getting access control information from a file buffer in the ACID, whose time is approximately equal to that of shmget() in Table 5.
Table 4. Creating a shared memory segment
System Call:   shmget (original)   shmget (modified)
Result (us):   138                 142

In Table 4, the modified shmget() spends more time. The extra cost comes from copying the DN string; this test is similar to the operation of getguid(). shmget() with IPC_CREAT only creates the shared memory segment, which requires getting a DN from task_struct and initializing related structures such as shmid_kernel. The modified call is about 4 microseconds slower, and the difference is approximately equal to that between getuid() and getguid() (Table 1).

Table 5. Getting another process's shared memory
System Call:   shmget (original)   shmget (modified)
Result (us):   13                  131

The test in Table 5 is similar to opening another user's files. When accessing shared memory, the kernel reads access control information to check the permission. For the original shmget(), the kernel only needs to read several integers from kern_ipc_perm; for the modified shmget(), the kernel must search a file buffer for a possible match, which costs more time.

6.2 Functional Benefits

Compared with the middleware approach, our approach brings the following functional benefits:
1) Better security. First, we do not need to map a global user to a local account, which avoids the many-to-one user mapping problem. Second, authentication and authorization are fulfilled by OS kernels, which are more reliable than a middleware approach at the application layer. This feature is also supported by the analysis in [15].
2) Better performance. Our design can support lower-level resource sharing such as DSM, process migration and network file systems, which provide better performance than application-level entities. A middleware approach cannot support sharing at this level in a multiple-domain environment because it cannot manipulate the domain-dependent information in kernel entities such as processes and files. In addition, our approach enables resource access at the process and file level, which eliminates the performance overhead introduced by a middleware layer. Compared with the notable performance loss brought by a middleware entity, the small cost introduced by our modifications is acceptable.
3) Better compatibility. In middleware approaches, applications must be built on a middleware layer to achieve cross-domain resource sharing; legacy applications running on traditional OS cannot gain any benefits from such a middleware. On the contrary, our approach provides good compatibility with traditional applications, which can be migrated to CDOS with little or no modification.

7. Conclusions and Future Plans

This paper presents a set of design goals for a cross-domain operating system to address cross-domain resource sharing problems, and identifies several fundamental issues that must be solved. We describe the design details and demonstrate how these approaches are integrated into a traditional OS. Our evaluation shows that domain-independent user management can provide better security, support lower-level resource sharing, and impose only a small impact on legacy applications. Currently, we do not yet support the concept of user groups, because such a group may include users distributed in different locations. In addition, we use the simplest possible method to describe access control policies, which will make the policy files very large as the numbers of users and resources increase. Our future work is to support global user groups and access control based on a policy description language.
Acknowledgement This work is supported in part by the National Natural Science Foundation of China (Grant No. 69925205), the China Ministry of Science and Technology 863 Program (Grant No. 2002AA104310) and 973 Program (Grant No. 2003CB317000 and No. 2005CB321807), and the Chinese Academy of Sciences Hundred-Talents Project (Grant No. 20044040).
References [1] David H. Abernathy, John S. Mancino, Charls R. Pearson, Dona C. Swiger, “Survey of design goals for operating systems”, ACM SIGOPS Operating Systems Review, 1973, 7(2):29-48. [2] E. Balkovich, S. R. Lerman, R. P. Parmelee, “Computing in Higher Education: The Athena Experience”, Communications of the ACM, 1985, 28(11):1214-1224. [3] M. Blaze, “A Cryptographic File System for Unix”, Proceedings of the First ACM Conference on Computer and Communications Security, November 1993. [4] David A. Curry, Samuel D. Kimery, Kent C. De La Croix, and Jeffrey R. Schwab, “ACMAINT: An Account Creation and Maintenance System for Distributed UNIX Systems”, Proceedings of the Fourth Large Installation Systems Administrator's Conference, 1990, pp. 1-9. [5] I. Foster, C. Kesselman, J. Nick, S. Tuecke, “Grid Services for Distributed System Integration”, Computer, 2002. [6] I. Foster, C. Kesselman, G. Tsudik, S. Tuecke, “A Security Architecture for Computational Grids”, Proc. 5th ACM Conference on Computer and Communications Security Conference, 1998, pp. 83-92. [7] I. Foster, C. Kesselman, and S. Tuecke, “The Anatomy of the Grid: Enabling Scalable Virtual Organizations”, International Journal of High Performance Computing Applications, 2001, 15(3): pp. 200-222.
[8] R. J. Figueiredo, N. H. Kapadia, and J. A. B. Fortes, “The PUNCH virtual file system: Seamless access to decentralized storage services in a computational grid”, In Proceedings of the Tenth IEEE International Symposium on High Performance Distributed Computing, IEEE Computer Society Press, 2001. [9] A. S. Grimshaw, W. A. Wulf, et al., “The legion vision of a worldwide virtual computer”, Communications of the ACM, 40 (1), January 1997. [10] Hans Hedbom, Stefan Lindskog, Stefan Axelsson, and Erland Jonsson, “A Comparison of the Security of Windows NT and UNIX”, 1998, http://www.ce.chalmers.se/staff/sax/nt-vs-unix.pdf. [11] http://www.globus.org/toolkit/downloads/3.2/ [12] K. Hwang, Z. Xu, Scalable Parallel Computing: Technology, Architecture, Programming, McGraw-Hill Book Company, 1998. [13] John Kubiatowicz, David Bindel, et al., “OceanStore: An Architecture for Global-Scale Persistent Storage”, Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2000), November 2000. [14] Wei Li, Jianmin Liang, Zhiwei Xu, “VegaFS: A Prototype for File-sharing crossing Multiple Administrative Domains”, IEEE International Conference on Cluster Computing (Cluster 2003), 2003. [15] M. Lorch, D. Kafura, “Supporting Secure Ad-hoc User Collaboration in Grid Environments”, Proceedings of the 3rd Int. Workshop on Grid Computing, 2002.
[16] U. Maurer, “Modeling a public-key infrastructure”, Proceedings of the 1996 European Symposium on Research in Computer Security (ESORICS '96), Lecture Notes in Computer Science, Springer, 1996, pp. 325-350. [17] D. Mazieres, M. Kaminsky, M. F. Kaashoek, and E. Witchel, “Separating key management from file system security”, In Proceedings of the 17th ACM Symposium on Operating Systems Principles (SOSP), Kiawah Island, South Carolina, 1999. [18] James H. Morris, Mahadev Satyanarayanan, Michael H. Conner, John H. Howard, David S. H. Rosenthal, F. Donel, “Andrew: a Distributed Personal Computing Environment”, Communications of the ACM, 1986, 29(3):184-201. [19] OpenSSL, The OpenSSL project, http://www.openssl.org/. [20] W. Rosenberry, D. Kenney & G. Fisher, Understanding DCE, O’Reilly & Associates Inc, 1992. [21] Sun Microsystems Inc., System and Network Administration (Part Number 800-3805-10), 1990. [22] Sun Microsystems Inc., Solaris ONC Network Information Service Plus (NIS+), 1991. [23] V. Welch, F. Siebenlist, I. Foster, J. Bresnahan, K. Czajkowski, J. Gawor, C. Kesselman, S. Meder, L. Pearlman, S. Tuecke, “Security for Grid Services”, Twelfth International Symposium on High Performance Distributed Computing (HPDC-12), IEEE Press, 2003.