Trusted Computing and Infrastructure Commons

Antonio Lioy, Gianluca Ramunno, Davide Vernizzi
Politecnico di Torino, Dip. di Automatica e Informatica, Torino (Italy)
Abstract

The current software-based security mechanisms to protect computers and networks are often insufficient to face the increasing number of sophisticated and distributed attacks. The Trusted Computing paradigm, offering strong hardware-based protection, may be a solution to these issues. This paper describes the technology behind Trusted Computing and analyzes its benefits and current limitations. We also discuss the results achieved in this area by the OpenTC European project, an open-source implementation of a trusted platform. Finally, we outline how Trusted Computing can be used to protect Commons and Infrastructure Commons.
Introduction

Nowadays a large part of information is created directly in the digital world and shared on public networks like the Internet. In the past few years we have faced an increasing number of sophisticated and distributed attacks, showing that the protection offered by software mechanisms alone is not enough to protect computers in current scenarios. Trusted Computing (TC) aims to change this approach and increase the trustworthiness of computer systems through the use of a low-cost hardware device acting as a root of trust on which security features can be built. Apart from local virus protection, computer security is largely focused on protecting the communication channels. Trusted Computing offers the possibility to extend the protection to the communication end-points. Their integrity is a key factor in protecting the whole lifecycle of information, from creation to use. In this work we present the basic concepts of Trusted Computing and discuss its advantages and disadvantages. We also present the achievements of OpenTC, a European research project focused on the creation of an open framework for trusted computing, and we discuss how Trusted Computing can be used together with Infrastructure Commons.
Trusted Computing

The constant growth of the interconnection between computer systems has increased the need for protection from remote attacks. To address these needs the TCG, a not-for-profit group of ICT industry players, developed a set of specifications to create a computer system with enhanced security, named a trusted platform. A trusted platform is based on two key components: protected capabilities and shielded memory locations. A protected capability is a basic operation (performed with an appropriate mixture of hardware and firmware) that is vital to trust the whole TCG subsystem. In turn, capabilities rely on shielded memory locations, special regions where it is safe to store and operate on sensitive data. From the functional perspective, a trusted platform provides three important features rarely found in other systems: secure storage, integrity measurement and reporting. The integrity of the platform is
defined as a set of metrics that identify the software components (e.g. operating system, applications and their configurations) through the use of fingerprints that act as unique identifiers for each component. Considered as a whole, the integrity measures represent the configuration of the platform. A trusted platform must be able to measure its own integrity, locally store the related measurements and report these values to remote entities. In order to trust these operations, the TCG defines three so-called roots of trust, components that must be trusted because their misbehaviour might not be detected:
● the Root of Trust for Measurement (RTM), which implements an engine capable of performing the integrity measurements;
● the Root of Trust for Storage (RTS), which securely holds the integrity measures and protects data and cryptographic keys used by the trusted platform and held in external storage;
● the Root of Trust for Reporting (RTR), which reliably reports to external entities the measures held by the RTS.
The RTM can be implemented by the first software module executed when a computer system is switched on (i.e. a small portion of the BIOS) or, in latest-generation processors, in hardware. The central component of a TCG trusted platform is the Trusted Platform Module (TPM). This is a low-cost chip capable of performing cryptographic operations, securely maintaining the integrity measures and reporting them. Given its functionality, it is used to implement the RTS and the RTR, but it can also be used by the operating system and applications for cryptographic operations, although its performance is quite low.

The TPM is equipped with two special RSA keys, the Endorsement Key (EK) and the Storage Root Key (SRK). The EK is part of the RTR: it is a unique (i.e. each TPM has a different EK) and non-migratable key created by the manufacturer of the TPM that never leaves this component. Furthermore, the specification requires that a certificate be provided to guarantee that the key belongs to a genuine TPM. The SRK is part of the RTS: it is a non-migratable key that protects the other keys used for cryptographic functions¹ and stored outside the TPM. The SRK also never leaves the TPM and is used to build a key hierarchy.

The integrity measures are held in the Platform Configuration Registers (PCR). These are special registers within the TPM acting as accumulators: when a register is updated, the new value depends both on the new measure and on the old value (for TPM 1.2, PCR_new = SHA-1(PCR_old || measurement)), guaranteeing that, once a PCR is initialized, its value cannot be faked.

The action of reporting the integrity of the platform is called Remote Attestation. A remote attestation is requested by a remote entity that wants evidence about the configuration of the platform. The TPM then produces a digital signature over the values of a subset of PCRs to prove to the remote entity the integrity and authenticity of the platform configuration. For privacy reasons, the EK cannot be used to make this digital signature. Instead, to perform the remote attestation the TPM uses an Attestation Identity Key (AIK), which acts as an alias for the EK. The AIK is an RSA key created by the TPM whose private part is never released outside the chip; this guarantees that the AIK cannot be used by anyone except the TPM itself. In order to use the AIK for authenticating the attestation data (i.e. the integrity measures), it is necessary to obtain a certificate proving that the key was actually generated by a genuine TPM and is managed in a correct way. Such certificates are issued by a special certification authority called a Privacy CA (PCA). Before creating the certificate, the PCA must verify the genuineness of the TPM; this verification is done through the EK certificate. Many AIKs can be created and, to prevent the traceability of the platform operations, ideally a different AIK should be used when interacting with each different remote attester.

¹ In order to minimize attacks, the SRK is never used for any cryptographic function, but only to protect other keys.
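The accumulator behaviour of the PCRs can be made concrete with a short sketch. The following Python fragment (illustrative only: the event log contents and function names are our assumptions, not a TSS API) reproduces the TPM 1.2 extend operation and shows how a remote verifier, given the measurement log, can recompute the value that the TPM reports in a signed quote:

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """TPM 1.2 extend operation: new PCR = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr_value + measurement).digest()

# After platform reset a PCR holds 20 zero bytes.
pcr = b"\x00" * 20

# Illustrative measurement log: fingerprints of the components loaded
# during boot, in load order (BIOS, boot loader, kernel, ...).
event_log = [
    hashlib.sha1(b"BIOS code").digest(),
    hashlib.sha1(b"boot loader").digest(),
    hashlib.sha1(b"OS kernel").digest(),
]
for measurement in event_log:
    pcr = pcr_extend(pcr, measurement)

# A remote verifier that receives the same log can replay the extends
# and compare the result with the PCR value signed by the TPM. Changing
# or reordering any event yields a different final PCR, so a recorded
# value cannot be faked once the register is initialized.
expected = b"\x00" * 20
for m in event_log:
    expected = pcr_extend(expected, m)
assert expected == pcr
print("PCR value:", pcr.hex())
```

In a real attestation the final comparison is made against the PCR values covered by the AIK signature in the TPM quote.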
Using Trusted Computing it is possible to protect data via asymmetric encryption in such a way that only the platform's TPM can access them: this operation is called binding. It is however possible to migrate keys and data to another platform, with a controlled procedure, if they were created as migratable. The TPM also offers a stronger capability to protect data: sealing. When the user seals some data, he must specify an unsealing configuration. The TPM ensures that sealed data can only be accessed if the platform is in the unsealing configuration that was specified at sealing time (a sketch of these semantics is given at the end of this section).

The TPM is a passive chip, disabled at the factory; only the owner of a computer equipped with a TPM may choose to activate it. Even when activated, the TPM cannot be remotely controlled by third parties: every operation must be explicitly requested by software running locally, and the possible disclosure of local data or the authorisation to perform operations depends on the software implementation. In the TCG architecture, the owner of the platform plays a central role because the TPM requires authorisation from the owner for all the most critical operations. Furthermore, the owner can decide at any time to deactivate the TPM, hence disabling the trusted computing features. The identity of the owner largely depends on the scenario where trusted computing is applied: in a corporate environment the owner is usually the administrator of the IT department, while in a personal scenario the end-user is normally also the owner of the platform.

Run-time isolation between software modules with different security requirements can be an interesting complementary feature for a trusted platform. If the memory areas of different modules are isolated and inter-module communication can occur only under well-specified control flow policies, then when a specific module of the system is compromised (e.g. due to a bug or a virus), the other modules that are effectively isolated from it are not affected at all. Today virtualization is an emerging technology for PC-class platforms to achieve run-time isolation and hence is a perfect partner for a TPM-based trusted platform.

The current TCG specifications are essentially focused on protecting a platform against software attacks. The AMD-V [1] and Intel TXT [5] initiatives, besides providing hardware assistance for virtualization, increase the robustness against software attacks, and the latter also starts dealing with some basic hardware attacks. In order to protect platforms also from physical attacks, memory curtaining and secure input/output should be provided: memory curtaining extends memory protection so that sensitive areas are fully isolated, while secure input/output protects the communication paths (such as the buses and input/output channels) among the various components of a computer system. Intel TXT focuses only on some so-called open-box attacks, by protecting the slow buses and by guaranteeing the integrity verification of the main hardware components of the platform.
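Returning to sealing, the following minimal Python sketch illustrates its semantics. It is not the actual TPM_Seal command or a TSS API: the root secret, the key derivation and the digest layout are all illustrative assumptions. Data are released only when the current PCR values match those recorded at sealing time:

```python
import hashlib
import hmac
import os

# Stand-in for the TPM's internal root secret, which never leaves the
# chip (in a real TPM this role is played by the SRK key hierarchy).
TPM_SECRET = os.urandom(32)

def state_digest(pcrs: dict[int, bytes]) -> bytes:
    """Digest over the selected PCR values (akin to a composite hash)."""
    h = hashlib.sha1()
    for index in sorted(pcrs):
        h.update(index.to_bytes(4, "big") + pcrs[index])
    return h.digest()

def seal(data: bytes, pcrs: dict[int, bytes]) -> tuple[bytes, bytes]:
    """Encrypt data under a key derived from the TPM secret + PCR state."""
    assert len(data) <= 32, "toy example: one keystream block only"
    policy = state_digest(pcrs)
    key = hmac.new(TPM_SECRET, policy, hashlib.sha256).digest()
    blob = bytes(a ^ b for a, b in zip(data, key))
    return blob, policy

def unseal(blob: bytes, policy: bytes, pcrs: dict[int, bytes]) -> bytes:
    """Release data only if the platform is in the sealing-time state."""
    if state_digest(pcrs) != policy:
        raise PermissionError("platform not in the unsealing configuration")
    key = hmac.new(TPM_SECRET, policy, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(blob, key))

# A disk-encryption key sealed to the boot-time PCR values can be
# recovered only if the platform boots the same (measured) software.
good = {0: hashlib.sha1(b"trusted BIOS").digest()}
blob, policy = seal(b"disk encryption key", good)
assert unseal(blob, policy, good) == b"disk encryption key"
tampered = {0: hashlib.sha1(b"modified BIOS").digest()}
# unseal(blob, policy, tampered) would raise PermissionError
```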
A critical look at TC

Trusted Computing is a relatively new technology that provides several benefits but still has limitations and suffers from some drawbacks. Many voices in the Internet community have criticised Trusted Computing. Professor Anderson of the University of Cambridge claims that Trusted Computing is more useful to the IT industry than to people: Trusted Computing may strengthen Digital Rights Management (DRM) systems, giving content producers and providers much greater power to implement unfair usage policies [2]. Richard Stallman, the founder of the GNU project and president of the Free Software Foundation (FSF), also criticises Trusted Computing by asserting that it may put the existence of free operating systems and
free applications at risk, because users may no longer be able to run them [11]. While these criticisms spot some very critical possible uses of Trusted Computing, it is not possible to implement such scenarios with the current technology: the TPM is a passive component controlled by the operating system and requiring the authorisation of the owner for the most critical operations. The use of AIKs to perform remote attestation also requires proper authorisation from the user. So the owner of a TPM-equipped computer always retains control over whether to enable the TPM and which operating system to install. It is however possible that a closed-source operating system could enforce discriminatory policies by leveraging the TPM, but this is not intrinsic to the Trusted Computing concept.

A more specific criticism has been raised with respect to the use of AIKs. By using a different AIK with each remote party, the user is guaranteed that different remote parties cannot link together different remote attestations and trace these actions. However, a problem arises if the PCA that created the certificates for the AIKs colludes with the remote party and reveals the identity of the TPM (i.e. reveals the EK related to the AIK it certified). In this case the AIK can be linked to the EK and may be used to trace the different operations made by a single TPM, hence breaking the privacy of the platform (and thus the privacy of its users). To overcome this problem, version 1.2 of the TPM specifications introduces (a) the possibility to revoke and re-generate the EK, and (b) a privacy-aware protocol called Direct Anonymous Attestation (DAA) that replaces the PCA. While this provides in theory a solution to the privacy problem, regenerating an EK is still a difficult and critical operation, and the DAA protocol is not yet fully supported.

Regenerating the EK is more likely to be feasible in a closed environment (e.g. a corporate one) than in an open scenario (e.g. the Internet community): when the EK is regenerated, the old EK and its certificate are revoked and the owner of the platform must obtain a certificate for the new EK. While it may be realistic for a CA internal to the company to issue the certificates for the new EK, it is not likely that a public CA would issue such certificates. We remind the reader that the EK, with its certificate, vouches for the genuineness of the TPM when it requests the certificate for an AIK. If the regeneration of the EK happens in a controlled environment (for instance in the IT department of a company when computer systems are bought), it is possible to persuade the CA that the new EK still belongs to a genuine TPM. In other cases (as when an individual buys a computer for personal use), it might be very difficult, if not impossible, to convince the CA of the genuineness of the TPM (i.e. the CA cannot be sure that the EK does not belong to a software emulator or a compromised TPM). Another related problem is that most TPM chips are currently shipped to the market without the EK certificate. Without an original certificate issued by the manufacturer, trusting the TPM as RTR in open scenarios is virtually impossible. Moreover, the TCG has specified an additional credential to certify that a TPM is properly installed on a platform, which is needed to request the AIK certification. This certificate, named the Platform Credential, is not currently issued by any platform manufacturer.
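A tiny sketch can make the linkability issue concrete. Under the purely illustrative assumption (this is not a TSS API) that a platform keeps one attestation key per remote verifier, two verifiers comparing the keys that signed their quotes learn nothing; only the PCA that certified both AIKs against the same EK can join them:

```python
import secrets

class ToyPlatform:
    """Illustrative per-verifier AIK selection (not a real TSS API)."""

    def __init__(self) -> None:
        self.ek_id = secrets.token_bytes(20)   # stand-in for the EK
        self._aiks: dict[str, bytes] = {}      # one AIK per verifier

    def aik_for(self, verifier: str) -> bytes:
        # Fresh, unrelated key material per verifier: attestations sent
        # to different parties cannot be linked by comparing the keys.
        if verifier not in self._aiks:
            self._aiks[verifier] = secrets.token_bytes(20)
        return self._aiks[verifier]

p = ToyPlatform()
assert p.aik_for("bank.example") != p.aik_for("shop.example")
# The PCA, however, certified both AIKs against the same EK; if it
# colludes with a verifier, unlinkability is lost. This is exactly
# the weakness that DAA was designed to remove.
```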
As for DAA, it is a very complex protocol that takes a long time to execute and requires software support that does not exist yet.

Another non-trivial problem is related to computing the integrity of the platform. The TPM is capable of securely holding and reporting the platform configuration, but currently the configuration of a platform is represented by the list of the fingerprints saved in the PCRs. This poses both technical and privacy issues. On the technical side, applying a security patch or changing the version of a module changes its fingerprint (hence bringing the system to a new configuration). This means that a remote verifier should have a large database of fingerprints covering all possible software modules and every version of each of them. On the privacy side, revealing which fingerprints are saved in the PCRs means revealing which software, and which version, is running on the platform. With this knowledge, a remote entity could exploit known bugs and weak points of that software, or could decide not to allow the
platform to access its services depending on the software that it runs. This would force the user to adopt specific software, limiting his freedom. Moreover, modern systems are very complex in terms of hardware and software, and inferring the dynamic behaviour of a platform from a set of fingerprints representing its configuration is a hard task. Rather than directly checking the platform configuration (i.e. performing so-called binary attestation), it could be better to derive and certify security properties bound to a configuration and check such properties during the remote attestation (i.e. performing so-called property-based attestation) [9]. In this way, different configurations that share the same security properties can be considered equally trustworthy. Unfortunately, defining the security properties and evaluating them for a given configuration is still an open research problem.

An additional issue is that measurements are computed only when the components are loaded: without a proper software architecture for monitoring the platform resources at run-time, nothing can be said about the dynamic behaviour of the components. For example, if a component is altered by a virus at run-time (i.e. after it has been loaded and measured), this fact is not detected.

A last open issue is that the TCG only released a specification, and no conformance tests are imposed on the vendors. Therefore it may be difficult for an end user to tell whether his trusted platform is compliant with the whole specification or only with a subset of it, or whether there are undocumented functions implemented within the TPM. Early independent tests [10] show that currently no TPM is completely conformant to the TCG specifications.
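Returning to the binary versus property-based attestation distinction discussed above, the following hedged sketch (all fingerprints, module names and property labels are invented for illustration) shows why binary attestation forces the verifier to track every version of every module, while a property-based verifier checks a coarser, more stable statement:

```python
import hashlib

# Binary attestation: the verifier needs one entry per acceptable
# version of each module; every security patch adds a new fingerprint.
REFERENCE_DB = {
    hashlib.sha1(b"kernel-2.6.18").digest(): "kernel 2.6.18",
    hashlib.sha1(b"kernel-2.6.18-security-patch").digest(): "kernel 2.6.18 (patched)",
    hashlib.sha1(b"openssh-4.3").digest(): "openssh 4.3",
}

# Property-based attestation (in the spirit of [9]): many different
# fingerprints map onto the same certified security property.
PROPERTY_DB = {fp: "provides-process-isolation" for fp in REFERENCE_DB}

def verify_binary(reported: list[bytes]) -> bool:
    """Accept only if every reported fingerprint is a known-good binary."""
    return all(fp in REFERENCE_DB for fp in reported)

def verify_property(reported: list[bytes], required: str) -> bool:
    """Accept any configuration whose components certify the property."""
    return all(PROPERTY_DB.get(fp) == required for fp in reported)

reported = [hashlib.sha1(b"kernel-2.6.18-security-patch").digest()]
assert verify_binary(reported)
assert verify_property(reported, "provides-process-isolation")
```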
The OpenTC project

In order to exploit the hardware capabilities of a trusted platform, specific software is required: both the operating system and the applications must be enabled to use the TC features. OpenTC [7] is a research project co-funded by the European Commission through the 6th Framework Programme. Its main goal is to develop an open Trusted Computing framework; to enable maximum community benefit, the OpenTC consortium is committed to using and developing open-source software. The technical foundations of OpenTC are the use of the TPM and virtualization. OpenTC leverages the TPM as hardware root of trust to build a comprehensive system that implements flexible software security services for trusted platforms. Virtualization allows many virtual machines to run on a single physical machine, providing strong run-time isolation among them. In OpenTC this is used to separate applications from critical services like the secure GUI, secure storage and trusted channels (i.e. secure channels bound to the integrity of the end-points). Virtualization is also used to protect critical user applications, like the browser used for home banking, from other standard applications used for day-to-day operations. The framework uses two different virtualization engines, Xen [12] and L4 [8].

The design principles [6] of OpenTC are openness of design, implementation and validation, the use of explicit policies, the implementation of fine-grained control over the trusted platform, multilateral security, and scalability. The open specification and the use of open-source software allow easier validation of both the design and the implementation, leading to higher trustworthiness of the system. Another principle is to separate policies from mechanisms: the security policies must be explicit and enforced only in accordance with the user's consent. A relevant goal is to provide more powerful control of the trusted platform by allowing fine-grained choices over the whole platform: for example, a user could add to a virtual machine a virtual TPM that exposes only a subset of the TPM functionality. Moreover, multilateral policies should be negotiated between the system owner and the other parties involved in a transaction: for example, some temporary restrictions could be agreed and applied only for the duration of the transaction. The privacy of the involved
parties is also an important aspect to be dealt with. Finally, OpenTC focuses on scalability: software migration should be possible between platforms with equivalent policies and protection levels.
TC and Infrastructure Commons

The cost of the capital involved in information production and distribution has greatly decreased in the interconnected world, driving the development and diffusion of a new model of production: commons-based peer production [3]. Its specific and innovative characteristic is decentralized collaboration among large groups of people. Many individuals, sometimes even in the order of hundreds of thousands, cooperate to produce cultural and creative digital goods. Cultural goods, created in peer production and distributed in the digital public domain, are referred to as commons because of their intrinsic nature of being commonly available to everyone. No one owns the commons, but everyone can modify and share them. When a common is a resource critical for the creation and diffusion of other commons (e.g. the Internet for sharing information, or the radio-frequency spectrum as the physical medium to transfer data), it can be defined as an Infrastructure Common. Three distinct, successive phases can be identified in the production of commons:
● A common is created. This implies that a human meaningful expression is produced, for example in the form of a paper or a picture.
● The common is then usually placed into a map of knowledge. This is a subjective judgment that each individual performs, depending on the context where the common is used, on the credibility of the authors and, possibly but not necessarily, on the quality of the common. For instance, if an individual wants to be informed about the economic situation in a country, an article by a reputed journalist is preferable to an informal chat with some colleagues. Similarly, a news report from that country or a neighbouring one is more relevant than a movie filmed elsewhere on the continent, even if the latter has a much higher overall quality.
● Finally, the common is distributed and shared.
In classical mass-media scenarios these three steps are usually integrated: a media broadcaster produces the content, gives it credibility, and distributes it. On the contrary, the Internet allows disaggregating these three steps. The credibility and reputation of the commons' authors are therefore critical for commons-based peer production, because reputation is often used in public projects to filter out poor contributions. A project unable to defend itself from malicious or incompetent contributions fails. Currently, projects protect themselves with formal methods (i.e. by using licenses like the GNU GPL or Creative Commons), with technical constraints or with social norms.

Commons-based peer production has several requirements related to its distributed and collaborative nature: identification and reputation of commons authors and non-discriminatory access to Infrastructure Commons are examples of relevant aspects to be dealt with. Trusted Computing could be used as the root of trust for building systems that address these requirements. Such systems can be used to protect commons that are created and used online (e.g. discussions on forums or other interactive collaborative works), but also to protect commons that are created offline and shared afterwards. For online scenarios, these systems provide reliable management of reputation, strong authentication and identification of peers and integrity reporting, as well as non-repudiation policies and support for multiple identities. Moreover, TC can provide powerful control over access to Infrastructure Commons
and it can help preserve the privacy of sensitive information. For commons created offline, the TPM could be used as the root of trust for the identification of the common's author and the verification of the common's integrity.

Specifically, it is possible to picture a trusted implementation of reputation in distributed and collaborative environments. If an agent provides a malicious contribution, its reputation is affected. The problem arises when a malicious agent joins a different project with a new, clean reputation. This can be avoided if the reputation is somehow related to the agent's platform (for instance, bound to the TPM). Furthermore, trusted platforms can impose technical constraints in order to mitigate the actions of malicious agents. Contents provided by a trusted platform can not only be authenticated by means of hardware-rooted methods, but can also be related to a TC base that vouches for the integrity of the environment where they were created. Through the evidence of the agent's integrity, TC can support the enforcement of non-repudiation policies. Without integrity verification, it may not be possible to decide whether the transmission of rogue data was intentional or not. Using the integrity reporting function offered by the TPM installed on a peer's platform, the communication cannot be repudiated by claiming that a rogue process created the rogue data.

TC can also be used to enforce regulation policies for the access to critical Infrastructure Commons, thus providing protection from unauthorized access or mitigating misuses and abuses. By verifying the peers' integrity before allowing access to the Infrastructure Commons, it is possible to discriminate peers that could break security or privacy policies related to the commons (for instance by disclosing confidential data).

TC can also be used to increase the security of the operations on the commons producer's side. Indeed, the use of the TPM together with strong isolation between applications (e.g. using virtualization) protects the privacy of personal data. It is possible to guarantee that relevant data will be treated in a controlled environment (e.g. a virtual machine) according to policies accepted by the commons producers. For example, a producer providing some personal data to an Infrastructure Common can be assured, using remote attestation, that the sensitive data will not be disclosed to untrusted third parties against a publication policy. Moreover, an extensive use of virtualization makes it possible to have multiple identities. An identity is a set of software, personal data and credentials that is isolated from the rest of the system by running in a virtual machine. The agent can choose to use different identities according to each scenario's requirements. For instance, the agent may have an identity for official use (e.g. work) and a different identity for personal use. This avoids unwanted information flow from one identity to another (e.g. working data that belong to the company will not be available from the personal identity). Furthermore, thanks to the hardware protection provided by the TPM, TC secures data from being stolen or modified (e.g. by using the sealing function). This is especially useful when physical attacks occur², or in scenarios requiring multiple identities when one of the identities is compromised. With the current technology, peers may not only become the authors of the commons, but they can also act as providers.

² Note that the TCG does not provide any protection from physical attacks. Nonetheless, the TPM, acting as a crypto-device, may offer protection of the keys used to protect sensitive data. Such keys are securely managed by the TPM and are not released unless some conditions are met (e.g. the system must be in a certain state or a password is provided).
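As a hedged sketch of reputation bound to a platform rather than to a username (the registry, its fields and the use of a single long-lived attestation key are all illustrative assumptions, not part of any TCG specification), consider keying the reputation record to the hash of a TPM-resident public key: re-registering under a new name does not reset the score, because the key, not the account, identifies the agent.

```python
import hashlib

class ReputationRegistry:
    """Toy registry keying reputation to a TPM-held identity key."""

    def __init__(self) -> None:
        self._scores: dict[bytes, int] = {}

    @staticmethod
    def platform_id(aik_public: bytes) -> bytes:
        # The private part of the AIK never leaves the TPM, so an agent
        # cannot mint a fresh identity without a different platform.
        return hashlib.sha1(aik_public).digest()

    def record(self, aik_public: bytes, delta: int) -> None:
        pid = self.platform_id(aik_public)
        self._scores[pid] = self._scores.get(pid, 0) + delta

    def score(self, aik_public: bytes) -> int:
        return self._scores.get(self.platform_id(aik_public), 0)

registry = ReputationRegistry()
aik = b"...public part of a TPM attestation key..."
registry.record(aik, -10)          # malicious contribution noted
# Joining another project under a new username changes nothing:
assert registry.score(aik) == -10
```

Note the tension with the one-AIK-per-verifier privacy practice discussed earlier: a portable reputation needs a linkable identity, which is exactly the kind of trade-off that DAA-style credentials try to reconcile.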
It is possible to create a peer-to-peer distribution system that is more independent than commercially-driven ones; an example of such a system is BitTorrent [4]. In this scenario, a trusted computing base is essential to guarantee the authentication and integrity of the distributed commons. Furthermore, the ownership of information is an issue in commons-based peer production: while any peer usually has access to the commons, it is important to guarantee the rights of the original authors of the common. In order to be suitable for the digital Public Domain, hardware-software platforms using TC technologies should be able to guarantee the right balance between the privacy and the reputation of the commons producers, and between control policies for accessing the Infrastructure Commons and non-discrimination in their use. In the future, when TC platforms are able to meet the requirements for commons, independent organisations could set up procedures for evaluating and certifying such platforms.
Conclusions

TC is a technology that may be useful to the commons. It may be the trust foundation for the Infrastructure Commons in order to meet the requirements of the digital Public Domain. Moreover, it can enforce regulation policies for access to the Infrastructure Commons, as well as guarantee the privacy of the personal data of the commons producers while they interact with the Infrastructure Commons. The use of Trusted Computing together with isolation techniques makes it easier to protect the privacy of personal data and offers the possibility to have multiple identities on the Infrastructure Commons. Unfortunately Trusted Computing is a young technology and still suffers from some drawbacks, but it is improving. For example, privacy issues are being addressed both by the TCG and by independent researchers. Moreover, projects like OpenTC largely mitigate the issues regarding the decision whether an operating system can be considered trustworthy or not. We think that Trusted Computing can be very useful in designing more secure Infrastructure Commons if its limitations and critical points are carefully taken into account.
Bibliography

[1] AMD Virtualization, http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_8796_14287,00.html
[2] R. Anderson, 'Trusted Computing' Frequently Asked Questions, http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html
[3] Y. Benkler, Coase's Penguin, or, Linux and the Nature of the Firm, 112(3) Yale Law Journal, 2002.
[4] BitTorrent, http://www.bittorrent.com/
[5] Intel Trusted Execution Technology, http://www.intel.com/technology/security/
[6] D. Kuhlmann, R. Landfermann, H. V. Ramasamy, M. Schunter, G. Ramunno and D. Vernizzi, An Open Trusted Computing Architecture - Secure Virtual Machines Enabling User-Defined Policy Enforcement, IBM Research Report, 2006, http://www.opentc.net/images/otc_architecture_high_level_overview.pdf
[7] Open Trusted Computing (OpenTC), http://www.opentc.net/
[8] TU Dresden Operating Systems Group, Fiasco micro-kernel, http://os.inf.tu-dresden.de/fiasco/
[9] A.-R. Sadeghi and C. Stüble, Property-based attestation for computing platforms: caring about properties, not mechanisms, in Proceedings of the 2004 Workshop on New Security Paradigms, pp. 67-77, 2004.
[10] A.-R. Sadeghi, M. Selhorst, C. Stüble, C. Wachsmann and M. Winandy, TCG inside?: a note on TPM specification compliance, in Proceedings of the First ACM Workshop on Scalable Trusted Computing (STC '06), Alexandria, Virginia, USA, ACM, New York, NY, pp. 47-56, 2006.
[11] R. Stallman, Can You Trust Your Computer?, http://www.gnu.org/philosophy/can-you-trust.html
[12] P. Barham et al., Xen and the art of virtualization, in Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, pp. 164-177, 2003.