SECURITY SHIELD AS A NEW APPROACH TO SECURITY IN AUTOMATED SYSTEMS

Dmitriy Dunaev (Ph.D. student) and László Lengyel (Associate Professor, Ph.D.)
Dept. of Automation and Applied Informatics, Budapest University of Technology and Economics, Hungary
[email protected], [email protected]

ABSTRACT

It is evident that unauthorized access to information in automated systems is still a problem that poses challenges. The existing methods and means of information protection are therefore constantly being improved, and new methods are being developed, but this does not essentially change the situation. The standards required for certification of automated systems in the security area are constantly revised as well. The changes proposed in such standards (at intervals of 3-5 years) are sometimes of fundamental importance, which puts developers of automated systems in a difficult position and in fact confirms the severity of the information security problem in automated systems. In recent years the concepts of attack and vulnerability of the computing environment have come into focus. However, the criteria for detecting these events are rather uncertain; furthermore, the completeness and adequacy of the protective means used in automated systems require further examination.

KEYWORDS

Information protection, security shield, automated system, unauthorized access.

1. INTRODUCTION

Nowadays the prevailing approach to information security is based on representing the processing of information in the form of an abstract computing environment. Such an environment employs a variety of actors, denoted as subjects (e.g. users and/or processes), and a set of objects (e.g. resources and/or data sets). Constructing protection in this approach means creating a protective environment in the form of a set of restrictions and procedures (managed by a security kernel) that disallow unauthorized access (UA) and allow authorized access of subjects to objects. This environment protects automated systems (ASs) from a variety of intentional or accidental external and internal threats. In this paper we call this protective environment a security shield.

The presented approach is based on theoretical models of safety and security: ADEPT-50 by Hartson and Hsiao [1], BLM by Bell and LaPadula [2], MMS by Landwehr and McLean [3], BIM by Biba [4], the Clark-Wilson model [5], etc. These models can be used as a backbone in the development of a certain security policy and in defining a set of requirements that must be fulfilled in a specific implementation of the system. However, in practice it is extremely difficult for developers to implement these models, and therefore they can be recommended only for the analysis and assessment of the security of ASs.

Thus, in the development of an AS security shield we propose the use of special standards, which are still based on the aforementioned models.

This paper presents the concept of a security shield. In addition, a corresponding conceptual approach to AS security is introduced. The rest of this paper is organized as follows: the next section introduces the ITSEC and Common Criteria standards, analyzes them and points out their drawbacks. Furthermore, the model of offender behavior is discussed and the concept of a security shield is introduced with respect to this model. Finally, conclusions and future work are delineated.

2. CONTRIBUTION

The Information Technology Security Evaluation Criteria (ITSEC) [6] is a structured set of criteria for evaluating computer security within products and systems. Since the launch of the ITSEC in 1990, a number of European countries have agreed to recognize the validity of ITSEC evaluations. Today the ITSEC has been largely replaced by the Common Criteria, which provide similarly defined evaluation levels and implement the target-of-evaluation concept. The Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification [7]. The Common Criteria form a framework in which computer system users can specify their security functional and assurance requirements, vendors can then implement and/or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine whether they actually meet the claims. Thus we may state that the CC provide assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous and standard manner.

The CC documents formulate the requirements for a set of security functions whose implementation should ensure the overall security of an AS. All accidental and intentional threats known at the time of the creation of the documents should be considered. The completeness of the implementation of such requirements is estimated by experts, who determine the security class according to the standard but do not provide quantitative indicators. The problems and disadvantages of the CC methodology were published in the article of GCN columnist W. Jackson [8] and in the research paper of computer specialist D. A. Wheeler [9].

We believe that the approaches and methods presented in the CC possess fundamental flaws, leading to an insufficient level of security of information processes. The shortcoming lies primarily in the uncertainty of the problem statement, which makes the solution of the problem even more difficult. Although the list of well-known objects and subjects interacting with a particular system can somehow be determined, the types and number of processes and their accidental or deliberate unauthorized influence on an AS (especially in regional and global networks) still cannot be estimated. Considering the CC only, estimating the durability of protection measures (security degree) for each anticipated threat does not seem possible either.

2.1 DRAWBACKS OF COMMON CRITERIA AND MODERN SECURITY STANDARDS

We would like to further point out the lack of objective means for evaluating the security degree, or the degree of durability of protection measures, so our estimations might differ from the true values. The CC standard is missing a mechanism for creating an enclosed protective container and for calculating its strength (the durability of the overall protection). The absence of the former leads to holes in the security, and the absence of the latter leads to a significant deviation between the estimated results and the expected effectiveness of protection.

The protective environment developed according to the CC is purely fragmentary; as a consequence it is impossible to determine its boundaries and the density of the applied protective measures. In practice, the protective environment is realized by sets of functions for which quantitative indicators of the security level are not provided. These functions are defined only on the basis of experience. The sufficiency of a chosen set of functions for ensuring the protection of an AS is not proved; the criteria for evaluating their implementation and the expected performance are very vague. The analysis of terms and definitions used in most security standards (assurance requirements, correctness, adequacy of the functionality, etc.) shows their very approximate influence on the final security assessment [10].

The main drawback here is that when designing an AS the developer has no initial datasets and no computational methods by which he could state that a secure system has been designed. In other words, the processes of design and evaluation of information security in the AS are weakly linked. Due to the lack of an adequate theory and the absence of calculated numbers and ratios in the CC, there is no way to introduce measurement units and quantitatively express the degree of information security in an AS. The existing theories of security calculation are so complex and abstract that they have no practical application. According to [8], “the effort and time necessary to prepare evaluation evidence and other evaluation-related documentation is so cumbersome that by the time the work is completed, the product in evaluation is generally obsolete”.

Based on the analysis of modern security standards, we came to the conclusion that they do not always consider, or even completely ignore, the following factors:

(1) classification of data-processing and data-transmitting objects on the basis of their implementation;
(2) classification of potential threats, identical to the classification of data-processing and data-transmitting objects;
(3) possible ways (channels) of UA to information in data-processing and data-transmitting objects;
(4) differentiation of security measures in the case of deliberate and accidental UA, as they have different physical nature, impact area and point of application;
(5) a system of interrelated barriers that would enclose the target of protection and prevent an offender from bypassing the barriers;
(6) information lifetime and the time limitations for detecting and blocking UA;
(7) estimation of the time needed for an offender to bypass the barriers and break into the AS.

Based on the above we can conclude that a good security design does not guarantee the absence of holes and ways to break into the AS and access the protected information; the presence of a security certificate does not necessarily mean that security is actually present.

2.2 THE PROPOSED CONCEPTUAL APPROACH

We recognize that the solution of every problem starts with looking for ready-made solutions in similar areas. In our case such a solution is well known: to protect any valuable item, a physical and/or logical enclosed barrier is built around it, so that it is impossible for a potential offender to overcome this barrier.

Let us consider the model of the expected behavior of an offender. The offender starts deliberate actions at the “perimeter” of the system. By the “perimeter” of an AS we understand the outer physical frame of the system, the elements of which are covers, housings, external hardware connectors, management, mapping and printing tools, I/O devices, information carriers and cable connectors. A violation of security is UA to any piece of information under protection. In general it is not possible to predict the time and nature of the malicious actions of an offender [11]. Thus, it is worth considering the most hazardous possible model of offender behavior:

(1) the offender may appear at any time and anywhere at the perimeter of the AS;
(2) to achieve his goal he will choose the weakest link in the security;
(3) the qualifications and awareness of the offender will correspond to the importance of the information under protection;
(4) the offender knows the principles of the AS and the applied security designs;
(5) the offender is not necessarily an outside person, but can also act as a legitimate user of the AS;
(6) there can be multiple offenders, but their actions are not coordinated.

Information processing in an AS can be considered at the global (data-processing complexes – data-transmission systems) and local (technical means – transmission channels) levels. A fragment of an AS is represented by the model shown in Fig. 1, where the data-processing complex (DPC) is the object of automation with respect to an AS with lumped data processing. The aggregate of DPCs with transmission channels already represents a more general distributed system.

Figure 1. Model of a fragment of an AS: a data-processing complex (DPC) with transmission channels.

Data I/O, data storage, and the processing and transmission of information in an AS involve a number of channels provided by the documentation (e.g. regular I/O devices) and not provided by the documentation (e.g. electric circuits, memory devices, electromagnetic fields, etc.). If not all channels are protected, including the non-documented ones, they can be used by an offender.

In an AS the security shield is formed by the aggregate of methods and means that physically or logically wrap the information channels to be protected. We assume that the degree of completeness of channel protection determines the degree of total enclosure of the security shield around the subject of protection. In the presented approach, the strategy of protecting information against intentional UA consists of identifying the potential channels that can be used by an offender and overlapping them with protective means. Here the notion of potential channels also includes the documented channels of information access, which must likewise be protected from the offender. This approach reduces the variety of information threats to a finite set of information channels, the complete list of which can be easily identified by security professionals. It considerably reduces the level of uncertainty in the problem statement and makes the choice of subsequent decisions in the construction of guaranteed protection clearer.

The protection tactics are based on access control and UA prevention. Means of control and restriction are placed at channels where this is technically and organizationally possible, and preventive means are used where control and restriction are inapplicable. For example, logins from the keyboard can be controlled by a special program, but the communication channels of a geographically distributed system cannot always be controlled. Therefore, the channels under threat are technically divided into controlled and uncontrolled channels. Accordingly, the protective means form controlling and preventive protection shields.

3. CONCLUSIONS AND FURTHER WORK

In this paper the concept of a security shield has been presented, and a new conceptual approach to AS security has been introduced. We have shown that the drawbacks of modern industrial security standards require the development of a new approach which, on the one hand, is not as complex and abstract as the existing standards and, on the other hand, can quantitatively express the degree of information security.

To better facilitate the use of the security shield, a concrete calculation of the security degree of a single protective mean in the shield, and of the security shield as a whole, will be developed. It will involve the analysis of the principal ways of constructing the shield and the search for possible ways of breaking it. In general, if any way of bypassing the security barriers is detected, the respective protective means should be further developed. If a detected bypass is successfully blocked by another protective mean of the shield, we continue by calculating the security degree of the latter. The algorithm continues until we achieve the complete overlapping of all potential channels under threat. As a result, an enclosed information security shield will be designed. The security degree of the whole shield will be determined by the security degree of the weakest link (the channel with the lowest security degree). It is important to mention that by the security degree of a channel we understand the probability of an offender breaking into the channel within the given period of time.
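To make the channel-oriented view more tangible, the following Python sketch models channels, protective means and the weakest-link estimate described above. It is only an illustration of the concept, not an implementation from this work: the class names, the controlled/uncontrolled flags and all break-probability figures are assumptions introduced for the example.

    # Illustrative sketch: channels are overlapped by protective means, uncovered
    # channels are detected, and the shield is estimated by its weakest link.
    # All names and probability figures below are assumed, not taken from the paper.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ProtectiveMean:
        name: str
        break_probability: float  # assumed metric: probability of bypass within the given period

    @dataclass
    class Channel:
        name: str
        documented: bool          # provided by documentation (e.g. regular I/O) or not
        controlled: bool          # control/restriction technically possible or not
        protection: Optional[ProtectiveMean] = None

    def uncovered(channels: List[Channel]) -> List[Channel]:
        # Channels not yet overlapped by any protective mean.
        return [c for c in channels if c.protection is None]

    def shield_break_probability(channels: List[Channel]) -> float:
        # Weakest-link estimate: an uncovered channel means the shield is not enclosed;
        # otherwise the shield is only as strong as its easiest-to-break channel.
        if uncovered(channels):
            return 1.0
        return max(c.protection.break_probability for c in channels)

    channels = [
        Channel("keyboard login", documented=True, controlled=True,
                protection=ProtectiveMean("login monitor", 0.02)),
        Channel("communication line", documented=True, controlled=False,
                protection=ProtectiveMean("link encryption", 0.05)),
        Channel("electromagnetic emission", documented=False, controlled=False),
    ]

    # The iterative step sketched above: every detected open channel (bypass way)
    # is blocked by a further protective mean until the shield is enclosed.
    while uncovered(channels):
        gap = uncovered(channels)[0]
        gap.protection = ProtectiveMean("preventive mean for " + gap.name, 0.10)

    print(shield_break_probability(channels))  # 0.10 for these assumed figures

Under this reading, improving the shield means lowering the largest break probability among the covered channels, which matches the weakest-link rule stated above.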

Further work will also include the improvement and generalization of the security shield concept. In order to create a single security mechanism, the protective means can be combined into a single information security system with the help of centralized control and management tools. Such a secure AS should then be analyzed with respect to its composition and design principles in order to search for possible ways of breaking it. If such security-critical ways are found, they should also be overlapped by protective means, and these means are also included in the AS security shield. The approach proposed in this paper addresses the shortcomings of existing concepts and offers additional advantages that complement the existing concepts of information security in the AS.

ACKNOWLEDGMENTS

This work is connected to the scientific program of the "Development of quality-oriented and cooperative R+D+I strategy and functional model at BUTE" project. This project is supported by the New Hungary Development Plan (Project ID: TÁMOP-4.2.1/B-09/1/KMR-2010-0002).

REFERENCES

[1] Hartson H.K., Hsiao D.K. "Full protection specification in the semantic model for database protection languages". Proc. of ACM Annual Conference, Oct. 1976, Houston.
[2] Bell D.E., LaPadula L.J. "Secure Computer Systems: Mathematical Foundations". MITRE Technical Report 2547, Volume I, March 1973.
[3] Landwehr C.E., Heitmeyer C.L., McLean J. "A security model for military message systems". ACM Transactions on Computer Systems, Volume 2, Issue 3, Aug. 1984.
[4] Biba K.J. "Integrity Considerations for Secure Computer Systems". MTR-3153, The MITRE Corporation, April 1977.
[5] Clark D.D., Wilson D.R. "A Comparison of Commercial and Military Computer Security Policies". In Proceedings of the 1987 IEEE Symposium on Research in Security and Privacy (SP'87), May 1987, Oakland, CA; IEEE Press, pp. 184-193.
[6] Information Technology Security Evaluation Criteria (ITSEC): Preliminary Harmonised Criteria. Document COM(90) 314, Version 1.2. Commission of the European Communities. Retrieved 2006-06-02.
[7] Official CC/CEM versions: http://www.commoncriteriaportal.org/cc
[8] Jackson W. "Under Attack: Common Criteria has loads of critics, but is it getting a bum rap?", Government Computer News, retrieved 2007-12-14, http://gcn.com/articles/2007/08/10/under-attack.aspx
[9] Wheeler D.A. "Free-Libre / Open Source Software (FLOSS) and Software Assurance / Software Security", December 2006.
[10] Wäyrynen J., Bodén M., Boström G. "Security Engineering and eXtreme Programming: An Impossible Marriage?" Lecture Notes in Computer Science, 2004, Volume 3134/2004, pp. 152-195.
[11] Shangin V.F. "Information Security of Computer Systems and Networks" (in Russian). Moscow: Forum, 2008.