
Club R2GS White Paper

Security Assessment & Control in Critical Infrastructures (KRITIS) – A Review of IT Security Technology Standardization Projects

Jan deMeer, [email protected]
Delegate to DIN NIA27/AK3 (ISO/IEC JTC1 SC27/WG3); Partner of the Alliance for Cyber Security (ACS) of the German Federal Government Institute BSI; Chair of the Industrial Specification Club R2GS German Chapter (SoSo Club), c/o Fa. Smartspacelab.eu GmbH, Berlin

Abstract: Critical infrastructures are a type of 'eternal' system: they are permanently in operation and therefore have to cope with transient configurations of their system assets and resources. These transient configurations are also called continuous state transitions; the operation of the system continuously transforms one system state into another. The more frequently system state changes occur, the more vulnerable the system becomes, and hence the higher the risk. The invention of appropriate security indicators, i.e. controls over system state changes, and of appropriate countermeasures against system attacks minimizes vulnerabilities and thus the risk of being hacked. Notice that both the controls and the countermeasures are continuous activities, whereas penetration testing is a discrete activity; penetration testing is a tool by which controls and countermeasures can be identified and analyzed. Security controls require a clear definition of the different manifestations of security in a concrete eternal system, and of the abilities to stimulate and observe these security manifestations. Stimulation can be executed by penetration (testing) of a system in order to observe a security control attribute or the impact of an attack on the system, both of which are then analyzed. The standardization projects of the working groups of ISO/IEC JTC1/SC27, ETSI MTS and ETSI ISG ISI are reviewed and compared in order to achieve a harmonized security testing methodology and guideline. In the reviewed standards from ETSI and ISO/IEC, the notions of security and of testing, unlike that of control, are not (yet) fully aligned with each other. Consequently, a security testing guideline spanning the standards is required by which security controls can easily be related to testing activities. Such a guideline can take the form of a reference model dealing with configurations of system resources and assets.
Changes of these configurations can in most cases be observed, so there is a good chance of detecting vulnerabilities before they are exploited. A limited number of configurations is testable, provided the number of test suites is limited too; otherwise, statistical evaluation methods are required to assess the vulnerability of a huge number of configurations.

Keywords: Critical IT Infrastructure, Penetration, Fuzz, Conformance, Interoperability, Performance, Discrete/Continuous Testing, Functional/Non-Functional Testing, Security Architecture, Security Evaluation, Verification, Benchmarking, Security Indicators, Resiliency; Cyber, IT, and Network Information Security

22.02.14,    Seite  

1  

1. Introduction

"Relevant practice for delivering secure software" is promised by the "Software Assurance Pocket Guide Series" [1]. Practice means "security testing" to perform assessment of the security properties and behavior of software that interacts with external entities, i.e. its environment (cp. figure 1 below). [1] states the objectives of software security testing as follows:

a. To check¹ whether a software's dependable operation continues even under hostile conditions, i.e. attack-patterned input or attack-induced failures;
b. To check the software's trustworthiness in terms of its safe behavior and its lack of exploitable flaws and weaknesses;
c. To check the software's robustness (survivability) under exceptional conditions such as security-relevant anomalies, failures and errors, i.e. to minimize the extent and damage impact that may result from intentional failures of the software, and to prevent the emergence of new vulnerabilities.

The security tester adopts the role of "tester-as-an-attacker", based on misuse and abuse test cases incorporating known attack patterns but also anomalous interactions, by which invalid assumptions made by the software and its environment are checked to be without impact. Statistics in [1] highlight the severity of the problem of identifying the time when an attack will eventually occur:

a. 80% of organizations experience security incidents;
b. 90% of web sites are vulnerable to attacks;
c. 75% of attacks are performed at the application layer.

Thus no single test technique can identify the different types of software vulnerabilities that may occur during the Software Development Life Cycle (SDLC). Security testing therefore includes test cases and scenarios based on:

a. misuse or abuse of security functionalities;
b. meaningful and fuzzed security test data;
c. pass/fail criteria for each security test;
d. capture and analysis of security test results.

Fuzz testing injects invalid data generated by a random data generator, i.e. noise, into the System Under Test (SUT). The SUT is then checked for vulnerabilities revealed by the noise injection. The ETSI working group dealing with 'Methods for Testing and Specification' [2.1], as outlined in attachment 1 of the ETSI MTS – ISO/IEC JTC1/WG3 liaison statement [12] on security testing, defines security testing as an activity to provide assurance on system behavior by performing component, integration, acceptance, vulnerability and penetration (susceptibility) tests on the SUT. [2.1] provides guidance on 'Security Contexts and Principles' such as:

a. Governance, to understand the threat environment and the functioning of a formal management regime;
b. Risk, to understand general or security risks;
c. Personal, Physical, Procedural and Technical Controls,
• to maintain organizational and practitioners' competences,
• to protect physical artefacts,
• to perform management of projects, suppliers and system configurations,
• to perform trusted asset management and assurance confirmation,
• to maintain fault management,
• to validate architecture-driven implementations,
• to check for trustworthy implementations,
• to practice hygienic coding,
• to enable dependable deployment;
d. Compliance, to allow independent validation/verification and to maintain ongoing reviews.

[2.2] contains case studies in order to demonstrate security testing based on models in terms of SUT, applied tool chain, and technical requirements.

¹ Notice, the authors of [1] use at this place the term 'verify'. However, we restrict the use of the term 'verify' to the application of Formal Description Techniques (FDT) and not to the application of any testing technique; for the latter we prefer the term 'check'.


a. Use case 'banknote processing machine', which operates as part of a network and comprises components of currency processors, reconciliation stations, vault management systems, control centers and WAN-LAN interconnection firewalls;
b. Other case studies are planned and will be taken from the realm of Radio, Automotive, Spacios.

Security testing requires knowledge of the:

a. operating systems used by the test generator, monitors, controllers, and SUT;
b. implemented interfaces for testing, network access, network protocols, APIs;
c. programming and modelling languages applied to the SUT;
d. test controllers, tool integration, monitoring techniques, test environments, and remote access (VPN) to the test environment.

The security testing approach tests the SUT against the vulnerabilities derived from risk analysis. [2.2] suggests the following security test patterns for selecting appropriate test generation techniques and procedures for testing these vulnerabilities:

a. detection of vulnerability to injection attacks, to prevent control or disruption of the behavior of the Target of Evaluation (TOE); system resilience to injection attacks is essential to increase trustworthiness (confidence) in system behavior;
b. usage of unusual behavior sequences, preventing vulnerabilities to code injection attacks on system access to secured data.
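Test pattern (a) can be illustrated by probing an input-handling function with injection payloads. The payload list, the deliberately vulnerable query builder and the naive oracle below are all made-up assumptions, not taken from [2.2]:

```python
# Illustrative injection-vulnerability probe (assumed names and payloads).
INJECTION_PAYLOADS = [
    "' OR '1'='1",                  # classic SQL injection
    "'; DROP TABLE users;--",
    "<script>alert(1)</script>",
]

def build_query(user_input: str) -> str:
    """A deliberately vulnerable query builder (the flaw under test)."""
    return "SELECT * FROM users WHERE name = '%s'" % user_input

def is_sanitized(sql: str) -> bool:
    """Naive oracle: known attack fragments must not survive verbatim."""
    return "' OR '1'='1" not in sql and "DROP TABLE" not in sql

findings = [p for p in INJECTION_PAYLOADS if not is_sanitized(build_query(p))]
print("injection vulnerabilities revealed:", len(findings))
```

A real test generator would derive such payloads from the risk analysis and check the TOE's observable behavior rather than the query string, but the pattern of stimulate-then-observe is the same.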

[2.3] specifies terms and methods for the system security testing domains of features, performance and robustness:

a. verification of security functions and features;
b. load, stress and performance testing;
c. resilience, reliability and robustness testing (fuzzing).

Fuzz testing is a technique for automatically generating and passing valid and invalid message sequences to the SUT/TOE to see whether the tested system withstands them. Fuzz testing is applied to a system in operation and checks for unknown vulnerabilities; it is a way of risk-based system evaluation to be used as part of the post-development TVRA [13] activities. Robustness is the degree to which the tested system functions correctly in the presence of invalid input or stressful environment conditions. A vulnerability is understood as a weakness introduced by design, implementation or configuration mistakes that causes failures in the operative phase of the system.

Penetration Testing (PeTe) means the SUT is checked by applying injection, scanning and monitoring tools, with the tester adopting the role of 'tester-as-a-hacker'. Observability of failures is critical to security testing, in order to find the root cause of a failure.

Functional Security Testing (FST) considers the SUT/TOE from the end user's perspective and comprises interoperability and conformance testing, with positive and negative checks. In addition to benign, legitimate users, FST takes into account possible attackers attempting to consume system services without authorization or legitimation. FST is a type of black-box testing, since it does not take into account a system's or component's internal structure, just its observable functionality. FST under the Common Criteria ISO/IEC 15408 (compare with [7]) focuses, instead of on the System Under Test (SUT), on the Target of Evaluation (TOE) and its TOE Security Functional Interfaces (TSFI), which must be checked against the Security Functional Requirements (SFR).
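The black-box character of FST, with one positive and one negative check against a purely observable interface, can be sketched as follows. The toy service, the token scheme and the response strings are illustrative assumptions:

```python
# Minimal black-box FST sketch: only the observable interface is exercised.
class Service:
    """Toy SUT exposing one functional interface with an authorization check."""
    def __init__(self):
        self._tokens = {"alice-token"}          # provisioned legitimate user

    def read_record(self, token: str) -> str:
        if token not in self._tokens:
            return "DENY"                       # negative (attacker) path
        return "RECORD"                         # positive (user) path

sut = Service()
# Positive check: a legitimate user obtains the service.
assert sut.read_record("alice-token") == "RECORD"
# Negative check: an unauthorized 'attacker' must be rejected.
assert sut.read_record("forged-token") == "DENY"
print("functional security checks passed")
```

Note that the checks never inspect `Service`'s internals; they only stimulate the interface and compare observable responses, which is exactly the black-box property described above.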


2. Catalogue of Architectural and Design Principles for Secure Products, Systems, and Applications [3]

[3] aims at constraints on how to develop secure systems². Security is understood as the capability to withstand the attacks a system is facing in its intended operational environment. If attacks cannot be prevented, the system (architecture) shall allow for the detection of attackers and the limitation of the damage an attack can cause to the system.

Figure 1: Security Target, Target of Evaluation, System Under Test

Users, attackers, intruders and vendors usually share a system's operational environment; sometimes it is hard to distinguish them from each other. In most cases, however, authorized users apply to legal system interfaces, whereas attackers try to intrude into the system illegally, bypassing the interfaces. [3] suggests solving the problem by applying the notion of isolation, i.e. restructuring the system into isolated domains that can communicate only over well-defined communication channels (paths). These channels are able to enforce selected security and robustness (performance-degradation preventing) policies. Applying the isolation notion yields 3 control elements:

1. To separate domains and to define domain structures;
2. To invent inter-domain communication;
3. To select security policies to be enforced by the inter-domain communication channels.

Besides 'isolation', [3] suggests 4 more design notions to achieve a secure system architecture: 'layering', 'encapsulation', 'redundancy', and 'virtualization'. However, these 5 general-purpose architecting notions are in total not enough to design secure systems. Thus, additionally, 7 'Design Principles' for security are listed in [3]:

1. Least Privilege Principle
2. Attack Surface Minimization Principle
3. Centralized Parameter Validation Principle
4. Centralized General Security Services Principle
5. Preparing for Error and Exception Handling Principle
6. Preparing for Modular Design Principle
7. Preparing for Formal Method Design Principle
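Principle 3, centralized parameter validation, can for instance be sketched as a single choke-point validator that every externally reachable interface calls before dispatching. The schema, the operations and all names below are illustrative assumptions, not from [3]:

```python
# Sketch of the Centralized Parameter Validation Principle (illustrative).
# Checks are written, and audited, exactly once at a central choke point.
SCHEMAS = {
    "transfer": {"amount": (int, lambda v: 0 < v <= 10_000),
                 "to":     (str, lambda v: v.isalnum())},
}

def validate(operation: str, params: dict) -> dict:
    schema = SCHEMAS[operation]
    if set(params) != set(schema):
        raise ValueError("unexpected or missing parameters")
    for name, (typ, ok) in schema.items():
        if not isinstance(params[name], typ) or not ok(params[name]):
            raise ValueError(f"invalid parameter: {name}")
    return params

def transfer(params: dict) -> str:
    params = validate("transfer", params)   # central choke point
    return f"transferred {params['amount']} to {params['to']}"

print(transfer({"amount": 50, "to": "alice"}))
try:
    transfer({"amount": -1, "to": "alice"})
except ValueError as exc:
    print("rejected:", exc)
```

The design choice is that no interface performs its own ad-hoc checks; adding or tightening a rule in the schema immediately covers every caller, which shrinks the attack surface of inconsistent validation.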

Consequently, it is left open to the reader how the 7 design principles are to be applied to the 4 architecting notions. One could think of a cross-reference table describing 7*4 = 28 'architecture-security' relations (table entries) to be specified. Some of those relations would be difficult to implement, e.g. the Centralized General Security Services Principle combined with the distribution/virtualization architectural notion.

² Throughout this text we use the term 'system' to represent all three elements, 'products, systems, and applications', as it is used in [1].

Similarly, the 'formal method design principle' can make small systems more robust by verifying design properties. Unfortunately, there is no experience yet in verifying huge complex clouds.

3. Evaluation Criteria for IT Security (Common Criteria ISO/IEC 15408) [4]

The Common Criteria comprise 3 elements (and the documentation 3 parts), i.e.:

1) the General Model, which defines the general principles of IT security evaluation;
2) Functional Components, which establish base functional requirements for Targets of Evaluation (TOEs), organized in families and classes;
3) Assurance Components, which establish base assurance requirements for TOEs, organized in families and classes, plus evaluation criteria for Protection Profiles (PP) and Security Targets (ST) by means of 7 levels of assurance packages, called Evaluation Assurance Levels (EALs).

The 3 elements of an anticipated security system architecture, i.e. General Model, Functional Components and Assurance Components, are eventually assessed by 3 stakeholders: Consumer, Developer, and Evaluator. Consumers are interested in background information on guidance for the use of PPs, requirements for TOEs, and determining required EALs. Developers are interested in information on the development of security specifications for TOEs, interpreting functional requirements, and specifying functional specifications, assurance requirements and approaches for TOEs. Evaluators need information on the structure of Protection Profiles (PP) and Security Targets (ST) and on functional and assurance requirements.

Evaluation shall yield confidence in countermeasures for the owners of system assets. A countermeasure is assessed for being 'sufficient' and 'correct' and, when demonstrated to be both, will minimize risks to system assets. Sufficiency of a countermeasure with respect to a security target means that the specified threats to identified assets are countered; correctness of a countermeasure means that the evaluation of its specified Security Functional Requirements (SFR) facilitates operational exactness and comparability in such a way that the SFR meet the security objectives of the countermeasure (Target of Evaluation).

According to figure 1, the countermeasures of a security target are divided into (i) countermeasures (security objectives) against threats to the system assets, i.e. the Target of Evaluation (TOE), and (ii) countermeasures (security objectives) against threats to the system's operational environment. For group (i), correctness must be determined during evaluation; for group (ii), correctness is not determined. In order to determine the correctness of a TOE, various activities can be executed, such as:

• testing of the TOE;
• examining the architectural design of the TOE;
• examining the physical security of the development environment of the TOE.

A TOE evaluated to be correct meets the Security Assurance Requirements (SAR) specified according to ISO/IEC 15408-3, and satisfies the conformance statement between its Security Target (ST) and the user needs specified by the Protection Profile (PP).


4. Methodology for IT Security Evaluation [5]

There is a direct relationship between the ISO/IEC 15408 Common Criteria (CC) [4] and the ISO/IEC 18045 Common Evaluation Methodology (CEM) DCOR 1 in the way CC elements are related to CEM elements [5, figure 1], e.g.:

• CC Assurance Class → CEM Activity;
• CC Evaluator Action Element → CEM Action;
• CC Developer Action Element & CC Content and Presentation of Evidence Element → CEM Work Unit.
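The CC-to-CEM relationship above is a straightforward mapping, which can be captured as a lookup table; this sketch records only the three relations listed, nothing more:

```python
# Sketch: the CC-to-CEM element mapping above as a lookup table.
CC_TO_CEM = {
    "CC Assurance Class": "CEM Activity",
    "CC Evaluator Action Element": "CEM Action",
    "CC Developer Action Element": "CEM Work Unit",
    "CC Content and Presentation of Evidence Element": "CEM Work Unit",
}

def cem_element_for(cc_element: str) -> str:
    """Return the CEM element a given CC element gives rise to."""
    return CC_TO_CEM[cc_element]

print(cem_element_for("CC Evaluator Action Element"))   # CEM Action
```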

The Evaluation Process Model comprises the following 4 roles: Sponsor, Developer, Evaluator, and Evaluation Authority:

1) the Sponsor requests the evaluation and ensures that the evaluator is provided with evaluation evidence;
2) the Developer produces the TOE and, on behalf of the sponsor, provides evidence for the evaluation;
3) the Evaluator performs evaluation tasks and subtasks, receives evidence from the developer on behalf of the sponsor, and delivers evaluation assessment results to the Evaluation Authority;
4) the Evaluation Authority establishes and maintains the scheme, monitors the evaluation conducted by the evaluator, and issues certification reports based on the results provided by the evaluator.

[5] suggests 3 mutually exclusive evaluation verdicts:

1) a PASS verdict is assigned to a CEM Evaluator Completion Action (ECA) determining that the requirements for PP, ST and TOE are met: the constituent work units of the evaluation methodology have been performed, the evidence for performing these work units is coherent, and the evidence does not have any obvious inconsistencies with other evaluation evidence (notice, this does not mean that the evaluator undertakes a consistency check every time a work unit is executed);
2) a FAIL verdict is assigned to a CEM ECA determining that the requirements for PP, ST or TOE are not met, or that evidence is incoherent, or that obvious inconsistencies have been found;
3) an INCONCLUSIVE verdict is initially assigned to all CEM ECAs and remains until a PASS or FAIL verdict is assigned.

Since assurance applies to the entire TOE, all evaluation evidence pertaining to all TOE parts must be made available to the evaluator. The evaluator requires stable and formally issued versions of the evaluation evidence to be used as the basis for verdicts. The evidence documentation includes test procedures, TOE design decisions, source code, and hardware drawings.

The evaluator shall also perform configuration control of the evaluation evidence and shall protect it from alteration or loss while executing evaluation work units. Since an evaluator may have access to commercially or nationally sensitive sponsor and developer information during the course of an evaluation, additional confidentiality requirements may be imposed on the evaluator to maintain the confidentiality of the evaluation evidence. At the conclusion of an evaluation, the evaluation evidence must be disposed of by returning, archiving, or destroying it. Finally, the Observation Report (OR) and the Evaluation Technical Report (ETR) are written by the evaluator to fulfill the universal principles of repeatability and reproducibility of results; both reports have to be consistent with each other.
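The verdict rules above amount to a small state machine: every ECA starts inconclusive and is driven to PASS or FAIL. A minimal sketch, in which the class and method names are illustrative assumptions rather than CEM terminology:

```python
from enum import Enum

class Verdict(Enum):
    INCONCLUSIVE = "inconclusive"   # initial state of every ECA
    PASS = "pass"
    FAIL = "fail"

class EvaluatorCompletionAction:
    """Tracks the mutually exclusive verdict of one CEM ECA (sketch)."""
    def __init__(self, name: str):
        self.name = name
        self.verdict = Verdict.INCONCLUSIVE

    def assign(self, requirements_met: bool, evidence_coherent: bool,
               inconsistencies_found: bool) -> Verdict:
        # PASS only when requirements are met, evidence is coherent, and
        # no obvious inconsistencies were found; otherwise FAIL.
        if requirements_met and evidence_coherent and not inconsistencies_found:
            self.verdict = Verdict.PASS
        else:
            self.verdict = Verdict.FAIL
        return self.verdict

eca = EvaluatorCompletionAction("AVA_VAN.2-1")
assert eca.verdict is Verdict.INCONCLUSIVE
print(eca.assign(True, True, False))    # Verdict.PASS
```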

5. Refining Software Vulnerability Analysis [6]

Development vulnerabilities are vulnerabilities that were introduced during the development of the TOE according to ISO/IEC 15408-3; their evaluation assessment is covered by the assurance family AVA_VAN, vulnerability analysis. The assessment is expected to detect whether identified vulnerabilities could allow attackers to violate the Security Functional Requirements (SFR), or whether an attacker would be able to discover flaws. Penetration testing actions are defined by ISO/IEC 18045:2008 in the work units (1-5, 2-6, 3-6, 4-6) of the assurance family AVA_VAN, ranging from basic via enhanced-basic to moderate attack potentials.


Penetration testing demonstrates the most feasible way to test for the TOE's susceptibility by using the TOE Security Functionality Interfaces (TSFI) to stimulate the security functions of the TOE and to observe its responses. Initial conditions need to exist for the test, together with appropriate test equipment. When the results of an initial test can be extrapolated such that a given number of tests are likely to succeed, theoretical analysis should replace penetration testing.

[6] intends to add refinement to the work units of 'potential vulnerability identification from public sources' (AVA_VAN 1-2E, 2-2E, 4-2E) and of 'penetration testing' (AVA_VAN 1-3E, 2-4E, 4-4E) performed by the evaluator. [6] proposes to refer to the dictionary of Common Vulnerabilities and Exposures (CVE®) of publicly known information security vulnerabilities, in order to use common identifiers that provide a baseline for evaluating the coverage of an organization's security work bench. With CVE® there exists a publicly available, substantive, standardized enumeration of potential software vulnerabilities (weaknesses), represented in the format of the Common Weakness Enumeration (CWE™); it serves as a common language for describing SW security weaknesses, a standard measure for SW tools targeting vulnerabilities, and a baseline standard for weakness identification, mitigation and prevention efforts.

Furthermore, [6] proposes a technique called Common Attack Pattern Enumeration and Classification (CAPEC™) to specify and identify relevant attack patterns. The basic idea behind it is 'to think outside the box', i.e. to have a 'firm grasp of the attacker's perspective and the approaches used to exploit software'. These data bring considerable value to software security assessment activities through all phases of the SW Development Lifecycle (SDLC), including:

• requirements gathering from misuse and abuse cases;
• architecture and design: risk analysis w.r.t. the security architecture;
• implementation and coding;
• software testing and quality assurance, e.g. risk-based penetration testing;
• system operation: lessons learned from security incidents;
• policy and standards: generation of prescriptive organizational policies and standards.

Software weaknesses from CWE™ and attack patterns from CAPEC™ for a given TOE evaluation can be identified by two alternative approaches:

1) apply an existing structured assurance case covering the relevant weaknesses and attack patterns;
2) derive them directly from the public resources of CWE™ and CAPEC™.

The latter approach provides the evaluator with persistently versioned content on software weaknesses and attack patterns fulfilling certain properties, such as:

• CWE™ scheme element "Weakness_Abstraction" == "BASE/VARIANT"
• CWE™ scheme element "Application_Platform" == "is_defined"
• CWE™ scheme element "Detection_Methods" == "is_defined"
• CWE™ scheme element "Related_Attack_Pattern" == "is_defined"

Similarly, the CAPEC™ database can be searched for attack patterns with a minimum level of defined properties, e.g. Pattern Completeness, Pattern Abstraction, Attack Execution Flow, Technical Context, Related Weaknesses, etc. With respect to penetration testing, the evaluator should limit ad-hoc testing outside the set of relevant attack patterns to only those cases that require a follow-up of unexpected results, or that need investigation of potential vulnerabilities. The evaluator should exercise all relevant CAPEC™ attack patterns against all corresponding CWE™ weaknesses; there is one test case for each specific variation of a specific attack pattern against a specific potential weakness.
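The selection rules above amount to a filter plus a cross product: keep CWE entries whose scheme elements satisfy the listed properties, then derive one test case per (attack-pattern variation, weakness) pair. The record fields below mirror the scheme-element names, but the entries and variations are made up for illustration:

```python
# Illustrative CWE filtering and CAPEC x CWE test-case derivation.
cwe_entries = [
    {"id": "CWE-89", "Weakness_Abstraction": "BASE",
     "Application_Platform": "web", "Detection_Methods": "dynamic",
     "Related_Attack_Pattern": "CAPEC-66"},
    {"id": "CWE-1000", "Weakness_Abstraction": "CLASS",   # too abstract
     "Application_Platform": None, "Detection_Methods": None,
     "Related_Attack_Pattern": None},
]

def selectable(cwe: dict) -> bool:
    """Apply the scheme-element properties listed above."""
    return (cwe["Weakness_Abstraction"] in ("BASE", "VARIANT")
            and cwe["Application_Platform"] is not None
            and cwe["Detection_Methods"] is not None
            and cwe["Related_Attack_Pattern"] is not None)

weaknesses = [c for c in cwe_entries if selectable(c)]

# One test case per attack-pattern variation against each selected weakness.
capec_variations = {"CAPEC-66": ["blind", "error-based"]}   # made-up variations
test_cases = [(c["id"], pattern, variation)
              for c in weaknesses
              for pattern in [c["Related_Attack_Pattern"]]
              for variation in capec_variations.get(pattern, [])]
print(test_cases)
```

The real CWE and CAPEC corpora are published as versioned XML, so the same filter-and-join logic would run over parsed records rather than inline dictionaries.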

6. Detailing Software Penetration Testing [7]

With respect to [6], the assurance family 'vulnerability analysis' (AVA_VAN) covers the property of development vulnerability by 5 assessment levels, ranging from 'survey' via 'analysis', 'focused analysis' and 'methodical analysis' to 'advanced methodical analysis'.


Penetration testing is one of the evaluator actions based on the potential vulnerabilities, determining that the TOE is resistant to attacks performed by an attacker possessing "basic attack potential", "enhanced-basic attack potential", "moderate attack potential", or "high attack potential". The work units associated with penetration testing are stated as follows:

1) the evaluator shall devise penetration testing (PeTe), based on the independent search for potential vulnerabilities (AVA_VAN 1-5, 2-6, 3-6, 4-6);
2) the evaluator shall produce PeTe documentation for the tests based on the list of potential vulnerabilities, in sufficient detail to enable repeatable tests (AVA_VAN 1-6, 2-7, 3-7, 4-7);
3) the evaluator shall conduct PeTe (AVA_VAN 1-7, 2-8, 3-8, 4-8);
4) the evaluator shall record the actual results and report in the ETR the penetration testing effort, outlining the testing approach, configuration, depth and results (AVA_VAN 1-8, 2-9, 3-9, 4-9; 1-9, 2-10, 3-10, 4-10);
5) the evaluator shall examine the results of all PeTes to determine that the TOE, in its operational environment, is resistant to an attacker possessing an "Enhanced-Basic" attack potential (AVA_VAN 3-11);
6) the evaluator shall examine the results of all PeTes to determine that the TOE, in its operational environment, is resistant to an attacker possessing a "Moderate" attack potential (AVA_VAN 4-11);
7) the evaluator shall report in the ETR all exploitable and residual vulnerabilities, detailing for each:
• its source (e.g. ISO/IEC 18045 evaluation activity),
• the SFR not met,
• a description,
• whether it is exploitable in its operational environment,
• the amount of time, level of expertise, level of knowledge about the TOE, level of opportunity and equipment required to identify the vulnerability, with corresponding values using the tables of Annex B.4 of ISO/IEC 18045.

Relevant potential vulnerabilities can be identified by applying fuzz testing and using the results as indicators of weaknesses. Fuzz testing uncovers faults in software by injecting unpredictable input and then monitoring for failures; it involves repeatedly manipulating and supplying data to the target software for processing. Fuzz testing is capable of detecting previously unknown issues, since it is based on misuse or abuse of external resources, simulating faults to identify security-related errors that expose potential vulnerabilities.
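Annex B.4 of ISO/IEC 18045 rates attack potential by summing values for factors like elapsed time, expertise, and equipment. The shape of that calculation can be sketched as below; the factor values and rating thresholds are PLACEHOLDERS, not the normative table:

```python
# Sketch of an attack-potential calculation in the spirit of ISO/IEC 18045
# Annex B.4. Factor values and thresholds are placeholders, not normative.
FACTOR_VALUES = {
    "elapsed_time":          {"<=1 day": 0, "<=1 month": 4, ">6 months": 19},
    "expertise":             {"layman": 0, "proficient": 3, "expert": 6},
    "knowledge_of_toe":      {"public": 0, "restricted": 3, "sensitive": 7},
    "window_of_opportunity": {"unnecessary": 0, "easy": 1, "difficult": 10},
    "equipment":             {"standard": 0, "specialised": 4, "bespoke": 7},
}
THRESHOLDS = [(9, "basic"), (13, "enhanced-basic"), (24, "moderate")]

def rate_attack(factors: dict) -> tuple:
    """Sum the factor values and map the score to an attack-potential label."""
    score = sum(FACTOR_VALUES[f][v] for f, v in factors.items())
    for limit, label in THRESHOLDS:
        if score < limit:
            return score, label
    return score, "high"

score, label = rate_attack({
    "elapsed_time": "<=1 day", "expertise": "proficient",
    "knowledge_of_toe": "public", "window_of_opportunity": "easy",
    "equipment": "standard",
})
print(score, label)    # 4 basic
```

An evaluator would take the per-factor values from the normative tables and record them in the ETR alongside the vulnerability, as required by work unit 7) above.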

7. Testing Methods for Mitigation of Non-invasive Attack Classes against Cryptographic Modules [8]

Non-invasive attack methods need to be addressed for conformance to ISO/IEC 19790:2012; they include, but are not limited to:

• Simple Power Analysis (SPA),
• Simple Electro-Magnetic Analysis (SEMA),
• Differential Power Analysis (DPA),
• Differential Electro-Magnetic Analysis (DEMA),
• Correlation Power Analysis (CPA),
• Mutual Information Analysis (MIA),
• Timing Analysis (TA).

The goal of non-invasive attack testing is to assess whether a Cryptographic Module (CM) can provide resistance to such attacks at a desired security level. Measurements subject to certain limitations, e.g. a maximum number of waveforms or a maximum elapsed time, are collected and analyzed in order to determine the extent of Critical Security Parameter (CSP) information leakage; test limitations and leakage thresholds thus constitute the test criteria. The evaluator (tester) collects measurement data from the Implementation Under Test (IUT) and applies a suite of statistical tests to the collected data. Core tests refer to testing a single security function with a single CSP class; a CSP class includes crypto keys, biometric data, or a PIN. If a security function deals with more than one CSP class, leakage analysis is performed for each security function and for every applicable CSP class. The test method requires repeating core tests with different CSP classes until a first test fails or all CSP classes pass.
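One widely used statistical core test of this kind, not prescribed by [8] but common in side-channel leakage assessment, is Welch's t-test comparing two trace populations; a leakage score beyond a threshold fails the core test. The simulated traces and the threshold value below are illustrative assumptions:

```python
import math
import random
import statistics

def welch_t(a, b) -> float:
    """Welch's t statistic between two measurement populations."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return ((statistics.mean(a) - statistics.mean(b))
            / math.sqrt(va / len(a) + vb / len(b)))

def core_test(fixed, rand, threshold: float = 4.5) -> bool:
    """Pass the core test iff |t| stays below the leakage threshold."""
    return abs(welch_t(fixed, rand)) < threshold

rng = random.Random(1)
# Simulated power traces: identical distributions, i.e. no detectable leakage
# of the CSP into the measured side channel.
fixed = [rng.gauss(1.0, 0.1) for _ in range(1000)]
rand  = [rng.gauss(1.0, 0.1) for _ in range(1000)]
print("core test passed:", core_test(fixed, rand))
```

A real assessment compares traces recorded while the IUT processes a fixed versus a varying CSP; a large |t| indicates that the measurements depend on the CSP, i.e. information leakage.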

8. Cryptographic Algorithms and Security Mechanisms Conformance Testing [9]

From a conformance testing perspective, standard specifications address approved security functions. To cryptographic algorithms that do not introduce randomness, the following tests shall be applied: known-answer tests, the Monte Carlo test, and multiblock message tests. To crypto algorithms that do introduce randomness, the test of independent verification shall be applied. Conformance testing provides tests to determine the correctness of the implementation of the Algorithm Under Test (AUT), but also detects implementation errors, including insufficient allocation of space, improper error handling, and incorrect behaviour of the AUT implementation. The tests are designed to detect accidental implementation errors, not intentional attempts. Security conformance testing is not an evaluation or endorsement of overall product security; since it utilizes statistical sampling, security conformance testing of a device does not imply 100% correctness.
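A known-answer test compares an implementation's output for fixed inputs against precomputed reference values. The sketch below exercises a hash function with the well-known SHA-256 test vectors ("abc" is the classic FIPS 180 example); the harness function names are illustrative, and a real conformance run would use the full vector sets from the test specification:

```python
import hashlib

# Known-answer test: fixed inputs with precomputed reference digests.
KNOWN_ANSWERS = {
    b"": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    b"abc": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
}

def known_answer_test(aut) -> bool:
    """Return True iff the Algorithm Under Test reproduces every answer."""
    return all(aut(msg) == digest for msg, digest in KNOWN_ANSWERS.items())

def sha256_impl(msg: bytes) -> str:
    """The implementation being checked for conformance."""
    return hashlib.sha256(msg).hexdigest()

print("AUT conforms:", known_answer_test(sha256_impl))   # AUT conforms: True
```

Because the vectors are deterministic, a known-answer test only applies to algorithms that introduce no randomness, as stated above; randomized mechanisms need the independent-verification approach instead.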

9. Vulnerability Disclosure [10] & Vulnerability Handling Process [11]

A vulnerability is a weakness of software, hardware, or an online service that can be exploited. The exploitation of a vulnerability results in a disruption of the confidentiality, integrity or availability of the ICT system or its assets. Vulnerabilities can be caused by design or programming flaws, poor administrative processes, or lack of user awareness or education. Vulnerability disclosure is a process through which actions such as reporting, coordination, and publishing information on vulnerability resolutions are carried out and hence risks are reduced. The processes from [10] and [11] can be mapped to each other: whereas [10] provides guidelines to vendors on how to include in their business processes information on potential vulnerabilities received from external individuals or organizations, [11] provides guidelines on how to process and resolve potential vulnerability information.

The 5 major stakeholders of the vulnerability disclosure process are the following:

1) Users (also referred to as consumers, customers, end-users) operate SW or HW products directly, or make use of online services. Users need information about vulnerability remedies in order to make appropriate risk decisions.
2) Vendors (also referred to as developers, maintainers, distributors, suppliers) develop or maintain products or services and are responsible for their quality. Thus vendors need to learn about vulnerabilities, to develop resolutions and mitigations, and to disseminate information to users.
3) Intermediate vendors buy subsystems from other vendors in order to supply a combination of systems and services to users, e.g. a mobile phone combined with a service contract. They learn about vulnerabilities, e.g. as part of quality control of incoming goods, and report to their vendors. However, intermediate vendors may not be in a position to simply wait for a solution from their own vendors in order to remove the vulnerabilities.
4) Finders (also referred to as users, vendors, researchers) identify potential vulnerabilities of a product or a service and attempt to inform a vendor or coordinator about them.
5) Coordinators (e.g. Computer Security Incident Response Teams, CSIRTs) cooperate with other coordinators to obtain help with domain expertise, languages, time zones and cultural barriers, to share effort, or to provide vulnerability coordination services on an operational basis, including:
a. helping finders to identify and contact vendors,
b. coordinating vulnerabilities that affect multiple vendors,
c. performing technical analysis of vulnerability reports,
d. publishing advisories.

A vendor's vulnerability disclosure process comprises the following steps:

1) the vendor receives a vulnerability report from a finder;
2) the vendor investigates the report by reproducing the environment and the reported behaviour;
3) the vendor develops a resolution for the reported vulnerabilities, i.e. applies remediation or mitigation techniques, with positive tests and negative (regression) tests to provide assurance that functionality is not disrupted;
4) the vendor deploys remedies and documentation, ensuring that the remediation does not introduce new vulnerabilities;

9  

                       

                     

 

5) Vendor collects feed-back from users and updates remedy and mitigation information, e.g. regression issues or side effects. The full Vulnerability Disclosure Process based on the stakeholders above is outlined in the figure 3 “Vulnerability Information Exchange” of the Standard ‘ISO/IEC FDIS 29147’ [10]. Obvious to say that sensitive vulnerability information is to be communicated confidentially, since it can be used to attack vulnerable products or services. In order to prove remedy activities as being authentic communicated messages must provide integrity. Vulnerability Information is basically published by an advisory, which describes the vulnerability, focusing on remedies and mitigations to be taken, but also includes information about affected systems, threats, impacts, and related references. Various factors, i.e. target population, exposure of target, value of target to attacker, cost of exploit development will influence an attacker’s decision to exploit a vulnerability or not. To predict an attempt whether vulnerability will or hast been attacked is fraught with uncertainty. A disclosure policy with respect to [10] should state the intentions of the vendor, its responsibilities and those expected from other stakeholders in order to enable easy reporting of product vulnerabilities to the vendor. Hence a disclosure policy should include information about: a) How the vendor would like to be contacted for receiving vulnerability information; b) How information exchange is protected and how these capabilities are to be configured with a finder prior to communication; c) How is the agreed method of communication, including acknowledgement of receipt, status updates etc. with the finder; d) How can information about vulnerabilities be shared and risk for users be reduced efficiently to maintain open and cooperative dialogue between vendor and finder; e) How vendors will deal with security incidents or other security related question, e.g. 
in the case when a vulnerability affects multiple vendors, it is useful to know whether the finder has reported the vulnerability to the other affected vendors;
f) what the means are for tracking received information about possible vulnerabilities, and how that method is communicated to finders.

A vulnerability handling policy according to [11] defines and clarifies the intentions of the vendor when investigating and remediating vulnerabilities. The policy has an internal-only part and a public part. The internal-only part is intended for the vendor's staff and defines who is responsible at each stage of the vulnerability handling process and how information on potential vulnerabilities is handled, including the following items:
a) basics, principles, and responsibilities of handling Vulnerabilities in Products or Services (VPS);
b) a list of departments and roles for handling VPS;
c) safeguards to prevent premature disclosure of VPS before they are fixed.

The public part of a vulnerability handling policy is addressed to internal and external stakeholders, including finders and users who wish to report potential vulnerabilities. It tells them how the vendor is willing to interact with them when vulnerabilities are found in the vendor's product or service. The vendor's Security Incident Response Teams (SIRTs) for computers or products act as the single point of contact for all outside stakeholders and are thus placed centrally within the vendor organization. The SIRT responsibilities are as follows:
a) communication with external finders or coordinators, in order to understand timeliness and the finders' different agendas on vulnerabilities;
b) communication with coordinators or other vendors, i.e. to make arrangements for sharing vulnerability information;
c) timing of vulnerability disclosure, in order to prepare the advisory with the assistance of the product business division;
d) public vulnerability monitoring of known public sources or discussions, e.g. open-source forums or databases, that affect the vendor's products or services.
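The five-step vendor process above can be sketched as a small state machine that tracks a report from receipt to post-deployment monitoring. The class, field, and product names below are illustrative assumptions for this sketch, not definitions taken from [10] or [11].

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    """Stages of the vendor-side handling process (steps 1-5 above)."""
    RECEIVED = auto()      # 1) report received from a finder
    INVESTIGATED = auto()  # 2) environment and reported behaviour reproduced
    RESOLVED = auto()      # 3) remediation developed and regression-tested
    DEPLOYED = auto()      # 4) remedy and documentation released to users
    MONITORED = auto()     # 5) user feedback collected, advisory updated


@dataclass
class VulnerabilityReport:
    """A finder's report as it moves through the vendor's process."""
    product: str
    description: str
    finder: str
    stage: Stage = Stage.RECEIVED
    history: list = field(default_factory=list)

    def advance(self) -> Stage:
        """Move the report to the next stage; refuse to advance past the end."""
        order = list(Stage)  # members in definition order
        idx = order.index(self.stage)
        if idx == len(order) - 1:
            raise ValueError("handling process already complete")
        self.stage = order[idx + 1]
        self.history.append(self.stage.name)
        return self.stage


report = VulnerabilityReport("ExampleProduct 1.2", "buffer overflow in parser",
                             "external finder")
while report.stage is not Stage.MONITORED:
    report.advance()
```

Each `advance()` call corresponds to completing one numbered step, and `history` records the traversed stages, which a SIRT could use when preparing status updates for the finder.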

10. EAL-based Security Control in Critical IT & Non-IT Infrastructures (KRITIS)

In order to develop a cost-effective approach to improving system security, one first needs tools and methodologies to reliably assess the current security level of a critical infrastructure (KRITIS). Providers of KRITIS therefore require precise security metrics, indicators, testing approaches, evaluation criteria for IT security, and vulnerability analysis, disclosure, and handling processes. Improving the level of security in an organization first requires that the specific needs of that organization be considered by introducing security measures throughout the full software development life cycle (SDLC). Next, system complexity must be safely reduced by applying design principles such as isolation, layering (modularity), encapsulation, redundancy, virtualization, etc.

In order to demonstrate the vulnerability of an IT system, it is combined with some mission-critical system of a city's or organization's infrastructure. The latter could be an energy or water distribution plant, or even the city's transportation system. We call the coupling of a decision-making support IT system with such a plant, which must be enforced to operate stably with minimized vulnerabilities and without impact from attackers, an eternal system. It is a combination of at least two systems based on very different technologies, e.g. IT and water or energy distribution, that must run forever: the services of a water or energy distribution system cannot be cut off for any reason, not even due to malfunction, attack, or overload. Consequently, such an eternal system must have capabilities, all effective during system operation, that provide the following security-critical activities:
1. security indicator measurement and metering, in order to read in critical security system-state parameters for evaluation and decision-making purposes;
2. security fuzz/penetration testing, in order to check for possible vulnerabilities and to identify them for risk minimization;
3. resiliency in case of trouble, i.e. misbehaviour;
4. self-healing functionality in case of continuous malfunction, of destruction after an undiscovered attack, or of stability problems;
5. resource isolation and replacement functionality, i.e. redistribution of resources and reconfiguration of components, to amputate intruded or damaged system parts.

Whereas activities 1 and 2 are devoted to the assessment of the IT security of eternal systems, activities 3, 4, and 5 are devoted to its improvement. However, the resiliency, self-healing, and resource-isolation capabilities first require the indicator measurement, metering, and testing capabilities, since they receive information on the critical system state as metered values or test results. In an organization these activities are executed by the related stakeholders. The finder dealing with testing uses the fuzz/penetration test generator to stimulate certain security functionalities and to observe the test response. In case of analyzed vulnerabilities, the testing finder informs the vendor and the other coordinators. When a coordinator acts as the C-SIRT, he provides reference data as input to the SOC manager of the decision-making SOC. The SOC manager compares the reference data from the testing finder with the measured indicators of the system configuration state, taken by the measurement finder, and decides on possible countermeasure commands to improve vulnerable security functionality of the system. Finally, the system state changes accordingly and, via the indicators, is fed back to the SOC, where a new security improvement cycle begins. These dependencies are outlined in figure 2, 'KRITIS Process Control Model'.
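The improvement cycle described above (measure indicators, compare against test-derived reference data, decide countermeasures, feed the result back) can be sketched as a minimal control loop. All function names, indicator names, and threshold values below are hypothetical illustrations, not part of any cited standard or of the KRITIS model itself.

```python
def measure_indicators(system_state):
    """Measurement finder: read security indicators from the running system."""
    return dict(system_state)

def reference_data():
    """Testing finder / C-SIRT: reference values derived from fuzz/penetration
    tests, passed to the SOC manager (thresholds are invented for this sketch)."""
    return {"failed_logins": 10, "open_ports": 5}

def decide_countermeasures(indicators, reference):
    """SOC manager: compare measured indicators against the test-derived
    references and emit one countermeasure command per exceeded indicator."""
    return [f"mitigate:{name}"
            for name, value in indicators.items()
            if value > reference.get(name, float("inf"))]

def apply_countermeasures(system_state, commands):
    """Feed countermeasures back into the system state (here: reset the
    offending indicator, standing in for a real reconfiguration)."""
    for cmd in commands:
        name = cmd.split(":", 1)[1]
        system_state[name] = 0
    return system_state

# One improvement cycle: measure -> compare -> decide -> act -> feed back.
state = {"failed_logins": 42, "open_ports": 3}
commands = decide_countermeasures(measure_indicators(state), reference_data())
state = apply_countermeasures(state, commands)
```

After the cycle, the changed state would be metered again on the next pass, closing the loop shown in figure 2.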

Figure 2: KRITIS Process Control Model

11. List of Technical Abbreviations used in the document

API     Application Programming Interface
AUT     Algorithm under Test
CAPEC   Common Attack Pattern Enumeration and Classification
CC      Common Criteria
CEM     Common Evaluation Methodology
CM      Common Criteria Crypto Module
CVE     Common Vulnerabilities and Exposures
CPA     Correlation Power Analysis
C-SIRT  Computer SIRT
CSP     Critical Security Parameter
CWE     Common Weakness Enumeration
DEMA    Differential Electro-Magnetic Analysis
EAL     Common Criteria Evaluation Assurance Level
ECA     CEM Evaluator Completion Action
ETR     Evaluation Technical Report
FST     Functional Security Testing
IUT     Implementation under Test
MIA     Mutual Information Analysis
OR      Observation Report
PeTe    Penetration Testing
PP      Common Criteria Protection Profile
SAR     Common Criteria Security Assurance Requirement
SDLC    Software Development Life Cycle
SEMA    Simple Electro-Magnetic Analysis
SFR     Common Criteria Security Functional Requirement
SIRT    Security Incident Response Team
SOC     Security Operations Center
SPA     Simple Power Analysis
ST      Common Criteria Security Target
SUT     System under Test
TA      Timing Analysis
TOE     Common Criteria Target of Evaluation
TVRA    ETSI Threat, Vulnerability and Risk Analysis method
TSFI    Common Criteria TOE Security Functional Interface
VAN     Common Criteria Vulnerability Analysis
VPN     Virtual Private Network
VPS     Vulnerabilities in Products or Services


Bibliography:

[1] 'Software Security Testing', Software Assurance Pocket Guide Series: Development, Volume III, Version 0.7, May 2010
[2.1] Attachment 1 to SC27 N12821 (NA043-01-27-03AK_N574): DEG 201 581 v0.0.3 (2013-05), "Methods for Testing and Specification (MTS); Security Design Guide Enabling Test and Assurance (V&V)"
[2.2] Attachment 2 to SC27 N12821 (NA043-01-27-03AK_N574): DTS 201 582 v0.0.1 (2013-05), "Methods for Testing and Specification (MTS); Security Testing Case Study Experiences"
[2.3] Attachment 3 to SC27 N12821 (NA043-01-27-03AK_N574): TS xxx xxx v0.0.3 (2013-05), "Methods for Testing and Specification (MTS); Security Testing; Security Testing Terminology and Concepts"
[3] ISO/IEC JTC1 N11607: SC27 NWI Proposal, "TR on Catalogue of Architectural and Design Principles for Secure Products, Systems, and Applications", 2013-06-17 (NA043-01-27-03AK_N555)
[4] ISO/IEC JTC1/SC27 N12761: ISO/IEC 15408-1:2009/DCOR 1, Draft Technical Corrigendum (Attachment 2 to SC27 N12219), IT ST Evaluation Criteria for IT Security, Part 1: Introduction and General Model (NA043-01-27-03AK_N552)
[5] ISO/IEC JTC1/SC27 N12762: Draft Technical Corrigendum ISO/IEC 18045:2011/DCOR 1, IT ST Methodology for IT Security Evaluation (NA043-01-27-03AK_N553)
[6] ISO/IEC JTC1/SC27 N12476: ISO/IEC 1st WD 20004, IT ST Refining Software Vulnerability Analysis under ISO/IEC 15408 and ISO/IEC 18045 (NA043-01-27-03AK_N596)
[7] ISO/IEC JTC1/SC27/WG3 N1006: ISO/IEC 2nd WD 30127, IT ST Detailing Software Penetration Testing under ISO/IEC 15408 and ISO/IEC 18045 Vulnerability Analysis (NA043-01-27-03AK_N586)
[8] ISO/IEC JTC1/SC27 N12485: ISO/IEC 4th WD 17825, IT ST Testing Methods for the Mitigation of Non-invasive Attack Classes against Cryptographic Modules (NA043-01-27-03AK_N584)
[9] ISO/IEC JTC1/SC27 N12487: ISO/IEC 2nd WD 18367, IT ST Cryptographic Algorithms and Security Mechanisms Conformance Testing (NA043-01-27-03AK_N585)
[10] ISO/IEC JTC1/SC27/WG3 N312503: ISO/IEC FDIS 29147, IT ST Vulnerability Disclosure (NA043-01-27-03AK_N571)
[11] ISO/IEC JTC1/SC27/WG3 N312501: Final Text for Publication, ISO/IEC 30111, IT ST Vulnerability Handling Process (NA043-01-27-03AK_N583)
[12] ISO/IEC JTC1/SC27 N12821: Liaison Statement received from ETSI MTS to ISO/IEC JTC1/SC27/WG3 on Security Testing, 2013-07-19 (NA043-01-27-03AK_N574)
[13] ETSI TS 102 165-1 v4.2.1 (2006-12): TISPAN; Methods and Protocols; Part 1: Method and Proforma for Threat, Risk, Vulnerability Analysis (TVRA)
[14.1] ETSI GS ISI 001-1 v1.1.1 (2013-04): ISI; Indicators (INC); Part 1: A Full Set of Operational Indicators for Organizations to Use to Benchmark Their Security Posture
[14.2] ETSI GS ISI 001-2 v1.1.1 (2013-04): ISI; Indicators (INC); Part 2: Guide to Select Operational Indicators Based on the Full Set Given in Part 1
[14.3] ETSI GS ISI 003 v0.2.1 (2013-02, draft): ISI; A Set of Key Performance Security Indicators (KPSI) for Security Event Detection Maturity Evaluation
[14.4] ETSI GS ISI 004 v0.0.2 (2013-04, draft): ISI; Guidelines for Event Detection Implementation
[14.5] ETSI GS ISI 005 v0.0.2 (2013-04, draft): ISI; Guidelines for Testing and Detection Capabilities
[.] ENISA Project on Resiliency