Towards maturity of information security maturity criteria: six lessons learned from software maturity criteria
Mikko Siponen Department of Information Processing Science, University of Oulu, Oulu, Finland
Keywords Information security, Computer software, Standards
Abstract Traditionally, information security management standards listing generic means of protection have received a lot of attention in the field of information security management. In the background a few information security management-oriented maturity criteria have been laid down. These criteria can be regarded as the latest promising innovations on the information security checklist-standard family tree. Whereas information security maturity criteria have so far received inadequate attention in information security circles, software maturity endeavours have been the focus of constructive debate in software engineering circles. Aims to analyze what the alternative maturity criteria for developing secure information systems (IS) and software can learn from these debates on software engineering maturity criteria. First, advances a framework synthesized from the information systems (IS) and software engineering literatures, including six lessons that information security maturity criteria can learn from. Second, pores over the existing information security maturity criteria in the light of this framework. Third, presents, on the basis of the results of this analysis, implications for practice and research.
Information Management & Computer Security 10/5 [2002] 210-224, © MCB UP Limited [ISSN 0968-5227] [DOI 10.1108/09685220210446560]
Introduction
A few studies suggest that the alternative methods for developing and managing secure IS are influenced by the IS/software development methods of previous generations (Baskerville, 1988, 1992; Dhillon, 1997; Dhillon and Backhouse, 2001; Siponen, 2001; Siponen and Baskerville, 2001). It is interesting that perhaps the oldest approach, namely checklist-standard-based securing of software/IS (Baskerville, 1988, 1992; Dhillon, 1997), has continued to exist. Even though checklists are not a hot topic in the contemporary information security literature, their cognate method – security management standards (see Baskerville, 1992; Dhillon, 1997; Dhillon and Backhouse, 2001; Siponen, 2001; Siponen and Baskerville, 2001) – has received increasing attention from both information security researchers and practitioners (Eloff and Solms, 2000a, b; Fitzgerald, 1995; Hardy, 1995; Hopkinson, 2001; Janczewski, 2000; Solms, 1998, 1999). Information security management standards can be regarded as a legacy of checklists. Both (checklists and management standards) offer a ready-made generic catalogue of protection means (Baskerville and Siponen, 2002), fabricated by the experiences of practitioners rather than by the results of academic research. Recently, following ideas and developments in the field of software engineering (e.g. Pfleeger et al., 1994), a few information security management-oriented maturity standards have seen the light of day. Recognizing this analogy with the field of software development, we see that information security maturity standards are the latest descendant of the information security checklist-management standard genealogy. Indeed, the ``major'' difference between information security maturity criteria and
checklist-management standards is the concept of ``maturity levels''. However, any information security checklist or management standard can be turned into a maturity criterion simply by dividing the checklist or management standard into maturity levels. Of all the standards targeted at management level in the field of information security (see Eloff and Solms, 2000a), BS 7799 has received the greatest interest, at least measured in terms of the sheer number of conference and journal articles. Using the same gauge, it is strange that the existing information security management-oriented maturity standards, such as SSE-CMM (1998a, b), adaptive software security metrics (Voas et al., 1996), the information security program maturity grid (Stacey, 1996), software security metrics (Murine and Carpenter, 1984) and an approach by Krzanik and Similä (1994), have not received similar attention. This is intriguing, given that the maturity ventures are currently the latest stage in the evolution of the checklist-management standard concept. And yet, the promise of information security maturity criteria is very attractive. If you want to do business with, or otherwise co-operate with, other organizations, you need to ensure that those organizations do not constitute the weakest link in your systems. You need to know that the parties with whom you do business have a certain level of security in their systems. This – the establishment of an objective maturity criterion that shows how secure a system is – is the key promise of information security maturity criteria. In the field of software engineering, the various maturity efforts have received a lot of positive attention (Herbsleb et al., 1997; Iivari et al., 2001; Paulk et al., 1993; Shere and Versel, 1994), as well as harsh critiques (Bollinger and McGowan, 1991; Pfleeger, 1999;
Rifkin, 2001; Voas, 1999). The existing critiques of software engineering maturity models provide good lessons on the problems of maturity endeavours: why should we not learn from our fellow scholars and practitioners in the fields of software engineering and IS, and thus avoid repeating the same mistakes? The aim of this paper is therefore to scrutinize the alternative information security maturity approaches through these lenses. By unveiling the weaknesses of existing information security maturity criteria, this paper has particular relevance for people involved in developing such criteria. In addition, this study also has relevance for ordinary security practitioners who apply such criteria. For such practitioners, this study reveals some crucial weaknesses of existing information security maturity criteria. We hope that, by being aware of these potential pitfalls, practitioners may be better prepared to deal with possible problems originating from them. A preliminary outline of this study appeared in the Proceedings of the 17th International Conference on Information Security, Cairo, Egypt, May 6-8, 2002. The rest of the paper is composed as follows. The second section provides an overview of the alternative maturity approaches, points out their links to related information security management standards and presents the framework for this study. The third section analyses the alternative information security management-oriented maturity standards. The fourth section discusses the findings of the paper, while also proposing future research directions and requirements that maturity standards should meet. The fifth section concludes by summarizing the key contributions of the paper.
Research design, an overview of the alternative maturity approaches and the framework for analysis

Research design
The overall research approach taken in this paper is that of conceptual analysis. The hermeneutic circle is also utilized in this paper. The hermeneutic circle is an interpretive (see Klein and Myers, 1999, 2001; Walsham, 1996) and a conceptual-analytical research method. It is commonly used by historians, philosophers and theologians to reveal in a document something that is not explicitly present in it (Mautner, 1996, p. 188). Because this study is only about discovering the assumptions of different information
security management approaches that their authors have not explicitly stated, the hermeneutic circle is a natural research methodological choice. In order to ``validate’’ our findings, i.e. to show how we came to a particular conclusion, we have included relevant citations from the original material.
Research criteria and an overview of the alternative IS security maturity approaches
In selecting the information security maturity criteria to be analysed, the following criteria have been adopted. First, the maturity criterion must be a ready package, i.e. it must offer practical prescriptions for evaluating maturity. Hence, possible meta-analyses and pure research agenda papers on information security maturity criteria would not meet this criterion and are therefore outside the scope of this paper. Second, a criterion to be included in this analysis needs to be an information security management-oriented maturity criterion, not a computer security one[1]. Five information security maturity approaches meet the first criterion: adaptive software security metrics (Voas et al., 1996), the common criterion (e.g. Caplan and Sanders, 1999; Chokhani, 1992), the information security program maturity grid (Stacey, 1996), software security metrics (Murine and Carpenter, 1984) and the systems security engineering-capability maturity model (SSE-CMM, 1998a, b). An approach labelled as ``adding security concerns into maturity models'' by Krzanik and Similä (1994) does not meet these criteria. This approach is particularly influenced by the Bootstrap SW maturity criterion (Kuvaja et al., 1994). It differs from the other approaches in the sense that it is more an abstract issue paper than a concrete method to be used as such. Hence, it does not meet the first selection criterion, and we are obliged to leave it out of the scope of the present study. Only three maturity approaches seem to meet the second selection criterion: the information security program maturity grid (Stacey, 1996), software security metrics (Murine and Carpenter, 1984) and the systems security engineering-capability maturity model (SSE-CMM, 1998a, b). The approaches under the banner of ``evaluation standards'', the common criterion being perhaps the most notable (Abrams and Podell, 1995; Caplan and Sanders, 1999; Chokhani, 1992), do not meet the second selection criterion. These
evaluation standards are focused on technical aspects, computer systems and/or the very end of the development process or software products (see Overbeek, 1995), with the result that they cannot be regarded as information security management-oriented maturity endeavours (Baskerville, 1992). For these reasons, such technical or computer-oriented standards are omitted from the present analysis. For the same reason, an assessment approach called adaptive software security metrics, proposed by Voas et al. (1996), was seen as outside the scope of this paper. The adaptive software security metrics approach is targeted at the very end of the development process, at the coding level, for fixing bugs. Such an approach is more relevant to assessing computer systems – or software in particular – than ISs that encompass social-organizational dimensions. Table I summarizes the information security management-oriented maturity approaches. These approaches (Table I) are considered next. SSE-CMM is a cognate of the capability maturity model (CMM) and ISO SPICE, both of which (CMM, SPICE) are used to determine and improve the maturity of software processes with the help of five maturity levels (see Herbsleb et al., 1997; Paulk et al., 1993; Shere and Versel, 1994). The first version of SSE-CMM appeared in 1994 and the current, second version in 1998 (SSE-CMM, 1998a, b), which is the version considered in this analysis. SSE-CMM has received attention only among North American scholars and practitioners (Ferraiolo and Sachs, 1996; Hefner, 1997; Hopkinson, 2001; SSE-CMM, 1998a, b). SSE-CMM, like CMM, puts forward five maturity stages, where the first stage denotes the lowest level of maturity and the fifth stage the highest. Each stage consists of a fixed number of security processes (a sequence of steps performed to achieve a certain objective), i.e. security activities which the organization should
meet. To determine the maturity level of an organization, these security processes are trawled through in the form of a questionnaire consisting of ``yes/no/don't know'' questions (e.g. ``order risks by priority'' – ``yes/no/don't know'') within 22 process areas (such as PA03: assess security risk; PA12: ensure quality). In addition, SSE-CMM includes the concept of exploratory (free-form) questions to resolve possible inconsistencies and unsupported answers (no evidence) arising from the questionnaires. With the help of SSE-CMM, organizations can improve the maturity of their IS security through incremental improvements (Ferraiolo and Sachs, 1996). SSE-CMM consists of two parts, a model (consisting of five maturity levels, as mentioned) and an appraisal method. The latter is a four-phased (planning, preparation, on-site, reporting) method with which to conduct the evaluation of an organization's maturity on the basis of the SSE-CMM model.
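To make the questionnaire-driven part of such an appraisal concrete, the following minimal Python sketch shows how ``yes/no/don't know'' answers grouped by process area might be aggregated, and how unresolved answers could be flagged for exploratory follow-up questions. The process areas, questions and aggregation rule here are illustrative assumptions only; they are not the scoring procedure defined by SSE-CMM.

```python
from collections import defaultdict

# Illustrative answers: (process area, question, answer), where answer is "yes", "no" or "dont_know".
# The process areas and questions are hypothetical examples, not the SSE-CMM catalogue.
answers = [
    ("PA03: Assess security risk", "Are risks ordered by priority?", "yes"),
    ("PA03: Assess security risk", "Is risk exposure re-evaluated periodically?", "dont_know"),
    ("PA12: Ensure quality", "Are quality measurements defined?", "no"),
]

def summarize(answers):
    """Aggregate answers per process area and flag areas needing exploratory follow-up."""
    counts = defaultdict(lambda: {"yes": 0, "no": 0, "dont_know": 0})
    for area, _question, answer in answers:
        counts[area][answer] += 1
    report = {}
    for area, c in counts.items():
        total = sum(c.values())
        report[area] = {
            "evidenced_share": c["yes"] / total,    # share of practices answered "yes"
            "needs_follow_up": c["dont_know"] > 0,  # unresolved answers trigger free-form questions
        }
    return report

if __name__ == "__main__":
    for area, result in summarize(answers).items():
        print(area, result)
```

In a real appraisal the evidence from such questionnaires would, as noted later in this paper, be weighed against other sources before any capability level is assigned.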
Stacey's information security program maturity grid
Stacey's (1996) information security program maturity grid also stems from the CMM and particularly from Crosby's (1979) approach labelled ``the quality management maturity grid''. As in the case of SSE-CMM, Stacey (1996) proposes five stages in order of increasing maturity:
1 uncertainty (a total lack of understanding of information security – security is a hindrance to productivity);
2 awakening (realization of the value of security – but inability to provide resources and money for security);
3 enlightenment (security is a must – as are resources and money for security – organizations also need to prevent violations, instead of merely recovering from incidents);
4 wisdom (security development reflects organizations' environmental factors and needs – all users are empowered in terms of information security); and
5 benevolence (continuous security process improvement through research and practice).

Table I The alternative maturity approaches

Name | Key ideas | Sources of influence | Key advocates/references
SSE-CMM | Five maturity levels | CMM | SSE-CMM (1998a, b); Ferraiolo and Sachs (1996); Hefner (1997); Hopkinson (2001); annual ISSEA conference
Information security program maturity grid | Five maturity levels | CMM, the quality management maturity grid | Stacey (1996)
Software security metrics | 11 high-level security criteria and five milestones | | Murine and Carpenter (1984)
Also, like SSE-CMM, Stacey’s information security program maturity grid incorporates several more detailed prescriptions associated with each maturity level.
Murine-Carpenter maturity criterion
The incentive behind the maturity criterion of Murine and Carpenter (1984) lies in building quantifiable metrics for the information security maturity of systems and software (Murine and Carpenter, 1984, p. 208). They feel that software quality metrics do not address the security aspect adequately, resulting in the need to elaborate software security metrics. They list 11 high-level security criteria (Murine and Carpenter, 1984, p. 212) and five milestones (Murine and Carpenter, 1984, p. 213), i.e. phases in software development where the maturity of security in software development should be analyzed in the light of their maturity criteria.
A framework for analysis
Maturity endeavours started in the field of software engineering. The US Department of Defense (DoD) misjudged the ability of several of its contractors to develop mature SW, with the result that the DoD deemed it imperative to mobilize a maturity standard (Bollinger and McGowan, 1991; O'Connell and Saidian, 2000, p. 28). Accordingly, several SW maturity criteria have been presented (Kuvaja et al., 1994). Yet, several criticisms have been levelled against the SW maturity criteria. These problems with the existing software maturity models form the analytical framework of this study, i.e. six lessons from which information security maturity criteria can learn. These perceived problems are summarized in Table II and will now be discussed in more detail.
Do the criteria entail conventionalism or an operational focus?
Rifkin (2001) recognized that SW maturity criteria focus on operational issues alone. In fact, the SW maturity criteria do not support innovation at all, since they do not encourage organizations to innovate, but instead insist on the use of existing and well-known methods and practices. This is known as conventionalism. Rifkin (2001) therefore reported that organizations regarding innovation as their main competitive strategy gain nothing from the adoption of SW maturity criteria. Furthermore, organizations using state-of-the-art security solutions, perhaps even participating in the creation of new security contributions (e.g. the highest level of the information security program maturity grid by Stacey), may not rank high in maturity estimations, as the new solutions are not recognized by the maturity criteria (since the criteria are based on old information). The point of the operational focus lens is to ponder whether the information security maturity criteria uphold conventionalism (and have an operational focus), or whether they support reforms and innovations.
Are the criteria naturalistic-mechanistic?
The naturalistic-mechanistic view refers to the idea that phenomena can be quantified and controlled. Positivism, which claims that the methods of the natural sciences should form the basis of all sciences, is the most well-known example of this view (see Hirschheim, 1985; Ray, 2000). So-called behaviorists in the field of the behavioral sciences, under the direction of B.F. Skinner, advocated a similar view, although with inferior results. When it comes down to software development, Humphrey (1988) offers several hints giving the impression that he is a proponent of the naturalistic-mechanistic view. To provide examples, Humphrey sees that the software process should be ``predictable'' (Humphrey, 1988, p. 73),
Table II The recognized problems of existing SW maturity approaches

Problems | Problem description
Operational focus | Conventionalism vs emphasis on operational issues; does not support innovation
Naturalistic-mechanistic | Naturalistic-mechanistic (positivistic) world-view
Stable, non-emergent, organization structures and functions | Do the maturity standards support IS/SW development in emergent organizations?
Double standard | Distort the truth
Spot focus | The focus of inspection is on predefined spots – not a holistic posture of overall maturity
The degree of ambiguity | Reference-only, subjective, partially objective and objective
predefined, repeatable (Humphrey, 1988, p. 74) and stable (Humphrey, 1988, p. 75). The aforementioned objectives, namely predefined, repeatable and predictable, accord well with this scientific posture, as it leans on the paradigm of positivism. It is, however, erroneous to contend that such a view of natural science is valid in all areas of science (i.e. positivism). The fallacy lies in the fact that, while the naturalistic-mechanistic view may be valid in the arena of the natural sciences, this is not the case in the social or humanistic sciences, including IS[2]. At least currently, human behavior cannot be fully deduced from specific causal reactions (Abrahamsson, 2001). If someone were able to do this, there would most likely be no need for information security, as we would be able to manipulate or induce the necessary causal reactions, with the result that people would not commit security violations. Advocates of software maturity models may try to avoid this objection by saying that software can be developed within the research paradigm of natural sciences/positivism. The next quotation illustrates this view. Humphrey (1988, p. 74) argues that: While there are important differences, these concepts [maturity, statistical process control] are just as applicable to software as they are to automobiles, cameras, wristwatches, and steel. A software-development process that is under statistical control will produce the desired results within anticipated limits of cost, schedule and quality.
Such a viewpoint entails problems, however. Voas (1999, p. 120) put forward a rebuttal to Humphrey's claim, pointing out that software development is an inventive process; it is not a manufacturing process (as assumed in the citation from Humphrey). Similarly, Pfleeger (1999, p. 33) has pointed out that software maturity criteria have relied upon the natural science conception of science (see Harré, 2000): We [software engineers] seek relationships to help us understand what makes good software. We then apply what we learn so that we get good software more often. Our search is based in large part on the notion of cause and effect.
Pfleeger further remarks that this idea of ``cause and effect'' is fallacious because the processes are not natural, but human-made, i.e. social processes. Nevertheless, even if one still insists that software can be developed on the basis of the natural science paradigm, the problem with respect to security maturity criteria remains that these models – at least SSE-CMM and the information security program maturity grid – are applied to securing systems, not just to secure software development. Hence, even if the naturalistic-mechanistic view were adequate for software development, it is definitely not an adequate framework for approaches aimed at securing organizations' ISs.
Do the maturity standards support IS/SW development in emergent organizations?
SW maturity standards seem to imply that IS/software development takes place in a stable environment, for two reasons. First, current SW maturity models rely on strict waterfall models (Boehm, 2000). Second, they stem from the analogy, cited above from Humphrey, between software development and manufacturing, which is argued here to be a mistake. This analogy is flawed since traditional manufacturing is largely a stable, fine-tuning replication process, whereas software development is more a creative design process (Bollinger and McGowan, 1991, p. 36). It has been argued that the same goes for IS development. For example, increasing numbers of organizations are reported to be emergent (i.e. turbulent), as opposed to stable, in terms of their business environment, and hence they require appropriate means for developing SW/IS, as the current IS/SW development methods are suited to stable organizations (Baskerville et al., 2001; Baskerville and Pries-Heje, 2001; Truex et al., 1999, 2000). The basic claim of these authors is that any modern IS/SW development method should support the requirements posed by emergent organizations. In other words, a successful method should be able to adapt rapidly to ever-changing requirements owing to a fast-paced business environment. This issue is also relevant for information security maturity criteria. Given that the business environment is a turbulent one requiring that IS/software be developed rapidly, there may not be time to wait for bureaucratic and long-term security processes to take place.
Can maturity criteria tackle the problem of the double standard?
Companies may be under pressure to perform well in terms of maturity, as a good maturity rating results in good publicity and an increase in business competence (O'Connell and Saidian, 2000, pp. 32-3). The problem of the double standard refers to a situation where an organization manipulates its results in order to look better in a maturity evaluation. O'Connell and Saidian (2000, pp. 33-4) describe several such tricks that an organization may play. Moreover, Bollinger and McGowan reported how
maturity evaluators schooled by the DoD are trained to ``distinguish genuine answers from attempts at obfuscation or even outright falsification'' (Bollinger and McGowan, 1991, p. 28). They also provide a few countermeasures for this problem, including conducting online empirical evaluations (not just paper-based ones), requiring two sources of confirmation and choosing a representative evaluation team. However, it is clear that these proposed cures do not remove these problems. The problem of the double standard is at least as relevant an issue in the field of information security as in the field of software engineering. One key objective of having the concept of maturity levels is to sketch ``objective and universal'' criteria, which are able to indicate the maturity of all kinds of organizations. In developing such criteria, the major objective is not to show the maturity level to the organization adopting the maturity criteria itself (intra-organizational self-assessment) – although this is also a secondary objective of maturity standards – in which case the problem of the double standard would not be so crucial. As with CMM, one crucial aim of setting up information security maturity criteria is to assist other organizations, third parties, business partners, etc. to ensure that the organizations which they deal with have a certain level of information security. Recognizing this, it is justified to presume that the increasing recognition and use of a widely accepted maturity criterion would increase the incentives of an organization to score high in terms of these IS security maturity criteria (see O'Connell and Saidian, 2000).
Spot focus
Bollinger and McGowan (1991, p. 33) levelled the criticism that the focus of maturity inspection is on prefixed spots, the result being that the criteria do not pay any attention to a holistic overall maturity posture. To illustrate this problem, Bollinger and McGowan submit the case where one person painting, say, a car does the job very well in certain spots but leaves other places without any paint (case A), yet scores as high on the maturity scale as another person who paints the whole car consistently well (case B); hence the label spot focus. Clearly, the latter option (B) is more mature in terms of painting, but as the SW maturity criteria only examine certain spots, they lack an overall maturity estimation, and both A and B would scale equally high on the maturity scale, given that a measurement idea identical to that of the software maturity criteria is used (Bollinger and McGowan, 1991).
Clearly, the problem of spot focus is relevant to the assessment of information security maturity criteria. If such faults were a characteristic of security maturity criteria, organizations securing ISs/software might concentrate heavily only on certain places in order to increase their maturity level, whilst bestowing less or no attention on other aspects not covered by the maturity model (as in case A in the aforementioned example). The aim of this viewpoint is to explore whether the information security maturity criteria succumb to the spot focus fallacy.
What degree of ambiguity is present in maturity criteria?
Four categories of degrees of ambiguity are proposed (Pfleeger et al., 1994): reference-only, subjective, partially objective and objective. Reference-only means a situation where the standard prescribes something without providing any concrete means for ensuring compliance. For example, a prescription such as ``risk analysis should be carried out'', without any indication of what the results of a good risk analysis would be, is an example of a reference-only practice. The fallacy is the lack of information concerning, for example, what good risk analysis practice should include and what the scope of such risk analysis is. The subjective situation is the case where the goodness/scope of a practice is left to be determined on the basis of expert judgement. For example, a prescription such as ``a risk analysis method needs to be applied with respect to relevant assets'' is a case in point. Partially objective refers to a situation which is more concrete than that of subjective practice, but still entails some ambiguous information (e.g. ``risk analysis needs to be accomplished with respect to relevant assets included''). Objective denotes prescriptions which give explicit guidelines with minimum inarticulateness. The advantage of objective practice is the exactness it has compared to the other alternatives. The weakness is that such general standards, as the objective category exemplifies, are difficult to build – and not wise, at least in all cases (as organizations and their business and security requirements/needs vary). The reference-only alternative is ranked as the worst alternative owing to the several important questions it leaves open.
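Purely as an illustration, the four degrees can be treated as an ordered scale against which individual prescriptions are tagged; the short Python sketch below encodes such a scale using the risk-analysis examples quoted above. The enum, the tagging and the threshold check are hypothetical conveniences for illustration, not part of the framework of Pfleeger et al. (1994).

```python
from enum import IntEnum

class Ambiguity(IntEnum):
    """Degrees of ambiguity, ordered from least to most concrete (after Pfleeger et al., 1994)."""
    REFERENCE_ONLY = 1       # prescribes a practice with no means of judging compliance
    SUBJECTIVE = 2           # goodness/scope left to expert judgement
    PARTIALLY_OBJECTIVE = 3  # more concrete, but still partly ambiguous
    OBJECTIVE = 4            # explicit guidelines with minimal inarticulateness

# Hypothetical tagging of the risk-analysis prescriptions quoted in the text.
prescriptions = [
    ("Risk analysis should be carried out", Ambiguity.REFERENCE_ONLY),
    ("A risk analysis method needs to be applied with respect to relevant assets", Ambiguity.SUBJECTIVE),
    ("Risk analysis needs to be accomplished with respect to relevant assets included", Ambiguity.PARTIALLY_OBJECTIVE),
]

# A reviewer of a maturity criterion might, for instance, flag every prescription
# falling below a chosen acceptability threshold.
threshold = Ambiguity.SUBJECTIVE
flagged = [text for text, degree in prescriptions if degree < threshold]
print(flagged)  # ['Risk analysis should be carried out']
```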
Analysis of the information security management-oriented maturity approaches
The assumptions underlying the various maturity criteria will be analyzed next.
Operational focus
SSE-CMM
SSE-CMM does not pay close attention to this problem. At any rate, it does not trumpet the idea that evaluators should seek innovations. However, it does leave evaluators some freedom to tailor some parts of the process areas using their professional judgement. Indeed, SSE-CMM says that one ``may tailor some aspects . . . to satisfy particular needs'' (SSE-CMM, 1998b, p. 71). Unfortunately, it is not clear what this means exactly: what are ``some aspects''? Can one modify everything?
Information security program maturity grid
At the earlier levels, the focus of this criterion is operational, as the following two examples from the second maturity stage illustrate. First: ``End-users view security restrictions as an unnecessary hindrance'' (Stacey, 1996). Second: ``The end-users' productivity is affected now both by the security incidents and by the safeguards set in place to protect the system'' (Stacey, 1996). These examples indicate that organizations at the second stage do not stimulate security innovation; on the contrary, their security practice seems to hinder normal practices. However, the fifth stage seems to pave the way for innovations. Stacey (1996) requires that, in order to qualify for the fifth stage, security people in organizations need to participate in research projects, and ``its security professionals will be likely to achieve notoriety through presentation at . . . conferences, . . . journals . . .''. We see this citation as clearly implying that the fifth (highest) stage requires security innovations to be created through research/development processes, selected results of which are reported to the scientific community through conferences and journals.
Murine-Carpenter maturity criterion
On the one hand, this criterion does not support the idea of innovations, for two reasons. First, it does not give direct hints in favour of innovations (as Stacey's criterion does). Second, the milestones are based on an existing, typical software development life-cycle including the stages ``system security requirements'', ``software system security requirements'', ``functional security architecture'', ``modular security gating'' and ``security testing'' (Murine and Carpenter, 1984, p. 213). On the other hand, the Murine-Carpenter maturity criterion does not rule out the possibility of innovation. In fact, the use of innovations may become reality, as this criterion gives developers a lot of freedom to choose particular techniques/methods within each milestone (Murine and Carpenter, 1984, p. 213). However, the fact that the milestones are a must for all organizations hinders the use of fundamental innovations that are not in synch with the milestone concept.
Naturalistic-mechanistic assumptions
SSE-CMM seems to contain ideas similar to those of behaviourism. A rule of the form ``if A, then B'' is an example of such a naturalistic-mechanistic causal law. SSE-CMM exhibits this idea, as the following citation illustrates: ``The SSE-CMM was developed with the anticipation that applying the concepts of statistical process control to security engineering will promote the development of secure systems and trusted products within anticipated limits of cost, schedule, and quality'' (SSE-CMM, 1998a). As seen in this citation, SSE-CMM takes Humphrey's analogy between manufacturing and SW development, along with the need for statistical control, seriously (see Ferraiolo and Sachs, 1996). The next citation supports this interpretation: ``Process capability is defined as the quantifiable range of expected results that can be achieved by following a process'' (SSE-CMM, 1998a, p. 22). Also the fact that the fifth (highest) maturity level is aimed at ``establishing quantitative goals . . .'' (SSE-CMM, 1998b, D.4) indicates the role of the naturalistic-mechanistic view as it bears on SSE-CMM.
Information security program maturity grid
We do not find any naturalistic-mechanistic assumptions underlying the information security program maturity grid. This criterion is not founded on observing industrial practices in an attempt to recognize cause-effect relations, nor does it state explicitly that the behaviour of users can be manipulated in a cause-effect manner (such as ``whenever A, then B'', where, for example, A denotes ``awareness program accomplished'' and B ``users are motivated''). It does, however, use an expression which one might regard as imparting the flavour of a naturalistic-mechanistic assumption: ``because of the thorough security awareness training program, end-users are more vigilant and tend to initiate more incident reports'' (Stacey, 1996). Even though Stacey in this citation provides an ideal and simplistic picture of the hoped-for results of awareness programs, we do not find his thinking couched in a naturalistic-mechanistic view.
Murine-Carpenter maturity criterion
SSM (software security metrics) provides many hints of a naturalistic-mechanistic view. On the one hand, it repeatedly and clearly states the aim of laying down a quantifiable maturity criterion (Murine and Carpenter, 1984, pp. 207-8). On the other hand, it does not directly assert the existence of certain mechanistic-causal relationships. However, the fact that SSM concentrates wholly on technical aspects suggests that its worldview is very much of a naturalistic-mechanistic kind. The fact that the human component is not recognized or considered at all by SSM further supports our conclusion.
Assumption of stable, non-emergent organization structures and functions
None of these maturity standards fully recognizes the issues which secure SW/IS development in emergent organizations poses. Some of the standards do better than others, and we shall analyze each criterion in detail next.
SSE-CMM
SSE-CMM adopts a highly stable IS security development approach. In fact, the way IS security development is contrived, pursuant to SSE-CMM, makes it perhaps the most rigid of the alternative maturity approaches. The whole maturity approach itself is close to 1,000 pages long, and the evaluation process is formal (and long), proceeding through all points and stages starting from the first stage. Moreover, the formulation and functioning of the appraisal process is formal and includes bureaucratic elements. For example, several appraisal/appraised organizational roles are recognized to be fulfilled in the evaluation process, such as appraisal facilitators, evidence custodian, voting members and observers (roles in the appraisal organization), and site coordinator, executives, executive spokesman, project leader and practitioner (roles in the appraised organization) (SSE-CMM, 1998b). Such bureaucracy in evaluating the security level of an organization may be too much, particularly for small emergent organizations. Furthermore, the appraisal process includes four phases with 18 sub-phases in total: planning (three sub-phases), preparation (four sub-phases), on-site (seven sub-phases) and reporting (four sub-phases). Things could be much worse from the point of view of emergent organizations, however. Fortunately, SSE-CMM gives the evaluators/target of evaluation freedom to choose the particular goals of the evaluations. Moreover, the assessment tool includes the concept of ``tailorable parameters'', allowing the evaluators, for example, to decide autonomously on the scope of the evaluation with respect to these parameters.
Information security program maturity grid
Organizations ranked at the lower levels of this maturity criterion are those not yet ready for emergent IS development. A few examples follow: ``Security is viewed as a commodity that can be bought on the open market''. This example from the second maturity level indicates an assumption that general security solutions can be unearthed from standards, for example. Here is another example from the second level: ``The officer will identify the significant threats and develop policies and procedures in response to the most frequently occurring crises''. This implies that a stable environment is assumed for organizations belonging to the second level – they are waiting for the ``most frequently occurring crises''. Yet, Stacey (1996) also recognizes the potential complications if organizations follow the pattern of stable organizations: ``Losses may be high, especially when they do not follow the historical trend''. In the third level, we see an increasing recognition of the idea of emergent organizations, as the following passage from the third level shows: ``security is no longer viewed solely as a commodity that can be purchased. Rather, information security must be designed consistent with an enterprise's needs – it must be designed from within.'' This is a first step towards the recognition that in an emergent organization security needs to originate from the organization's (unique) mission, not from outside in the form of a generic security package. To give a second example: ``Previously prepared risk analysis becomes stale and demonstrates loose applicability to the evolving environment''. The recognition in the latter citation of the inadequacy of previous risk analyses in an evolving environment also demonstrates a shift in thinking towards the idea of IS/SW development in emergent organizations. The fourth level continues by giving support to the requirements posed by an evolving business environment. Two examples from the fourth maturity level follow. First, in order for an organization to be at the fourth level, it should: ``closely reflect the enterprise's environment and respond to the enterprise's evolving needs''. As a second example: ``Threats are continually reevaluated based on the changing threat population and on security incidents''. Both these examples clearly demonstrate a readiness on the part of fourth-stage organizations for an era of emergent organizations. Finally, we see that the fifth stage supports the idea of emergent organizations as well. Consider the following citation: ``continual information security process improvements through research and participation and the sharing of knowledge in public and professional forums''. Such continual improvement exemplifies ongoing analysis, a natural situation for emergent organizations (see Truex et al., 1999). On the negative side, Stacey's approach does not pay attention to the requirements arising from the fast pace of development in emergent organizations.
Murine-Carpenter maturity criterion
Regarding the question of stable versus emergent organizations, the Murine-Carpenter maturity criterion lies somewhere between these two views. On the one hand, it suggests well-known aspects such as the five milestones. On the other hand, its prescriptions, such as the milestones, lie at a very high level of abstraction and leave the choice of particular techniques/methods to the practitioners or users of the maturity criterion (Murine and Carpenter, 1984, p. 213). Moreover, going through this criterion does not necessarily require a huge amount of manpower/working hours, as opposed to SSE-CMM, and generally it is not very rigid (again as opposed to SSE-CMM). Our conclusion is that the Murine-Carpenter maturity criterion may be of use to organizations which simply wish to improve their software security in an emergent environment.
Double standard
As we understand it, no maturity standard explicitly addresses this issue. The information security program maturity grid and the Murine-Carpenter maturity criterion do not come close to this issue at all. SSE-CMM comes closest to it, and is therefore the only one considered (see below).
SSE-CMM
The closest recognition of the double standard is the following statement: ``Evidence must be weighed by the appraisal team in that the source of the information is taken into consideration when the evidence is considered. For example, evidence from questionnaires or interviews may be considered less `valid' in that they are more prone to misuse or misunderstanding. Thus, the team might look for a certain amount of corroborating evidence from other sources, such as documents'' (SSE-CMM, 1998b, p. 80). Two things emerge from this citation. First of all, SSE-CMM does not provide any explicit guidance on recognizing and addressing the problem of the double standard, beyond evaluators' intuitions. Second, we understand this citation to imply that the evaluators are advised to pay more attention to documentation than to interviews (consider: ``may be considered less valid''). However, the existence of documentation per se cannot be regarded as settling the matter, since documentation is also easy to fabricate and tamper with in an era of computers. It is also easy to recycle documents without doing the actual things described in the documents.
Spot focus
SSE-CMM
In the case of SSE-CMM, on the one hand, the process areas (and the particular detailed questions within each process area) are the generic and predefined spots: ``while the PAs are generic, the sponsor may tailor some aspects of PAs to satisfy particular needs'' (SSE-CMM, 1998b, p. 71). In that light, SSE-CMM succumbs to the fallacy of the spot focus. On the other hand, the organization can select the process areas to be included in the study. Moreover, with respect to each process area there are a few ``tailorable parameters'' which organizations may modify. However, these two points do not remove the spot focus fallacy; the process areas and tailorable parameters are prefixed, with the result that one cannot make one's own maturity spots. SSE-CMM also includes a hint which might be interpreted as allowing refinement: ``although the SSAM [the assessment method of SSE-CMM] is a defined method, . . . organizations may need to further refine particular aspects of the method to meet individual sponsor goals and expectations. All refinements must be documented and agreed upon by the sponsors and the appraisal organization.'' However, it is not clear whether this is constrained by the tailorable parameters, or whether one can modify anything as long as the modifications are agreed upon.
Information security program maturity grid
In the fifth stage, the objectives are defined loosely, with the result that Stacey's information security program maturity grid eschews the fallacy of the spot focus.
Murine-Carpenter maturity criterion
The Murine-Carpenter maturity criterion entails the spot focus fallacy. The five milestones and 11 security criteria are universal, i.e. they should be embedded in every mature secure software development endeavor. However, the fact that the milestones are mandatory only at a very abstract level, whilst the concrete methods/techniques to be used at lower levels are non-mandatory, may lessen the problem of the spot focus.
Degrees of ambiguity
The degrees of ambiguity with respect to SSE-CMM range from reference-only to objective. To some extent, degrees of ambiguity such as the subjective will suffice, and may even be necessary. Reference-only ambiguity is blameworthy, however, and SSE-CMM resorts frequently to the use of reference-only practice. Consider, for example, the following point in Base Practice 1: ``manage security awareness, training, and education programs for all users and administrators'' (SSE-CMM, 1998b, D.2), which is an example of reference-only practice. SSE-CMM does not really indicate (not even in the SSE-CMM base practice explanation part) what such concepts as ``awareness'' or ``management'' mean. Does the term awareness refer to ``being aware of something [security]'', or to a state of affairs where employees are fully committed to security policy? SSE-CMM does not say. As a result of this reference-only practice, SSE-CMM does not provide any guidance on how one can know that employees are aware of – or committed to – security guidelines.
Information security program maturity grid
The degree of ambiguity of Stacey's (1996) information security program maturity approach entails both reference-only and subjective views. Imperatives such as organizing a ``thorough information security training program for end-users'' are an example of the reference-only view: they do not indicate what a good information security training program includes. We also found a subjective pattern: ``users are empowered and encouraged to evaluate and develop their own risk-based management strategies and to customize the enterprise's existing information security program to respond to their own needs''. This example provides clear guidance with respect to customization, given that end-users know their own needs, but it does not give any indication regarding the process of ``risk-based management strategies'', such as what a good process for accomplishing such a risk-based management strategy might contain.
Murine-Carpenter maturity criterion
The Murine-Carpenter maturity criterion incorporates all degrees of ambiguity. On the negative side, the criterion utilizes a great deal of reference-only practice, thereby offering no practical help to developers. Sometimes the objective practice may be fallacious as well, as indicated in the second section. For example, consider the fifth, security-testing milestone: ``System is tested for an illegal entry at all levels'' (Murine and Carpenter, 1984, p. 213). This is an objective practice, at least insofar as the term ``illegal'' is concerned. However, security people may
not want to prevent only ``illegal’’ entry into a system, but rather unauthorized or unwanted entry.
Discussion, limitations and implications of the findings
In spite of the fact that the information security literature abounds in discussions of standards, the existing maturity standards/criteria for securing IS/software have largely been ignored. This paper analysed the existing maturity approaches for securing IS/software. In addition to supporting self-assessment, the maturity approaches are aimed at demonstrating the security level/maturity of an organization to other organizations and to the public/third parties/customers. In fact, it is the latter factor which separates general information security management standards (mainly intended for an organization's own self-assessment) from information security management-oriented maturity endeavours (self-assessment plus assessment directed at third parties and the public).
Limitations of the study
The research approach of this study was conceptual analysis (see Järvinen, 1997, 2000) combined with a hermeneutic research method favored by historians, philosophers and theologians to interpret texts. Hermeneutics is of Greek origin and can be defined as ``the art of finding something from the text that is not there'' (Mautner, 1996, p. 188). As the interpretation of texts constitutes the essence of this study, hermeneutic analysis is a natural choice of approach. However, this means that the results are based on our interpretations (Gadamer, 1989). This is the source of the main limitations of this study. To deal with this issue, we have adduced citations where possible to indicate the origins and validity of our interpretations (see Iivari et al., 1998). Moreover, the results of this study should not be interpreted to mean that if an information security maturity criterion fails to tackle all six problems, then the criterion is consequently invalid. Instead, the application of a criterion needs to be pondered on a case-by-case basis, bearing these weaknesses in mind. Finally, the problems presented in this study are not an exhaustive list of all the possible problems residing in information security maturity criteria.
Discussion of the key results
The results are summarized in Table III.
Table III The recognized problems of existing SW maturity approaches

Approaches | Operational focus | Naturalistic-mechanistic | Stable | Double standard | Spot focus
SSE-CMM | - | - | - | - | -
Information security program maturity grid | + | + | + | - | +
Murine-Carpenter maturity criterion | -? | - | +? | - | -?

Note: The sign + indicates that the approach can cope with the issue, whereas - denotes that the approach in question succumbs to the problem. The sign ? indicates an open question
Operational focus (conventionalism)
Like CMM (see Rifkin, 2001), the maturity models – SSE-CMM and the Murine-Carpenter maturity criterion – are anti-innovative and uphold conventionalism. They tend to stress the use of existing and workable practices for securing organizations' ISs. As a result, these criteria come up against three problems that should be avoided. First, they encourage neither creative and innovative thinking nor a change in paradigm/research program, but rather uphold the use of existing practices (conventionalism). For this reason, we see that they are not good candidates for university education, which is perhaps the most important forum for achieving reform through education. Second, organizations which adopt innovation as their main competitive strategy gain nothing from the adoption of these maturity criteria. Third, organizations using state-of-the-art methods/techniques may perform badly in maturity estimations, since the old criteria do not recognize these new methods/techniques. The information security program maturity grid, in turn, can cope well with innovations. In fact, at the highest level, it requires the organization's security people to participate in research projects, thereby requiring organizations to create innovations. The Murine-Carpenter maturity criterion does not support the idea of innovations, as the milestones are based on an existing, well-known software development life-cycle and, more importantly, they are universal. However, the fact that this criterion gives developers the freedom to choose the particular techniques/methods within each milestone (Murine and Carpenter, 1984, p. 213), thereby paving the way to innovations, is a positive move towards the possibility of achieving security innovations.
Naturalistic-mechanistic
SSE-CMM seems to succumb to the fallacy of the naturalistic-mechanistic view. It advocates a view according to which security phenomena should be quantified and controlled. In fact, the whole aim of this maturity criterion is to identify industrial practices, in an effort to recognize cause-effect relations, and turn these into the form of a maturity standard. Whereas such an approach might be adequate for pure computer systems having no social dimensions, it is inadequate for addressing information systems security, where there is a human or social component. It is also doubtful that SSE-CMM is able to live up to its own criteria. SSE-CMM would like to see quantifiable evidence when evaluating the maturity of IS security. However, the SSE-CMM criterion itself is not really based on quantifiable research. In fact, no evidence (references, interviews, questionnaires) has been shown for the existence of any research process behind SSE-CMM. This leaves us with the conclusion that the maturity of the process for developing SSE-CMM is rather low. The information security program maturity grid does not make naturalistic-mechanistic assumptions. The Murine-Carpenter maturity criterion, like SSE-CMM, is strongly coloured by naturalistic-mechanistic assumptions, as it aims to sketch a quantifiable maturity criterion. Even though it does not explicitly argue in favour of causal relationships, its underlying worldview is naturalistic-mechanistic since, for example, users are not recognized by this criterion.
Stable versus emergent
SSE-CMM assumes a very stable environment. The overall process for securing IS/software, or for evaluating the security maturity level of IS/SW, prescribed by SSE-CMM is formal and unbending. The information security program maturity grid by Stacey (1996) is able to incorporate the requirements laid down for secure IS/software development in emergent organizations with increasing success in the higher stages (3-5). The Murine-Carpenter maturity criterion does not provide a clear answer to this issue. It prescribes universal milestones which, even though they are in conflict with the idea of IS/SW development in emergent organizations, are applicable only at the very highest level of abstraction. At the lower levels of abstraction, developers are able to choose freely the techniques they prefer. Hence, it may suit organizations desirous only of improving their software security, without any social concerns, in an emergent environment.
Double standard
We found no maturity standard that explicitly addresses this issue. SSE-CMM is the only one which can be interpreted as vaguely touching on this issue, but it does not provide any concrete guidance on the matter. The focus on form, instead of content (actual practice), may in fact increase the problem of the double standard. If the fulfilment of information security maturity criteria is demonstrated through documents only, and not in actual practice, it is easy to maintain a double standard.
Spot focus
In SSE-CMM, the process areas are the generic and predefined spots. Hence, SSE-CMM succumbs to the fallacy of the spot focus. However, an organization can select the process areas to be included in the evaluation, and the model allows for modifiable parameters called ``tailorable parameters''. Alas, these two features do not remove the spot focus fallacy; the process areas and tailorable parameters are prefixed, with the result that one cannot create one's own spots. Stacey's information security program maturity grid avoids the fallacy of the spot focus, particularly in the highest stage. It requires organizations' security development to be in synch with organizational business requirements, and its prescriptions are expressed at a very high level of abstraction. The Murine-Carpenter maturity criterion entails the spot focus fallacy because the five milestones and 11 security criteria are universal, even though the methods/techniques to be used at the lower levels are discretionary. The problem of the spot focus entails a paradox. Is it possible to avoid predefined and generic prescriptions (spot focus) in the case of maturity criteria? The key idea behind having an information security maturity criterion is to sketch a universal, predefined and objective information security standard. But then ``universal'', ``predefined'' and ``objective'' imply that we end up with a criterion having certain pre-established prescriptions. These predefined prescriptions are indeed predefined spots in our systems – hence, what we have with predefined prescriptions is the spot focus fallacy. To make it worse, given
that such maturity criteria are to be of practical use, they need to include detailed prescriptions (or spots). If the security prescriptions are abstract, they are of less use to practitioners, since abstract guidelines fail to give precise guidance; yet loose, abstract guidelines do give a lot of freedom in securing the system. So it seems that the only way to avoid the spot focus fallacy is to stop short of a predefined, universal and objective criterion; yet being predefined, universal and objective is, in fact, the very reason why we want an information security maturity criterion in the first place.
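To make the structural point concrete, the following minimal sketch (written in Python purely for illustration; the process-area names and the tailorable_depth parameter are invented here and are not taken from SSE-CMM or any other published criterion) models a criterion whose catalogue of process areas and tailorable parameters is fixed at publication time. An organization may select among the predefined spots and tune their parameters, but it cannot introduce a spot of its own, which is exactly the spot focus problem discussed above.

from dataclasses import dataclass

@dataclass
class ProcessArea:
    name: str                   # the predefined "spot"
    tailorable_depth: int = 1   # a tailorable parameter: how deeply the spot is assessed

@dataclass
class MaturityCriterion:
    # The catalogue of spots is fixed when the criterion is published.
    catalogue: tuple = (
        ProcessArea("administer controls"),
        ProcessArea("assess risk"),
        ProcessArea("verify security"),
    )

    def evaluate(self, selected: list) -> list:
        # Organizations may pick among the predefined spots and tune their
        # parameters, but they cannot add spots of their own.
        known = {pa.name: pa for pa in self.catalogue}
        unknown = [name for name in selected if name not in known]
        if unknown:
            raise ValueError(f"not in the predefined catalogue: {unknown}")
        return [known[name] for name in selected]

criterion = MaturityCriterion()
print(criterion.evaluate(["assess risk"]))            # permitted: a predefined spot
# criterion.evaluate(["handle social engineering"])   # would fail: no such predefined spot exists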
The degree of ambiguity
Of the four degrees of ambiguity, the most lamentable one is the reference-only view. SSE-CMM embodies a great deal of reference-only practice. The degree of ambiguity of Stacey's (1996) information security program maturity approach entails both reference-only and subjective views. The Murine-Carpenter maturity criterion includes all degrees of ambiguity. The criterion makes much use of reference-only practice, thereby offering no practical help to developers. This criterion also utilizes objective practice in a detrimental manner. Pfleeger et al. reported with respect to software engineering standards that their ``effectiveness has not been rigorously and scientifically demonstrated. Rather, we have too often relied on anecdote, `gut feeling', the opinions of experts, or even flawed research, rather than careful, rigorous software engineering experimentation'' (Pfleeger et al., 1994, p. 71). Furthermore, they continue that ``even when scientific analysis and evaluation exist, our standards rarely reference them'' (Pfleeger et al., 1994, p. 72). It is our belief that this is also the reality in the realm of information security management maturity standards. At any rate, the findings of this study suggest that the information security management maturity standards have not learned from their cognate software engineering maturity standards. The main suggestion is that information security management-oriented maturity standards should be revised to address the issues considered here. In addition, numerous empirical studies are needed to examine the real-world relevance and validity of the individual prescriptions of the alternative maturity standards. At present, such studies are few and far between. Moreover, we would like to see any information security maturity criterion include a complete reference list of related work for each of its process areas, milestones (or whatever they are called), from which its prescriptions originate. Currently, practitioners have no evidence on
which to judge whether the prescriptions suggested by the information security management-oriented maturity criteria really make sense. With respect to the innovation versus operational focus, it is suggested that future information security maturity standards should allow for innovative and novel ways of securing IS/software. Future standards should avoid the naturalistic-mechanistic and spot focus fallacies and include means for tackling the problem of the double standard. Moreover, we would be glad if future maturity standards emulated the information security program maturity grid and allowed organizations more freedom in choosing the process areas included in maturity evaluations according to their own business requirements. It is suggested that evaluators, particularly in emergent organizations, take more liberties in modifying the evaluation process for their own purposes. One strategy for organizations would be to fabricate their own ``in-house'' evaluation criteria and practice, which would be less formal, more lightweight, and would pay particular attention to the most crucial aspects of the organization's IS security practice. Finally, with respect to degrees of ambiguity, forthcoming maturity criteria should, on the one hand, avoid reference-only practice and, on the other, use objective practice judiciously to avoid the problems discussed.
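As a purely hypothetical illustration of the ``in-house'' evaluation idea suggested above, the sketch below (in Python; all field names, checks and example entries are our own assumptions, not part of any published maturity standard) records each prescription together with the business requirement it serves, evidence of actual practice, and the sources it rests on, thereby touching the double standard and reference-only concerns at the same time.

from dataclasses import dataclass, field

@dataclass
class InHousePrescription:
    business_requirement: str        # why this matters to this particular organization
    prescription: str                # what is actually expected of the security practice
    evidence_of_practice: str = ""   # observed practice, to counter the double standard
    supporting_references: list = field(default_factory=list)  # sources the prescription rests on

checks = [
    InHousePrescription(
        business_requirement="customer data is handled by a small, fast-moving web team",
        prescription="every change touching customer data gets a lightweight security review",
        evidence_of_practice="review notes attached to the last ten deployed changes",
        supporting_references=["internal post-incident report, March 2002"],
    ),
    InHousePrescription(
        business_requirement="outsourced operations staff access production systems",
        prescription="access rights of outsourced staff are reviewed quarterly",
        # no evidence recorded yet: this prescription exists on paper only
    ),
]

# Flag prescriptions that are documented but carry no evidence of actual practice.
paper_only = [c.prescription for c in checks if not c.evidence_of_practice]
print(f"{len(paper_only)} of {len(checks)} prescriptions lack evidence of actual practice")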
Conclusions
Information security management standards and checklists have received a lot of attention. However, little has been done to study the existing information security management-oriented maturity models. To fill this gap, this study analyzed the alternative information security management-oriented maturity criteria from the point of view of a framework synthesized from the IS and software engineering literatures. Implications for research and practice were presented.
Notes
1 Computer security refers to a viewpoint where the concern only lies with entities within a computer system (see Baskerville, 1993; Dhillon, 1997).
2 By saying this we do not claim that quantitative studies are without merit in the field of IS or software. It is, however, relevant to try to generalize phenomena. The naturalistic-mechanistic fallacy refers to the view according to which all social phenomena (e.g. human behaviour) can easily be quantified.
References
Abrahamsson, P. (2001), ``Rethinking the concept of commitment in software process improvement'', Scandinavian Journal of IS, Vol. 13, pp. 69-98.
Abrams, M.D. and Podell, H.J. (1995), ``Evaluation issues'', in Abrams, M.D., Jajodia, S. and Podell, H.J. (Eds), Information Security - An Integrated Collection of Essays, IEEE Computer Society Press, Los Alamitos, CA.
Baskerville, R. (1988), Designing Information Systems Security, John Wiley Information Systems Series, New York, NY.
Baskerville, R. (1992), ``The developmental duality of information systems security'', Journal of Management Systems, Vol. 4 No. 1, pp. 1-12.
Baskerville, R. (1993), ``Information systems security design methods: implications for information systems development'', Computing Surveys, Vol. 25 No. 4, December, pp. 375-414.
Baskerville, R. and Pries-Heje, J. (2001), ``Racing the e-bomb: how the Internet is redefining information systems development methodology'', in Fitzgerald, B., Russo, N. and DeGross, J. (Eds), Realigning Research and Practice in IS Development: The Social and Organizational Perspective, Kluwer, New York, NY, pp. 49-68.
Baskerville, R. and Siponen, M.T. (2002), ``An information security meta-policy for emergent organizations'', Journal of Logistics Information Management, special issue on information security.
Baskerville, R., Levine, L., Pries-Heje, J., Ramesh, B. and Slaughter, S. (2001), ``How Internet software companies negotiate quality'', IEEE Computer, Vol. 34 No. 5, pp. 51-7.
Boehm, B. (2000), ``Unifying software engineering and systems engineering'', IEEE Computer, pp. 114-16.
Bollinger, T.B. and McGowan, C. (1991), ``A critical look at software capability evaluations'', IEEE Software, Vol. 8 No. 4, July, pp. 25-41.
Caplan, K. and Sanders, J.L. (1999), ``Building an international security standard'', IT Professional, Vol. 1 No. 2, March-April, pp. 29-34.
Chalmers, A.F. (1982), What Is This Thing Called Science?, 2nd ed., Open University Press, Milton Keynes.
Chokhani, S. (1992), ``Trusted products evaluation'', Communications of the ACM, Vol. 35 No. 7, pp. 64-76.
Crosby, P.B. (1979), Quality Is Free, McGraw-Hill, New York, NY.
Dhillon, G. (1997), Managing Information Systems Security, Macmillan Press, London.
Dhillon, G. and Backhouse, J. (2001), ``Current directions in IS security research: toward socio-technical perspectives'', Information Systems, Vol. 11 No. 2.
Eloff, M.M. and Solms, S.H. (2000a), ``Information security management: a hierarchical framework for various approaches'', Computers & Security, Vol. 19, pp. 243-56.
Eloff, M.M. and Solms, S.H. (2000b), ``Information security: process evaluation and product evaluation'', 16th Annual Working Conference on Information Security, Beijing.
Ferraiolo, K. and Sachs, J.E. (1996), ``Distinguishing security engineering process areas by maturity levels'', Proceedings of the 9th Annual Canadian Information Technology Security Symposium.
Fitzgerald, K.J. (1995), ``Information security baselines'', Information Management & Computer Security, Vol. 3 No. 2, pp. 8-12.
Hardy, G. (1995), ``Standards - the need for a common framework'', Computers & Security, Vol. 14 No. 5, pp. 426-7.
Harré, R. (2000), ``Laws of nature'', in Newton-Smith, W.H. (Ed.), A Companion to the Philosophy of Science, Blackwell, Oxford, pp. 213-24.
Hefner, R. (1997), ``A process standard for systems security engineering: development experiences and pilot results'', Third IEEE International 1997 Software Engineering Standards Symposium and Forum, Emerging International Standards (ISESS 97), IEEE Computer Society Press, Los Alamitos, CA, pp. 217-21.
Herbsleb, J., Zubrow, D., Goldenson, D., Hayes, W. and Paulk, M. (1997), ``Software quality and the capability model'', Communications of the ACM, Vol. 40 No. 6, pp. 30-40.
Hirschheim, R. (1985), ``Information systems epistemology: an historical perspective'', in Mumford, E. et al. (Eds), Research Methods in Information Systems, Elsevier Science, Barking.
Hopkinson, J.P. (2001), ``Security standards overview'', Proceedings of the Second Annual International Systems Security Engineering Conference.
Humphrey, W.S. (1988), ``Characterizing the software process: a maturity framework'', IEEE Software, Vol. 5 No. 2, pp. 73-9.
Iivari, J., Hirschheim, R. and Klein, H.K. (1998), ``A paradigmatic analysis contrasting information systems development approaches and methodologies'', Information Systems Research, Vol. 9 No. 2, pp. 164-93.
Iivari, J., Hirschheim, R. and Klein, H.K. (2001), ``A dynamic framework for classifying information systems development methodologies and approaches'', Journal of Management Information Systems, Vol. 17 No. 3, pp. 179-218.
Janczewski, L. (2000), ``Managing security functions using security standards'', in Janczewski, L. (Ed.), Internet and Intranet Security Management: Risks and Solutions, Idea Group Publishing, Hershey, PA, pp. 81-105.
Järvinen, P. (1997), ``The new classification of research approaches'', in Zemanek, H. (Ed.), The IFIP Pink Summary - 36 Years of IFIP, IFIP, Laxenburg, pp. 124-31.
Järvinen, P. (2000), ``Research questions guiding selection of an appropriate research method'', Proceedings of the 8th European Conference on Information Systems (ECIS 2000), Vienna.
Klein, H.K. and Myers, M.D. (1999), ``A set of principles for conducting and evaluating interpretive field studies in information systems'', MIS Quarterly, Vol. 23, pp. 67-94.
Klein, H.K. and Myers, M.D. (2001), ``A classification scheme for interpretive research in information systems'', in Trauth, E.M. (Ed.), Qualitative Research in IS: Issues and Trends, Idea Group Publishing, Hershey, PA, pp. 218-39.
Krzanik, L. and Similä, J. (1994), ``Tuning process capability to assure required security levels'', Proceedings of the International Invitation Workshop on Developmental Assurance.
Kuvaja, P., Similä, J., Krzanik, L., Bicego, A., Saukkonen, S. and Koch, G. (1994), Software Process Assessment & Improvement - The BOOTSTRAP Approach, Blackwell, Oxford.
Mautner, T. (1996), A Dictionary of Philosophy, Blackwell, Oxford.
Murine, G.E. and Carpenter, C.L. (1984), ``Measuring computer system security using software security metrics'', in Finch, J.H. and Dougall, E.G. (Eds), Computer Security: A Global Challenge, Elsevier Science Publisher, Barking.
O'Connell, E. and Saiedian, H. (2000), ``Can you trust software capability evaluations?'', Computer, Vol. 33 No. 2, pp. 28-35.
Overbeek, P.L. (1995), ``Common criteria for IT security evaluation - update report'', Proceedings of the IFIP TC11 11th International Conference on Information Security, IFIP/SEC'95.
Paulk, M.C., Curtis, B., Chrissis, M.B. and Weber, C.V. (1993), ``Capability maturity model'', version 1.1, IEEE Software, Vol. 10 No. 4, pp. 18-27.
Pfleeger, S.H. (1999), ``Albert Einstein and empirical software engineering'', IEEE Computer, Vol. 32 No. 10, pp. 32-7.
Pfleeger, S.H., Fenton, N. and Page, S. (1994), ``Evaluating software engineering standards'', IEEE Computer, Vol. 27 No. 9, pp. 71-9.
Ray, C. (2000), ``Logical positivism'', in Newton-Smith, W.H. (Ed.), A Companion to the Philosophy of Science, Blackwell, Oxford, pp. 243-56.
Rifkin, S. (2001), ``What makes measuring software so hard?'', IEEE Software, May/June, pp. 41-5.
Shere, K.D. and Versel, M.J. (1994), ``Extension of the SEI software capability maturity model to systems'', Proceedings of the 18th Annual International Computer Software and Applications Conference.
Siponen, M.T. (2001), ``An analysis of the recent IS security development approaches: descriptive and prescriptive implications'', in Dhillon, G. (Ed.), Information Security Management - Global Challenges in the Next Millennium, Idea Group, Macmillan Press, London.
Siponen, M.T. and Baskerville, R. (2001), ``A new paradigm for adding security into IS development methods'', in Eloff, J., Labuschagne, L., von Solms, R. and Dhillon, G. (Eds), Advances in Information Security Management & Small Systems Security, Kluwer Academic Publishers, Boston, MA.
Solms, R. (1998), ``Information security management (3): the code of practice for information security management (BS 7799)'', Information Management & Computer Security, Vol. 6 No. 5, pp. 224-5.
Solms, R. (1999), ``Information security management: why standards are important'', Information Management & Computer Security, Vol. 7 No. 1, pp. 50-8.
SSE-CMM (1998a), The Model, v2.0, available at: www.sse-cmm.org
SSE-CMM (1998b), The Appraisal Method, v2.0, available at: www.sse-cmm.org
Stacey, T.R. (1996), ``Information security program maturity grid'', Information Systems Security, Vol. 5 No. 2.
Truex, D.P., Baskerville, R. and Klein, H. (1999), ``Growing systems in emergent organizations'', Communications of the ACM, Vol. 42 No. 8, pp. 117-23.
Truex, D., Baskerville, R. and Travis, J. (2000), ``Amethodical systems development: the deferred meaning of systems development methods'', Accounting, Management and Information Technology, Vol. 10, pp. 53-79.
Voas, J. (1999), ``Software quality's eight greatest myths'', IEEE Software, Vol. 16 No. 5, pp. 118-20.
Voas, J., Ghosh, A., McGraw, G., Charron, F. and Miller, K. (1996), ``Defining an adaptive software security metrics from a dynamic software failure tolerance measure'', Proceedings of the 11th Annual Conference on Computer Assurance, Systems Integrity, Software Safety, Process Security (COMPASS).
Walsham, G. (1996), ``The emergence of interpretivism in IS research'', Information Systems Research, Vol. 6, pp. 376-94.
Further reading
Bartol, K.M. and Martin, D.C. (1994), Management, international ed., McGraw-Hill, Maidenhead.
Blakley, B. and Kienzle, D.M. (1997), ``Some weaknesses of the TCB model'', Proceedings of the 1997 IEEE Symposium on Security and Privacy, IEEE Computer Society Press, Piscataway, NJ.
Blanco, M., Gutierrez, P. and Satriani, G. (2001), ``SPI patterns: learning from experience'', IEEE Software, May/June, pp. 28-35.
Born, M. (1949), Natural Philosophy of Cause and Chance, Oxford University Press, Oxford.
Curtis, B. (2000), ``The global pursuit of process maturity'', IEEE Software, Vol. 17 No. 4, pp. 76-8.
Department of Trade and Industry (1993), Code of Practice for Information Security Management, DISC PD003, British Standards Institution, London.
Department of Trade and Industry (1999), Code of Practice for Information Security Management, BS 7799-1, Department of Trade and Industry, London.
El Emam, K. and Madhavji, N.H. (1996), ``Does organizational maturity improve quality?'', IEEE Software, pp. 109-10.
Ferris, J.M. (1994), ``Using standards as a security policy tool'', ACM Standard View, Vol. 2 No. 2, pp. 73-7.
Kajko-Mattsson, M. (2001), ``Motivating the corrective maintenance maturity model (CM3)'', Proceedings of the Seventh IEEE International Conference on Engineering of Complex Computer Systems.
Nawrocki, J., Walter, B. and Wojciechowski, A. (2001), ``Toward maturity model for extreme programming'', Proceedings of the 27th Euromicro Conference.
Pfleeger, S.H. and Rombach, H.D. (1994), ``Measurement based process improvement'', IEEE Software, Vol. 11 No. 4, pp. 9-11.
Solms, R. (1996), ``Information security management: the second generation'', Computers & Security, Vol. 15 No. 4, pp. 281-8.
Solms, R. (1997), ``Can security baseline replace risk analysis?'', Proceedings of the IFIP TC11 13th International Conference on Information Security (SEC'97), 14-16 May, Copenhagen.
Solms, R. and Van Der Haar, H. (2000), ``From trusted information security controls to a trusted information security environment'', 16th Annual Working Conference on Information Security, Beijing.
Swanson, M. (2001), Security Self-Assessment Guide for Information Technology Systems, NIST Special Publication 800-26, NIST, Gaithersburg, MD.