Enterprise Architecture: A Framework Supporting System Quality Analysis

Per Närman, Pontus Johnson, Lars Nordström
Department of Industrial Information and Control Systems
Royal Institute of Technology (KTH)
{pern, pj101, larsn}@ics.kth.se

Abstract

Enterprise Architecture is a model-based approach to business-oriented IT management. To promote good IT decision making, an enterprise architecture framework needs to explicate what kind of analyses it supports. Since creating enterprise architecture models is expensive and without intrinsic value, it is desirable to only create enterprise architecture models based on metamodels that support well-defined analyses. This paper suggests a metamodel derived specifically with a set of theory-based system quality analyses in mind. The ISO 9126-based theory behind the system quality analysis is introduced in the shape of an extended influence diagram. Finally, an example illustrates that our theory-based metamodel does support system quality analysis.
1. Introduction
Enterprise architecture is an approach to enterprise information systems management that relies on models of the information systems and their environment. The main idea is an old one: instead of developing the enterprise information system by trial and error, a set of models is used to predict the behaviour and effects of changes to the system. The enterprise architecture models thus allow reasoning about the consequences of various scenarios and thereby support decision making. A number of enterprise architecture initiatives have been proposed, including The Open Group Architecture Framework (TOGAF) [2], the Zachman Framework [3], GERAM [4], CIMOSA [5], PERA [6], DoDAF [1], Intelligrid [7] and more.

In order to predict whether enterprise architecture scenario A or B is preferable, three things are needed. Firstly, models of the two scenarios need to be created. Secondly, it is necessary to define what is desirable, i.e. the goals. If two scenarios offer the same functionality, do we want the systems to provide high service availability, or is system modifiability more important? Is high system performance more important than information security or maintainability? Thirdly, we need to understand the causal chains from scenario choice to goals. Scenario A features hardware redundancy that positively affects system reliability, which in turn improves service availability. However, scenario B is built on a loosely coupled technology, which promotes the modifiability of the system.

In order to perform these kinds of analyses, the enterprise architecture models need to contain the proper information. In the above example, where the decision maker is interested in service availability and system modifiability, the models need to answer questions regarding, for instance, hardware redundancy and component coupling. The kind of information contained in a model is given by its metamodel, so it is important that enterprise architecture metamodels are properly designed. In order to determine whether a metamodel is amenable to the analysis of a certain quality attribute, e.g. information security or interoperability, a structured account of that analysis is helpful. We will use a notation called extended influence diagrams (EIDs) to formalize the analyses of various quality attributes [13]. Figure 1 depicts the relation between an enterprise architecture scenario, modelled using a metamodel, the analysis of the scenario, the formal specification of the analysis through an extended influence diagram, and finally the output: the quality to be analyzed.
1.1. Purpose and scope

The main contribution of this paper is a metamodel that supports the creation of enterprise architecture models amenable to a range of quality attribute analyses. Also introduced are formalizations of such analyses using extended influence diagrams. The influence diagrams are based on the ISO 9126 standard for software quality measurements [12] and further detailed with information gathered from the system quality analysis literature.
Figure 1: The relation between metamodels, enterprise architecture scenarios, analysis, formal specification of analyses and the result of the analysis [13].
1.2. Outline

The remainder of this paper is organized as follows: extended influence diagrams are introduced in Section 2. Section 3 presents excerpts of the framework for system quality analysis in the shape of extended influence diagrams. Section 4 evaluates the usefulness of a number of common enterprise architecture metamodels. Section 5 proceeds to detail the content of, and the construction process for, a metamodel that supports system quality analysis. The applicability of the metamodel is demonstrated in Section 6. Section 7 concludes the paper.
2. Extended influence diagrams

Extended influence diagrams are graphic representations of decision problems coupled with a probabilistic inference engine. These diagrams may be used to formally specify enterprise architecture analysis [13]. The diagrams are an extension of influence diagrams [15][16], which in turn are an enhancement of Bayesian networks [17][18]. In extended influence diagrams, random variables associated with chance nodes may assume values, or states, from a finite domain (cf. Figure 2). A variable could for example be “system coupling” or “system maintainability”. These variables are connected with each other through causal or definitional arcs. Causal arcs capture relations of the real world, such as “lower system coupling increases system maintainability”. With the help of a conditional probability matrix for a certain variable A and knowledge of the current states of the causally influencing variables B and C, it is possible to infer the likelihood of node A assuming any of its states.

A common cause of confusion in system and software analysis is the lack of well-defined concepts. For instance, some authors define the term information security in terms of confidentiality, integrity and availability [19], whereas others add the concepts of non-repudiation and accounting to the definition [20]. Definitional arcs capture the differences in language, allowing stipulative definitions [10] of nodes in terms of other nodes, thus reducing a common source of ambiguity. Utility nodes represent the goals of the decision situation, for example “increased system reliability”. Decision nodes denote the decision alternatives, typically the choice between scenarios. Extended influence diagrams support probabilistic inference in the same manner as Bayesian networks do; given the value of one node, the values of related nodes can be calculated. For more comprehensive treatments of influence diagrams and extended influence diagrams, see [13], [14], [15], [16], [17], [18] and [21].

[Figure 2 shows the extended influence diagram syntax: the node types (decision, chance and utility nodes) and the relationship types (causal, informational and definitional relations), together with an example diagram relating Scenario Selection, System Size, System Complexity and Maintainability.]

Example conditional probability matrix, P(System Size | Scenario Selection):

                Scenario X    Scenario Y
   Small           0.1           0.9
   Medium          0.8           0.1
   Large           0.1           0.0
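To make the inference mechanism concrete, the following minimal Python sketch propagates a scenario choice through the conditional probability matrix above to an expected utility. The probability values are taken from the example matrix; the utility weights are invented for illustration and are not prescribed by the paper.

```python
# Minimal sketch of the inference step described above: a scenario
# choice is propagated through the conditional probability matrix to a
# distribution over System Size, and on to an expected utility.

# P(System Size | Scenario Selection), from the matrix above
cpm = {
    "Scenario X": {"Small": 0.1, "Medium": 0.8, "Large": 0.1},
    "Scenario Y": {"Small": 0.9, "Medium": 0.1, "Large": 0.0},
}

# Hypothetical utilities per state (smaller systems assumed easier to
# maintain, hence higher utility); not taken from the paper.
utility = {"Small": 1.0, "Medium": 0.5, "Large": 0.0}

def expected_utility(scenario: str) -> float:
    """Expected utility of one decision alternative."""
    return sum(p * utility[state] for state, p in cpm[scenario].items())

for scenario in cpm:
    print(scenario, expected_utility(scenario))
# Scenario X -> 0.50, Scenario Y -> 0.95 under these weights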
Figure 2. An extended influence diagram and a simple example. With a chosen scenario in the decision node, the chance nodes will assume different values, thereby influencing the utility node [22]. Extended influence diagrams may be used to capture definitions and causal determinants of system quality attributes such as maintainability, performance, and interoperability. These structures may then be employed for quantitative analyses of enterprise architecture scenarios, given that the enterprise architecture metamodels contain the required information.
3. A framework for system quality analysis
This section presents an extended influence diagram that captures theory from the field of information system quality analysis.
3.1. ISO 9126 software quality metrics

Several frameworks for evaluation of the quality of information systems have been proposed; see for instance McCall [24], Boehm [23] or FURPS [25]. One of the most commonly cited frameworks is the ISO 9126-1 standard for software quality measurements [12]. ISO 9126-1 defines software quality in terms of six quality characteristics: functionality, reliability, usability, efficiency, maintainability and portability. The ISO 9126 series also contains a number of technical reports, numbered 9126-2 through 9126-4 [26][27][28]. These technical reports lack the status of a standard and contain suggestions on metrics associated with the abovementioned quality characteristics. Despite the admirable breadth of these metrics, they are intended for testing in conjunction with software development rather than for enterprise architecture analysis, and are as such not directly applicable to our system quality analysis framework.
Figure 3: The definition of system quality used in the framework presented.

For the purpose of the system quality analysis framework presented herein, we have used an adapted version of the ISO 9126-1 framework, cf. Figure 3. The proposed framework places significant emphasis on the sub-characteristics of functionality: suitability, accuracy, security and interoperability. Also, the portability characteristic was deemed less interesting from an enterprise architecture perspective and was excluded from the extended influence diagram.
3.2. The quality attribute analysis framework

The metrics for system quality analysis presented herein and documented in extended influence diagrams were found in literature from fields such as IT security, performance engineering, and data quality analysis. The sources were scanned for formulations specifying a causal or definitional relation related to one of the sub-characteristics presented in Figure 3 above. The method used to establish the metrics was a semi-rigid variant of the text-interpretation approach to extended influence diagram generation presented in [14]. In the following sections the influence diagrams for the respective system qualities are presented.

3.2.1 Maintainability

System maintainability is defined as the ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment [30]. Researchers and practitioners have proposed several frameworks for the analysis of software maintainability, such as [31][32][33][12][34] and [35]. The maintainability framework presented in this paper is largely influenced by the work of Oman et al. [31].

System maintainability is affected by the competence of the system’s development and maintenance staff, the maturity of the development and maintenance processes, the quality of the system’s supporting documentation, the system’s architectural quality, the quality of the system’s hardware and software platform, and finally the system’s source code quality, cf. Figure 4. Since the values of these variables are difficult to observe directly, they have been further broken down into more easily measurable attributes. For example, the competence of the system’s development and maintenance staff is measured by the staff’s experience with development and maintenance work, the staff’s expertise in the programming languages used within the system, and the staff’s expertise on the system they are maintaining. Further information on the framework can be found in [29] and [22].
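As an illustration of how such hard-to-observe variables can be rolled up from their measurable children, consider the sketch below. The Low/Medium/High scale, the numeric mapping and the equal weights are assumptions made for the example; the framework itself performs this aggregation with conditional probability matrices, as described in Section 2.

```python
# Illustrative roll-up of measurable child attributes into the
# "staff competence" node of the maintainability diagram. The state
# scale, numeric mapping and equal weights are assumptions; the real
# framework uses conditional probability matrices instead.

STATE_VALUE = {"Low": 0.0, "Medium": 0.5, "High": 1.0}

def aggregate(children: dict, weights: dict) -> float:
    """Weighted average of discretized child states, in [0, 1]."""
    return sum(weights[name] * STATE_VALUE[state]
               for name, state in children.items())

staff_competence_children = {
    "development_experience": "High",
    "language_expertise": "Medium",
    "system_expertise": "Medium",
}
weights = {name: 1 / 3 for name in staff_competence_children}

print(round(aggregate(staff_competence_children, weights), 2))  # 0.67
```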
Figure 4: An extended influence diagram capturing a theory for maintainability analysis [22].

3.2.2 Security

Information security is here defined in accordance with Stoneburner’s view [36], as a measure of the security with respect to threats to the confidentiality, integrity, availability, accountability and assurance of an enterprise’s stored information. This is also the main source for the generation of the extended influence diagram for security. Additionally, the ISO/IEC 17799 standard [37] provided some supplementary measures. The categorization of security services is an adapted version of [38], where security services are grouped into three major categories: preventive, detective and responsive. Apart from that, some security services classified as “supporting” by Stoneburner [36] were added, and a section on network security, dealing for instance with the quality of firewalls, was inserted. Figure 5 presents the extended influence diagram displaying technical mechanisms which, if employed correctly, can counter information security threats.

The rightmost branch of the extended influence diagram of Figure 5 contains metrics dealing with system architecture. Security services may be of little use if the overall security architecture is subpar. If, for instance, the system does not clearly distinguish between users involved in distinctly separate work processes with different access rights, the confidentiality of information can be compromised. Interconnections with other information systems through local area networks and the internet provide opportunities for unauthorized users to access information they are not privy to. A remedy for this might be the use of firewalls or a strict compartmentalization of the intranet to limit the amount of exposed information.

Supporting security services are the foundation on which the other services rest. Some examples are the protection of communications through encryption, as well as security management services that allow security administrators to monitor system behavior. Preventive security services prevent security breaches. Examples are identity management services that authenticate and authorize users according to their privileges. Preventive security services also mitigate security threats such as viruses, trojans and spyware by the use of malicious and mobile code detection software. Detective security services are used to determine whether the protected information has been tampered with in any way. An auditing service provides functionality to go through registries such as log files to spot hostile intrusions. Intrusion detection services use statistical signatures of intruders to tell whether a user attempting to use the system is hostile or not. Finally, responsive security services can be used to mitigate an intrusion post factum. An example is intrusion containment services that may isolate, for instance, an infected laptop that was connected to the system.

Unfortunately, space limitations force us to curtail our account of the remaining system quality influence diagrams. See [29] for a more detailed description.
Figure 5: An extended influence diagram for information system security analysis.

3.2.3 Reliability

The reliability of a system indicates the proportion of time a system is able to deliver its services to its users. Some factors having an impact on system reliability according to [39][40][41][42] and [43] are the currency of data backups, the level of hardware redundancy, and the quality of the IT organization.

3.2.4 Usability

The usability of a system reflects how easy it is for a user to interact with the system and perform his or her tasks in it. ISO/IEC 9126-1 [12] defines usability as the understandability, learnability, operability and attractiveness of the system. Understandability is the ease with which a user determines that a system is suitable for a specific work task, and the ease with which the system may be used to solve that task. Learnability refers to the ease with which a new user learns to use the system. Further, a system has high operability if it is designed so that it actually can solve the users’ designated tasks. Attractiveness measures whether users find the system’s user interface appealing or not. Based on the work of Nielsen [44][45], the usability extended influence diagram specifies that important factors for usability are the quality of user interface design and the quality of user documentation and help.
3.2.5 Efficiency

Adopting the view of Smith & Williams [46], efficiency can be defined as the degree to which a software system can meet its objective in terms of scalability and responsiveness, where responsiveness is further broken down into response time and throughput. Some system properties having a positive causal effect on efficiency are the system hardware capabilities, the quality of the system design and the overall system resource management. The extended influence diagrams are primarily designed to accommodate static analysis and cannot be used for the dynamic simulations often conducted in the field of performance and efficiency assessment. Nevertheless, if need be, extended influence diagrams may be employed in conjunction with dynamic simulations by importing the simulation output as input to the static analysis.

3.2.6 Interoperability

Interoperability is the ability of two or more systems or components to exchange information and to use that information [30]. Some factors that have an impact on interoperability are the degree of standardization of interfaces and the quality of the system architecture. The interoperability extended influence diagram is founded on theories presented in [47], [48] and [49].
3.2.7 Suitability

The suitability of an information system is the degree to which the functionality of the system supports the system’s intended work tasks. A suitable system offers the functions specified in the requirements specification and meets user expectations with regard to functionality. Suitability can be measured by comparing the actual functions of a system with a functional requirements specification [26][50].

3.2.8 Accuracy

The accuracy of an information system is measured by the degree to which it, given correct input data, produces output data that is both accurate and precise. Accuracy is determined by how close the output value is to the expected or “real” value. Precision, on the other hand, refers to how repeatable the output is, i.e. whether the same input data gives the same output data. Some factors having an impact on accuracy are the format of data attributes and the existence of consistency checks in databases [51].
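The distinction between accuracy and precision can be made concrete with a few lines of code; the output samples and the reference value below are invented for illustration.

```python
# Illustrative accuracy vs. precision measurement over repeated runs
# with the same input. The samples and the reference value are invented.
from statistics import mean, pstdev

expected = 100.0
outputs = [99.8, 100.1, 99.9, 100.2, 100.0]  # repeated runs, same input

accuracy_error = abs(mean(outputs) - expected)  # closeness to the "real" value
precision_spread = pstdev(outputs)              # repeatability of the output

print(f"accuracy error: {accuracy_error:.3f}")      # 0.000
print(f"precision spread: {precision_spread:.3f}")  # 0.141
```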
Figure 6: The properties found in an extended influence diagram determine what classes and attributes should be present in an enterprise architecture metamodel [13].
3.3. Extended influence diagrams and metamodels
The requirement that enterprise architecture models support enterprise architecture analysis implies a specific requirement on enterprise architecture metamodels: all classes and attributes that are required for a complete analysis, as specified in an extended influence diagram, must be found in the enterprise architecture metamodel in order for the corresponding model to be amenable to analysis. See Figure 6.
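This coverage requirement can be checked mechanically. The sketch below, with illustrative class, attribute and node names, verifies that every variable of an extended influence diagram has a home in the metamodel.

```python
# Sketch of the coverage check implied by Figure 6: every EID variable
# must map to a class attribute in the metamodel. The class, attribute
# and node names are illustrative.

metamodel = {
    "InformationSystem": {"age", "size", "stability"},
    "SourceCode": {"degree_of_commenting"},
    "Staff": {"competence"},
}

# (EID node, owning class, attribute) triples for one analysis
eid_nodes = [
    ("system age", "InformationSystem", "age"),
    ("degree of commenting", "SourceCode", "degree_of_commenting"),
    ("staff competence", "Staff", "competence"),
]

missing = [node for node, cls, attr in eid_nodes
           if attr not in metamodel.get(cls, set())]
if missing:
    raise ValueError(f"metamodel cannot support the analysis: {missing}")
print("metamodel supports the analysis")
```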
4. Enterprise architecture frameworks for system quality analysis

A substantial number of enterprise architecture frameworks have been proposed in recent years, including the Zachman Framework [3], the Department of Defense Architecture Framework (DoDAF) [1], the Open Group Architecture Framework (TOGAF) [2], the Federal Enterprise Architecture (FEA) [9], Intelligrid [7], the Generalized Enterprise Reference Architecture and Methodology (GERAM) [4], Architektur integrierter Informationssysteme (ARIS) [8], the Metis Enterprise Architecture Framework (MEAF) [52], and more. When considering the suitability of the metamodels related to these frameworks for the enterprise architecture analyses considered in the preceding sections, we have found significant difficulties.

Firstly, a number of the metamodels are not detailed enough to provide the information required for the analyses. We are interested in information such as the age of a system, which would typically be represented as an attribute of an entity in a metamodel. Many metamodels, including those of the Zachman Framework, TOGAF and GERAM, do not systematically propose attributes, thereby underspecifying their metamodels with respect to the analyses proposed in the previous section. Secondly, the frameworks that do specify attributes, e.g. DoDAF, FEA and MEAF, contain few of the specific attributes required for the analyses described in Section 3. Finally, and perhaps most importantly, many of the frameworks do not contain the classes that would be required.

Within the software architecture community, various analysis methods and tools for software quality exist, including the Architecture Tradeoff Analysis Method (ATAM) [54], Abd-Allah and Gacek [55], Wright [57] and the Chiron-2 Software Architecture Description and Evolution Language (C2SADEL) [56]. None of these are, however, applicable in the enterprise architecture domain.
Figure 7: The PERDAF enterprise architecture metamodel with its classes and their relations.
5. The PERDAF metamodel
In this section we present the Purpose-oriented EnterpRise system Decision-making Architecture Framework (PERDAF), a metamodel constructed to satisfy the requirements of the preceding section, containing all of the entities and attributes necessary to conduct analyses of the various system qualities identified in Section 3.

In Section 3, an extended influence diagram was developed for each system quality of the ISO 9126 framework. The first step of the PERDAF metamodel generation process consisted of creating metamodels for each of these extended influence diagrams. Each chance node in an extended influence diagram denotes a variable. Such variables may be viewed as attributes of a class in a metamodel. As an example, the chance node “serviceability” is implicitly associated with an “information system” class. The metamodel development process entailed going through all nodes of the extended influence diagrams and, for each node, identifying the class to which it belongs. In addition to this, the relations between the classes were specified. The quality-specific metamodels were then consolidated into one comprehensive metamodel for system quality analysis.
Figure 7 presents an overview of the metamodel, and its classes are briefly presented in the next subsection.
5.1. Classes of the PERDAF metamodel

The information system class refers to a collection of computer-based system components with a common purpose within an organization. The building blocks of information systems are called system components, which are well-defined and delimited system entities, e.g. a database. Information systems and components are generally accessed from other components or systems through interfaces. These are either internal (between components within the system) or external (between information systems). An information system has a user interface which allows users to interact with the information system through input and output operations. Standards and agreements are efficient ways of making systems and components interoperable and may be used to guide their design as well as that of their interfaces. Components implement and provide business functions. The business functions provide direct value to the business and are described in the functional requirements specification. Components may also store data entities that are made up of data attributes.
The IT organization manages the IT systems. Management basically consists of two processes. The software development process covers the first parts of an IT system’s life-cycle, including specification, design, implementation and validation. After deployment, the software operation and maintenance process manages the day-to-day operation of the system, including upgrades and bug fixes, as well as long-term strategic planning and evolution. The IT organization is staffed with personnel that may engage in both management processes.

All components require a platform to execute. Platforms do not implement business-critical functionality, but are rather geared towards supporting the smooth execution of systems and components. Platforms include a physical platform providing basic utilities such as air conditioning, cooling water and electricity. The hardware platform relies on the physical platform for its operation and consists of processing units, memory units, and input/output devices. The platform also includes networks, providing communication infrastructure such as Ethernet/IP/TCP.
The software platform implements services and functions primarily aimed at supporting the executing system components, e.g. operating systems. Of special interest in the system quality analysis are the security services, such as antivirus applications and intrusion detection systems, and the system administration services, which monitor system and network operations with respect to, for example, resource usage. Finally, in order to facilitate integration of heterogeneous data and business functions across an enterprise information system, there is a need for integration services.
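Rendered as code, a fragment of the metamodel could look like the sketch below. The class and attribute selection is a small, illustrative subset of what Figures 7 and 8 contain, not the full PERDAF metamodel.

```python
# Illustrative fragment of the PERDAF metamodel expressed as Python
# dataclasses; only a small subset of classes, attributes and
# relations is shown.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SystemComponent:
    name: str

@dataclass
class Platform:
    operating_system: str

@dataclass
class InformationSystem:
    name: str
    age_years: float  # attribute used by the maintainability analysis
    components: List[SystemComponent] = field(default_factory=list)
    platform: Optional[Platform] = None

minimo = InformationSystem(
    "Minimo", age_years=8.0,
    components=[SystemComponent("asset database")],
    platform=Platform("Linux"),
)
```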
5.2. Attributes of the PERDAF metamodel

For the purpose of system quality analysis, the class structure of Figure 7 is by itself inadequate. In an enterprise architecture model, many important concepts are best captured as entity attributes. Figure 8 presents the attributes of a few example classes of the PERDAF metamodel.
Figure 8: All classes of the PERDAF metamodel contain attributes. Bold-text attributes match attributes found in the security and maintainability frameworks of Section 3 above.

The age of an information system, for instance, is of importance for its maintainability (according to the extended influence diagram of Section 3). Consequently, the information system class of our metamodel explicitly contains the attribute “age”. Analogously, the source code class contains the attribute “degree of commenting”, found as a node in the extended influence diagram. Another attribute related to maintainability is the “competence” of the IT organization staff. The metamodel addresses security issues as well, which is illustrated by the Security Service class containing attributes such as “quality of auditing”. Figure 8 also shows numerous other attributes used when assessing system quality attributes other than maintainability. An example is the system quality “Efficiency”, which is coupled with “Scalability”.
Figure 9: The best-of-breed scenario of the example. The attributes of the objects in the model are all quantitatively assessed and aggregated into a final system quality index.
6. Modeling and analysis using the PERDAF metamodel – an example
This section presents an example of a system quality analysis used as decision support for a Chief Information Officer at Green Meadows Distribution Inc., a large power distribution company. Green Meadows has initiated a business process reengineering program and defined a number of IT system requirements. A pre-study revealed that the current enterprise architecture was inadequate with respect to these requirements.
The company therefore considers either investing in a best-of-breed solution (cf. Figure 9) with asset management applications from the vendors Minimo, GeoNet and BAA, or implementing the same functionality in an asset management module of the same suite as their current Enterprise Resource Planning (ERP) system, PAS. It was decided to base the choice between the candidate target architecture scenarios on a formal evaluation using system quality analysis and the PERDAF metamodel.

To create the model, data was collected to assess the values of the attributes prescribed by the metamodel. For instance, to assess the maintainability of the information system “Minimo”, there was a need to determine the size, the age and the stability of the system. This information was extracted from reference installations at similar enterprises. All collected variable values were then translated into discrete states, such as “Low”, “Medium” or “High”, and used as input to the system quality analysis using Bayesian theory as described in Section 2. In the example, the conditional probability matrices were set linearly; although more complex probability distributions are commonly employed, they are not meaningful to describe in this article. To calculate the results, the Bayesian network analysis tool GeNIe was employed [53], see Figure 10. GeNIe allows a decision analyst to create large influence diagrams, to set conditional probability matrices and to perform the actual analysis using Bayesian theory.
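The translation of collected values into discrete states might look like the sketch below; the breakpoints are invented for the illustration, as the paper does not state them.

```python
# Sketch of discretizing a collected attribute value (system age in
# years) into the states used in the Bayesian analysis. The breakpoints
# are invented; the paper does not specify them.

def discretize_age(age_years: float) -> str:
    if age_years < 3:
        return "Young"
    if age_years < 8:
        return "Medium-aged"
    return "Old"

print(discretize_age(8.0))  # "Old"
```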
Figure 10: A screenshot from GeNIe showing how quantified measures of the attributes are aggregated into an overall measure.

The analysis revealed that although the interoperability of the ERP-based scenario was very good, the levels of usability, security and availability fell below those of the best-of-breed scenario. With respect to the other investigated properties, the scenarios showed similar performance. The aggregated system quality assessment yielded the result presented in Figure 11, favoring the best-of-breed scenario.

The collected data and the created models always contain a degree of uncertainty. The sources of information lack credibility, and oftentimes it is too expensive to collect enough data to dispense with all uncertainty. For instance, when estimating the age of the entire system, two system developers were interviewed. The more experienced developer claimed that the system was more than 10 years old. However, his junior colleague, whom the interviewer perceived to be less credible, claimed it to be merely 7 years old. These answers were interpreted as the system being Old with a 60 % probability and Medium-aged with a 40 % probability, where the difference in probability assignment can be attributed to the lower credibility of the younger developer. The uncertainty of the assessment is shown in Figure 11, where thin black bars indicate the range of values the result may assume.
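The 60/40 assignment can be read as a credibility-weighted combination of the two answers; the minimal sketch below reproduces it, with the credibility weights taken directly from the example.

```python
# Combining the two interview answers into a distribution over the
# "system age" node, weighting each answer by the credibility assigned
# to its source (0.6 for the senior developer, 0.4 for the junior).

answers = [
    ("Old", 0.6),          # senior: "more than 10 years old"
    ("Medium-aged", 0.4),  # junior: "merely 7 years old"
]

distribution = {}
for state, credibility in answers:
    distribution[state] = distribution.get(state, 0.0) + credibility

print(distribution)  # {'Old': 0.6, 'Medium-aged': 0.4}
```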
Figure 11: The comparison between the system quality analyses of the scenarios. The black I-bars indicate the uncertainty of the assessments.

7. Conclusion
This paper has presented the PERDAF enterprise architecture metamodel supporting enterprise system quality analysis. The metamodel consists of classes with accompanying attributes that can be used to create enterprise architecture models from which it is possible to extract precisely the information needed for quantitative system quality analysis. Furthermore, this paper has presented a system quality analysis framework in the form of extended influence diagrams. The use of the metamodel and the extended influence diagrams was demonstrated in an example.
8. References
[1] Department of Defense Architecture Framework Working Group, DoD Architecture Framework Version 1.0, Department of Defense, USA, 2004.
[2] The Open Group, The Open Group Architecture Framework, Version 8 Enterprise Edition, The Open Group, Reading, U.K., 2005, http://www.opengroup.org/togaf/
[3] J.A. Zachman, “A Framework for Information Systems Architecture”, IBM Systems Journal, vol. 26(3), 1987, pp. 454-470.
[4] IFAC-IFIP Task Force on Architectures for Enterprise Integration, GERAM: Generalized Enterprise Reference Architecture and Methodology, Version 1.6, IFAC-IFIP, 1999.
[5] K. Kosanke, “CIMOSA Overview and Status”, Computers in Industry, vol. 27(2), Elsevier, 1995, pp. 101-109.
[6] T.J. Williams, “The Purdue Enterprise Reference Architecture”, Computers in Industry, vol. 24(2-3), Elsevier, 1994, pp. 141-158.
[7] J. Hughes, The Integrated Energy and Communications Systems Architecture, Electric Power Research Institute (EPRI), Palo Alto, 2004.
[8] A.-W. Scheer, Business Process Engineering – Reference Models for Industrial Enterprises, 2nd Edition, Springer-Verlag, Heidelberg, Germany, 1994.
[9] Office of Management and Budget, FEA Consolidated Reference Model Document Version 2.1, OMB, USA, 2006, http://www.whitehouse.gov/omb/egov/a-1-fea.html, accessed May 2007.
[10] M. Scriven, “Definitions in Analytical Philosophy”, Philosophical Studies, vol. 5(3), Springer, Netherlands, 1954, pp. 36-40.
[11] G. Stoneburner, NIST Special Publication 800-33, Underlying Technical Models for Information Technology Security – Recommendations of the National Institute of Standards and Technology, National Institute of Standards and Technology, Gaithersburg, USA, December 2001.
[12] International Organization for Standardization, ISO/IEC 9126-1 International Standard – Software Engineering – Product Quality – Part 1: Quality Model, ISO/IEC, 2001.
[13] P. Johnson et al., “Enterprise Architecture Analysis with Extended Influence Diagrams”, Information Systems Frontiers, vol. 9(2), Springer, The Netherlands, to appear 2007.
[14] P. Johnson, R. Lagerström and P. Närman, “Extended Influence Diagram Generation”, Enterprise Interoperability II – New Challenges and Approaches, Springer, London, 2007, pp. 599-602.
[15] R. Shachter, “Evaluating Influence Diagrams”, Operations Research, vol. 34(6), 1986, pp. 871-882.
[16] R.A. Howard and J.E. Matheson, “Influence Diagrams”, Decision Analysis, vol. 2(3), 2005, pp. 127-143.
[17] R. Neapolitan, Learning Bayesian Networks, Prentice-Hall, Upper Saddle River, NJ, USA, 2003.
[18] F.V. Jensen, Bayesian Networks and Decision Graphs, Springer, New York, USA, 2001.
[19] P. Liu, P. Ammann and S. Jajodia, “Rewriting Histories: Recovering from Malicious Transactions”, Distributed and Parallel Databases, vol. 8(1), Springer, Netherlands, 2000.
[20] S. Poslad and M. Calisti, “Towards Improved Trust and Security in FIPA Agent Platforms”, Autonomous Agents 2000 Workshop on Deception, Fraud and Trust in Agent Societies, 2000.
[21] R. Shachter, “Probabilistic Inference and Influence Diagrams”, Operations Research, vol. 36(4), 1988.
[22] R. Lagerström, “Analyzing System Maintainability Using Enterprise Architecture Models”, Proceedings of the 2nd Workshop on Trends in Enterprise Architecture Research (TEAR’07), St. Gallen, Switzerland, June 2007, pp. 31-39.
[23] B. Boehm et al., Characteristics of Software Quality, North-Holland, Amsterdam, 1978.
[24] J.A. McCall, P.G. Richards and G.F. Walters, Factors in Software Quality, Vols. I-III, NTIS, Springfield, USA, 1978.
[25] R.B. Grady and D.L. Caswell, Software Metrics: Establishing a Company-wide Program, Prentice-Hall, Upper Saddle River, NJ, USA, 1987.
[26] International Organization for Standardization, ISO/IEC TR 9126-2 Technical Report – Software Engineering – Product Quality – Part 2: External Metrics, ISO/IEC, 2003.
[27] International Organization for Standardization, ISO/IEC TR 9126-3 Technical Report – Software Engineering – Product Quality – Part 3: Internal Metrics, ISO/IEC, 2003.
[28] International Organization for Standardization, ISO/IEC TR 9126-4 Technical Report – Software Engineering – Product Quality – Part 4: Quality in Use Metrics, ISO/IEC, 2004.
[29] P. Johnson and M. Ekstedt, Enterprise Architecture – Models and Analyses for Information System Decision Making, Studentlitteratur, Lund, Sweden, 2007.
[30] IEEE, IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990, IEEE, 1990.
[31] P. Oman, J. Hagemeister and D. Ash, A Definition and Taxonomy for Software Maintainability, SETL Report #91-08 TR, University of Idaho, Moscow, 1992.
[32] T. Chan, S.L. Chung and T.H. Ho, “An Economic Model to Estimate Software Rewriting and Replacement Times”, IEEE Transactions on Software Engineering, vol. 22(8), IEEE Computer Society, Los Alamitos, CA, USA, 1996.
[33] J.C. Granja-Alvarez and M.J. Barranco-García, “A Method for Estimating Maintenance Cost in a Software Project: A Case Study”, Software Maintenance: Research and Practice, vol. 9, John Wiley & Sons, 1997, pp. 161-175.
[34] K. Aggarwal, Y. Singh and J.K. Chhabra, “An Integrated Measure of Software Maintainability”, Proceedings of the Annual Reliability and Maintainability Symposium, IEEE, Seattle, WA, USA, 2002.
[35] M. Matinlassi and E. Niemelä, “The Impact of Maintainability on Component-based Software Systems”, Proceedings of the 29th EUROMICRO Conference: New Waves in System Architecture, IEEE, 2003.
[36] G. Stoneburner, Underlying Technical Models for Information Technology Security, National Institute of Standards and Technology, Gaithersburg, USA, 2001.
[37] ISO/IEC, ISO/IEC 17799 Information Technology – Security Techniques – Code of Practice for Information Security Management, ISO/IEC, Geneva, Switzerland, 2005.
[38] E. Johansson, Assessment of Enterprise Information Security, Doctoral Thesis, Industrial Information and Control Systems, KTH, Stockholm, Sweden, 2005.
[39] M. Xie et al., Computing Systems Reliability: Models and Analysis, Kluwer Academic Publishers, 2004.
[40] H. Hecht, Systems Reliability and Failure Prevention, Artech House, 2004.
[41] M.R. Lyu (ed.), Handbook of Software Reliability Engineering, IEEE Computer Society Press and McGraw-Hill, 1996.
[42] J. Gray and D.P. Siewiorek, “High-Availability Computer Systems”, IEEE Computer, vol. 24(9), 1991, pp. 39-48.
[43] M. Hawkins and F. Piedad, High Availability: Design, Techniques and Processes, Prentice Hall, Upper Saddle River, NJ, USA, 2001.
[44] J. Nielsen, Usability Engineering, Academic Press, San Diego, CA, USA, 1993.
[45] J. Nielsen, Ten Usability Heuristics, www.useit.com/papers/heuristic/heuristic_list.html, accessed March 2007.
[46] C.U. Smith and L.G. Williams, Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software, Pearson Education, Indianapolis, IN, USA, 2001.
[47] M. Kasunic and W. Anderson, Measuring Systems Interoperability: Challenges and Opportunities, Technical Note CMU/SEI-2004-TN-003, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, 2004.
[48] L. Brownsword, Current Perspectives on Interoperability, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, 2004.
[49] P. Johnson, Enterprise Software System Integration: An Architectural Perspective, Doctoral Thesis, Industrial Information and Control Systems, KTH, Stockholm, 2002.
[50] I. Sommerville and P. Sawyer, Requirements Engineering, Wiley, Chichester, England, 2004.
[51] T. Redman, Data Quality for the Information Age, Artech House, Norwood, MA, USA, 1996.
[52] Troux Technologies, Metis Architect – Datasheet, http://www.troux.com/products/metis_architect/, accessed May 2007.
[53] Decision Systems Laboratory, About GeNIe and SMILE, University of Pittsburgh, http://genie.sis.pitt.edu/about.html, accessed May 2007.
[54] P. Clements, R. Kazman and M. Klein, Evaluating Software Architectures: Methods and Case Studies, Addison-Wesley, 2001.
[55] C. Gacek, Detecting Architectural Mismatch During System Composition, PhD Thesis, University of Southern California, 1998.
[56] N. Medvidovic, D. Rosenblum and R. Taylor, “A Language and Environment for Architecture-Based Software Development and Evolution”, Proceedings of the 21st International Conference on Software Engineering, 1999.
[57] R. Allen, R. Douence and D. Garlan, “Specifying and Analyzing Dynamic Software Architectures”, Proceedings of the 1998 Conference on Fundamental Approaches to Software Engineering, 1998.