Developing Taxonomy and Model for Security Centric Supply Chain Management

Xiangyang Li*, Charu Chandra
Department of Industrial and Manufacturing Systems Engineering, University of Michigan – Dearborn, 4901 Evergreen Road, MI 48128, USA
Tel: +1-313-5836416  Fax: +1-313-5933692
Email: [email protected], [email protected]
*Corresponding Author

Jiun-Yan Shiau
Department of Industrial Engineering, Chung Yuan Christian University, Chung-Li, 32023, Taiwan
Email: [email protected]

Abstract: Security management of modern supply systems raises challenges to many aspects of supply chain management (SCM) research, demanding an integrated and holistic solution framework. In this article we concentrate on two fundamental instruments that support the pursuit of a security centric supply chain management (SecSCM) mission. First, a security taxonomy within the SCM context provides a comprehensive knowledge map of relevant artifacts and issues, enabling players in this multidisciplinary field to communicate and collaborate; ultimately it supplies the building blocks for the qualitative modeling that security management requires. Second, security management in SCM needs practical computation and analysis capabilities, and therefore a migration from qualitative models to quantitative models. We use dependency modeling as the bridging facility and probabilistic modeling as the inference engine to deal with the complexity and uncertainty that dominate SecSCM. In addition, we explore this proposition within an essential application, healthcare supply chain security management. Recognizing the enormous challenge presented by this new research mission, we emphasize identifying and examining generic issues and potential solutions rather than particular problems or algorithms with limited usage and significance.

Keywords: Supply chain and network (SCN); security centric supply chain management (SecSCM); design/management for security (DMfSec); security taxonomy; probabilistic modeling.

Biographical notes: Dr. Xiangyang Li is an Assistant Professor in the Department of Industrial and Manufacturing Systems Engineering of the University of Michigan – Dearborn. He received a Ph.D. degree with research in information security from Arizona State University and an M.S. degree in systems simulation from the Chinese Academy of Aerospace Administration. He is a member of IEEE, the Association for Computing Machinery, and the Chinese Association for Systems Simulation, and an Academic Advocate to the Information Systems Audit and Control Association (ISACA). Dr. Li has published extensively in peer-reviewed journals and conferences. His research
interests include quality and security of enterprise and information systems, knowledge discovery and engineering, human machine studies, and system modeling and simulation.

Dr. Charu Chandra is an Associate Professor in the Department of Industrial and Manufacturing Systems Engineering at the University of Michigan – Dearborn. Prior to that he was a Postdoctoral Fellow at Los Alamos National Laboratory, Los Alamos, and at the University of Minnesota. He conducts research in supply chain management and enterprise integration issues in large complex systems. Specifically, his research focuses on studying complex systems with the aim of developing cooperative models to represent coordination and integration in an enterprise. He has published several papers and book chapters in leading research publications in the areas of supply chain management, enterprise modeling, inventory management, and group technology. His Ph.D. degree is in Industrial Engineering and Operations Research from Arizona State University.

Dr. Jiun-Yan Shiau is an Assistant Professor in the Department of Industrial Engineering of Chung Yuan Christian University in Taiwan. He received an MS and a PhD in industrial engineering from Arizona State University. Dr. Shiau has published more than 20 papers in peer-reviewed journals and conferences. His research interests include CM, PDM, new product development, software engineering, and distributed problem solving.
1 Introduction

Modern supply chains and networks face more and more security threats at increasing frequency. These threats take the form of natural disasters such as fire, flood, earthquake, and other disastrous weather conditions, or man-made events such as product quality issues, labor shortages, market fluctuations, and epidemics. Terrorist activities have proven to be an ever-present concern for national infrastructures and supply systems after 9/11, while information security gains more and more attention, as shown by regulations such as the Sarbanes-Oxley Act (PCAOB, 2002) and the Health Insurance Portability and Accountability Act (HIPAA) (US Congress, 1996). Threats can also be categorized as external or internal according to the enterprise boundary in consideration, or as controllable or observable based on their interaction features. Moreover, threats usually emerge in complicated or combined forms that may blur the above classifications.

With security threats and concerns constantly emerging, handling security risks becomes the normal state of daily operations. These security threats, if not carefully contained, can cause significant losses of profit and business opportunity that modern supply systems can no longer bear (Hendricks and Singhal, 2003). The variety of threats and security concerns has become too great to be treated as a trivial or peripheral task by supply chain management (SCM). It is time to systematically examine relevant studies and to push forward a rigorous research agenda for an integrated security management paradigm. Motivated by these observations, we have proposed the establishment of the security centric supply chain management (SecSCM) paradigm and the Design/Management for Security (DMfSec) theme, which focus on security management requirements and methodologies in SCM.
This article furthers the discussion with two key topics: the taxonomy and the modeling/analysis. It therefore serves as a bridge from the identified vision toward a SecSCM discipline to the various methodological issues and practical tasks to be pursued in the future. We first provide the context within which relevant terminology is used in this article.

A supply chain or network (SCN) spans the entire product life-cycle and involves a confluence of systems belonging to the members of the extended enterprise, i.e., supplier, supplier's supplier, customer, and customer's customer. It therefore extends beyond the narrow scope of logistics systems to encompass a comprehensive system-of-systems view of the enterprise. In this definition it is equivalent to an enterprise system.

Security Management (SecM) for this complex enterprise is more than mere physical protection of its assets. We take a comprehensive view of security, entailing events likely to adversely impact supply chain operations and leading to potential legal liabilities and loss of business. Therefore, we do not focus only on risks traditionally related to time, cost and quality issues of product and process, although in a broader sense we do not need to distinguish these issues from security. We consider security centric supply chain management (SecSCM) as operational decision making with a view to identifying and assessing the presence of security risks in a supply chain and to mitigating their impact on it through the design, deployment and management life-cycle of this system.

Figure 1 The research topics to be investigated and the proposed methodologies (supply chain management and information and system security converging in security centric supply chain management, supported by the security management taxonomy and by security modeling/analysis through a referential dependency model and hybrid probabilistic model analysis)
As in Figure 1, we start with a discussion of a comprehensive security management taxonomy. History has shown that security considerations entered the matrices of system design and management only after unbearable threats had emerged repeatedly; the process was not vision-inspired but threat-driven. We need to plan ahead in supply chain security management and avoid becoming victims again. Building a SecSCM taxonomy is critical and an immediate necessity. With it we gain a common working terminology for all potential players in an area destined to be a multidisciplinary playground. Moreover, such a referential taxonomy can function as the starting point for qualitative models and eventually for practical quantitative analysis.

The requirements on modeling and analysis, and the techniques capable of meeting them, are examined thereafter. Complexity is inherently dominant in modern SCN settings, shown in three aspects: (1) huge system mass, represented by geographical size and the number of
participants; (2) complicated dependency, created by coupling and correlation among these participants at their locations; and (3) adaptive dynamics, with emerging states in terms of the participants, configurations, and structure, constantly evolving in both temporal and spatial dimensions. Facing these challenges, we concentrate on solutions based on generic dependency models and hybrid probabilistic inference/simulation. Finally, we need to integrate these solutions into the SecSCM framework. Throughout, we follow a systematic technical roadmap to navigate the labyrinth of SecSCM, with an emphasis on generic constraints, requirements and methods in this vast field. At the same time, we carefully examine what we propose here with a case study on healthcare delivery, connecting preliminary models within this application context to two essential security concerns: the emergency management of operating rooms and the security assurance of patient information.
2 Metaphor in Information Security Management

Information is central to modern SCNs, raises various security concerns, and needs to be taken care of in SCM. A metaphor from the relatively mature field of information security can be borrowed here for comparison. Such a metaphor helps us understand the challenges and tasks in SecSCM.
2.1 The Story of Information Security

Information security has long moved past the original narrow view focusing on just password protection, security policy, and encryption/decryption. Many books cover the extensive topics in information security and warfare (Bishop, 2005; Denning, 2004). Disciplines contributing a variety of methodologies and practices to information security include computer science and engineering, mathematics, social science, human computer interaction, management information systems, electrical engineering, industrial engineering, systems engineering, etc. A comprehensive but still evolving reference notation and classification system is now in place to guide studies among information security researchers. The components of information security can be described in several dimensions.

• Constraints and Requirements – System assets, vulnerabilities, and threats are the genuine constraints that literally make information security an issue. Assets are the physical hardware or software that supports the normal functionality of an information infrastructure. Vulnerabilities describe the weaknesses in the security of an asset. Threats are potential attacks, and are what we protect the assets from. In addition, security measures are defined and categorized to describe the classes of security problems, such as confidentiality, integrity, and availability.

• Instruments – The security constraints require mechanisms, services, and policies, which can either eliminate threats or fix vulnerabilities. Security services apply security mechanisms to improve the security of data processing and information transfer. Generic security services include authentication, authorization, non-repudiation, privacy, intrusion-tolerance, etc., while security mechanisms include specific techniques such as encryption, digital signatures, traffic padding, routing control, access control, firewalls, redundancy, etc. Security policies define high-level assumptions and rules, such as the Bell-LaPadula confidentiality model in military systems and the Clark-Wilson integrity model in commercial systems.

• Tasks and Operations – Based on security constraints and requirements, various decisions on tasks and operations can be determined, regarding security tools, standards, and legal regulations that support specific business functions.
Information assurance largely consists of a set of essential tasks including intrusion prevention, detection and diagnosis, reaction, and evaluation of the security level of the information system. These tasks relate to and connect security components in all of the above dimensions. For example, in prevention, various security mechanisms, services and policies are utilized to counter a broad spectrum of attacks; thus a variety of key subtasks can be defined in intrusion prevention efforts, such as risk assessment, vulnerability management, etc. Given any information security application, we can navigate through the whole process of information security assurance and the above security taxonomy components. Further knowledge has already been identified for each specific security artifact, which can serve as the basis for developing practical solutions.
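As a concrete illustration of how these dimensions can be connected, the following minimal Python sketch (with hypothetical names and entries chosen only for illustration) models assets, vulnerabilities, threats, and the instruments that counter them, and looks up candidate instruments for a given threat.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    vulnerabilities: list = field(default_factory=list)  # weaknesses present in this asset

@dataclass
class Threat:
    name: str
    exploits: str   # the vulnerability this threat exploits
    violates: str   # the security measure it violates (confidentiality, integrity, ...)

# Instruments (mechanisms/services) indexed by the security measure they protect.
INSTRUMENTS = {
    "confidentiality": ["encryption", "access control"],
    "integrity": ["digital signatures", "auditing"],
    "availability": ["redundancy", "intrusion tolerance"],
}

def countermeasures(asset: Asset, threat: Threat) -> list:
    """Return candidate instruments if the threat matches a vulnerability of the asset."""
    if threat.exploits in asset.vulnerabilities:
        return INSTRUMENTS.get(threat.violates, [])
    return []  # no matching vulnerability, so this threat poses no problem for the asset

# Example: a hypothetical order-management server with a weak authentication scheme.
server = Asset("order management server", ["weak authentication"])
attack = Threat("credential theft", exploits="weak authentication", violates="confidentiality")
print(countermeasures(server, attack))   # ['encryption', 'access control']
```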
2.2 Supply Chain Security and Information Security

Modern service enterprises are, without exception, built on advanced cyberinfrastructure and technology. Besides system complexity in terms of size and nonlinear dynamics, uncertainty is inherent in any information life cycle of collection, transmission and processing. Therefore the security taxonomy needs to examine the security challenges raised at different phases: incompleteness and inaccuracy of data in information sources, due to the reliability of acquisition instruments and to privacy and security concerns (Li et al, 2001); information distortion during transmission, e.g. the bullwhip effect in a supply chain (Lee et al, 1997a); and various security aspects in terms of integrity, availability, confidentiality and privacy in information exchange and processing.

It is easy to see the similarity of information security to security management in supply systems. An initial taxonomy, or at least a knowledge map, of supply chain security management is analogous to the above taxonomy of information security, and counterparts of the above security artifacts and tasks can in many cases be readily identified for SecSCM. At the same time, we have to be careful about two common misunderstandings in using the notions of information security and supply chain security. The first is the isolation of information security management from the rest of supply chain security. This is easily seen in the organizational structure of many companies, where information security is taken care of only by the IT department while separate risk management departments exist in other divisions. The second is to equate supply chain security with information security. Security in a supply chain is much broader than security in the underlying information infrastructure. A solution that takes a narrow view and singles out either information security or supply chain security very likely misses the crucial coupling between them and is therefore inferior and prone to failure. An integrated and systematic solution, coined SecSCM in
our articles, has to be in place in order to deal efficiently with the complicated interactions among security challenges in modern supply systems.
3 Decomposition of a Supply System and Characterization of SCM

A supply chain system links the operations of plants, warehouses, carriers, stores and customers in matching demand with supply. The complexity of the supply chain environment is due to structures relating systems, problems, models, decision-making, and knowledge, which impact (a) how supply chain information systems are designed, (b) what type of information is captured, and (c) how this information is used. Decomposing such a system into layers whose components have distinct features and fulfill diverse functions helps us elaborate the problems and design appropriate solutions.
3.1 Decomposition of a Supply Chain

In this section we discuss the decomposition of the supply chain (SC) that links the above inherently present structures. We identify the features and requirements of supply chain decomposition as it relates to managing risk due to investment in inventories.

From the system management perspective, the SC is viewed as an organizational system (Kast and Rosenzweig, 1972). As an organizational system, the SC has managerial issues that can be classified into three levels, viz. strategic, tactical, and operational. At the strategic level, long-range SCM issues are planned in order to accommodate the variety of policies and objectives across the SC network. At the tactical level, SC mid-range activities are planned and synchronized. At the operational level, day-to-day tasks and operations in the SC are managed.

The SC can also be characterized as a complex system (Agostinho and Teixeira, 2003), defined as an organization of a large number of simple, mutually interacting parts (such as individual SC members) capable of sharing information with each other and with the environment, and of adapting its internal structure as a consequence of such interactions. From the perspective of interaction with its environment, the SC can be considered an open system. In order to survive, the SC, like a biological organism, maintains a steady state through continuous inflow from and outflow to its environment. Steady state implies that its system requirements are fixed for a specific period of time so as to make the system manageable.

In designing a complex system such as the SC, several issues related to system design arise, viz. (1) How should a complex system be designed? Should it be designed top-down or bottom-up? (2) How should the complex relationships between the various components of a system be coordinated and managed? Should these relationships be modular with process inflow interfaces? (3) How can the stability and controllability of a system be guaranteed?
For answers to the above questions, we turn to the axiomatic design theory proposed by Suh (1998) and discuss the role of axiomatic design principles in the development of a supply chain system problem taxonomy. In dealing with issues (1) and (2) above, the axiomatic system design principles (Suh, 1998) provide the framework for describing the complete structure of the system and the relationships among the decomposed functional requirements (FR), design parameters (DP), and process variables (PV). The basic postulate of this approach is that fundamental axioms govern the design process. Its two main axioms are as follows:

Axiom 1. The Independence Axiom. This axiom maintains the independence of the functional requirements (FRs).

Axiom 2. The Information Axiom. This axiom minimizes the information content of the design.

FRs are requirements that specify a function which a system or its components, such as processes and tasks, must be capable of performing in solving a problem. DPs are a set of parameters chosen to describe the physical domain of a system. Once DPs are chosen, PVs are identified, which are the resources, or means, for meeting FRs. The next step in axiomatic design is to map FRs to DPs and DPs to PVs, as a result of which models are generated that support an FR through a set of DPs by utilizing a set of PVs. During the mapping process, while transitioning from the functional domain to the logical or physical domain, one must make the correct design decisions using the Independence Axiom. When several designs that satisfy the Independence Axiom are available, the Information Axiom can be used to select the best design.

We focus on the first part of axiomatic system design, which is the identification of FRs and their mapping to DPs. We do so by (a) defining the supply chain system structure, and (b) mapping between the domains in order to create the SC system architecture. We describe these concepts below.

Supply Chain System Structure – A fixed system with time-variant FRs: The assumption that the SC system is a fixed system (having a steady state, but not equilibrium (von Bertalanffy, 1968)) lets us consider the SC as a system with specific requirements whose components do not change as a function of time. In the opposite view, the SC can be considered a large flexible system whose FRs are time variant. The design of a large flexible system, however, resembles that of a fixed system, with the difference that some FRs may acquire new constraints over time. Accordingly, in order to keep the SC problem domain manageable, we rely on this latter assumption.

Mapping between the domains – creating the SC system architecture reflecting the problem taxonomy: The role of axiomatic design in FR identification is proposed as a process of building two taxonomic hierarchies by mapping between domains through (1) problem classification, for capturing problems and tasks in the SC, and (2) problem-solving methods classification, for representing the various techniques that can be adopted for modeling and controlling problem-solving tasks. For DP representation, a system taxonomy, which offers a comprehensive methodology for capturing and representing physical domain characteristics, is proposed by Chandra et al. (2002) and Chandra and Tumanyan (2003).
The mapping between FRs and DPs is a process of building problem models since, according to axiomatic design principles (Suh, 1998), a model is a row of a design matrix that yields an FR when it is provided with the input of its corresponding DP. Problem models deal with SCM issues that SC members need to share in order to participate in
collaborative problem solving. After these issues are identified, their information models can be built and shared with other members in the SC network. Thus, as can be seen above, the axiomatic design principles facilitate embedding the various levels of problems encountered in supply chain management in the inherent SC system structure. This leads to the development of appropriate decision-making models suited to solving various supply chain problems.
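For readers unfamiliar with the notation, the FR–DP mapping described above is usually written as a design equation; the sketch below states the standard form from axiomatic design (the 2x2 matrices are a generic illustration, not taken from a specific supply chain case in this paper).

```latex
% Design equation: each row of the design matrix A yields one FR from the DPs.
\{\mathrm{FR}\} = [A]\,\{\mathrm{DP}\}, \qquad
\begin{Bmatrix} \mathrm{FR}_1 \\ \mathrm{FR}_2 \end{Bmatrix}
=
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
\begin{Bmatrix} \mathrm{DP}_1 \\ \mathrm{DP}_2 \end{Bmatrix}
```

A diagonal matrix (A12 = A21 = 0) gives an uncoupled design that satisfies the Independence Axiom directly; a triangular matrix gives a decoupled design that satisfies it only when the DPs are set in the right order; a full matrix indicates a coupled design.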
3.2 Taxonomy, Ontology, and System Integration in the Supply Chain

To see the linkages between problems and the decision-making models utilized in a complex enterprise such as a supply chain, it is imperative that these components be formally represented. Taxonomy and ontology provide the means to classify supply chain problems and to represent the formal knowledge used in decision-making.

Taxonomy is a systematic representation of a system's existence (McKelvey, 1982) and is accordingly built on principles of system theory. It is a mechanism for structuring knowledge about a certain system domain. The process of taxonomy development consists of information collection, systematic analysis, and classification of system attributes. Problem taxonomy provides the overall framework under which problem-oriented information system components can be designed and implemented. Supply chain problem taxonomy comprises: (a) classification of supply chain problems, (b) classification of problem-solving methodologies for supply chain management, and (c) hierarchical classification of the variables or factors necessary for dealing with such problems.

We explain this concept with the help of one of the fundamental problems of managing SC inventory risk in the supply chain management literature: the bullwhip effect. The most downstream supply chain unit observes an external demand, which is transmitted up the supply chain as inventory replenishment orders move from one unit to another. It has been observed that substantial information distortion may occur during this transmission. This information distortion, known as the bullwhip effect, appears as an increase in order variance as one moves up the supply chain, causing order quantities to be artificially inflated and thereby resulting in excessive inventories at various supply chain echelons.

The bullwhip effect is a prime example of the problems encountered in a complex system such as the supply chain. In these systems, problems are multi-faceted, with a primary problem and many related sub-problems. For instance, the bullwhip effect, one of the fundamental problems in the supply chain management literature (Lee et al., 1997a), has several secondary problems, such as order management, demand forecasting management, inventory management, and shipment consolidation. Further, Lee et al. (1997a) formally identify the main causes of the bullwhip effect, while Lee et al. (1997b) discuss their managerial implications. They state that if the following conditions hold: 1) demand is mean stationary and no signal processing is used, 2) lead time is zero, 3) fixed ordering cost is zero, and 4) no price variation occurs, then the increase in order variance does not occur. However, if some of these conditions are relaxed, the bullwhip effect may be observed.

We discuss some of the published techniques utilized in managing the bullwhip effect to highlight the classification of techniques. Chen et al. (2000a) use the simple moving average forecasting technique to obtain forecasts and investigate the bullwhip effect according to lead time and information sharing. Chen et al. (2000b) and Xu et al. (2001)
use the exponential smoothing technique in forecasting. Chen et al. (2000b) also show that, if the smoothing parameter in exponential smoothing is set so that exponential smoothing and moving average methods have equal forecasting accuracy, then exponential smoothing gives larger order variance. Graves (1999) demonstrates the presence of the bullwhip effect when external demand, modeled as a first-order integrated moving average process, is forecasted using exponential smoothing with an optimally set smoothing parameter. Metters (1997) measures the impact of the bullwhip effect by comparing results obtained for highly variable and seasonal demand against the case with low demand variability and weak seasonality. Cachon (1999) proposes methods to reduce the bullwhip effect using balanced ordering.

Problem model taxonomy is a projection of system taxonomy and thus inherits system structure and vocabulary. A problem domain is presented at two levels: the generic problem domain and the specific problem domain. A generic problem domain taxonomy is a class of problems that can occur in a supply chain, such as coordination of production activities; this highly generic problem comprises several tasks, such as scheduling of production or inventory replenishment, which are problems describing more specific issues. Usually, a specific problem domain taxonomy is represented by domain-dependent (or specialized) models. Splitting problem representation modeling into these two parts provides the means for developing generic and specific problem models.

The process of problem model taxonomy development starts with problem domain space identification. This involves analyzing and designing the functional requirements for the problem and proposing a structured representation of the relevant information. For example, for a scheduling-of-production problem, the model comprises its input and output variables, underlying subtasks or activities, tools and mechanisms for solving the problem, problem-oriented goals, the roles and agents involved in accomplishing tasks to achieve the identified goals, and external environmental issues. The purpose of problem taxonomy (PT) is the systematic representation of supply chain domain constituents, such as problems and their content. Different problem models share the same representation format and characteristics vocabulary, thus standardizing information representation in the supply chain domain.

Problem model taxonomy serves as a meta-model for knowledge model generation and ontology engineering. Ontology inherits concepts, subsumption relationships, and characteristics from the problem model, thus providing consistency in representing various problems. Ontology development components enrich the problem model with constructs, turning it from an abstract problem representation into a knowledge model by formulating rules and regulations related to the problem domain. These constructs are: (1) axioms, defining rules specific to the problem domain; (2) algorithms, providing step-by-step procedures for approaching the problem and solving it; and (3) commitments, linking characteristics to data and assigning variables with values. The first two components are modeled through a comprehensive analysis of the problem; system analysis and design techniques, such as process modeling and object-oriented design, are applied for this purpose.
The identification of the first two components is the most important part of ontology development. An ontology by itself is a vocabulary with rules on its use. Real-world applications require data to operate; ontological commitments provide these data. Object model generation is a software engineering practice: if parallels are drawn with software engineering, ontologies can be considered as classes, while object models
are their instances encapsulated into software entities. Object models are tangible software constructs in which problem-specific data are represented in a common programming language, encapsulated in a formal model, and accompanied by descriptions of what to do with the data and how to do it.
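The class/instance parallel drawn above can be made concrete with a minimal Python sketch; the inventory-replenishment problem, its attributes, and the rule below are hypothetical illustrations of an ontology (a class with an axiom) and an object model (an instance carrying ontological commitments), not constructs taken from the cited taxonomy work.

```python
class ReplenishmentProblem:
    """Ontology-like class: vocabulary (attributes) plus an axiom encoded as a method."""

    def __init__(self, item, demand_rate, lead_time_days, reorder_point, on_hand):
        self.item = item                      # vocabulary: what the problem talks about
        self.demand_rate = demand_rate        # units per day
        self.lead_time_days = lead_time_days
        self.reorder_point = reorder_point
        self.on_hand = on_hand

    def needs_order(self):
        """Axiom: an order is required when projected stock falls below the reorder point."""
        projected = self.on_hand - self.demand_rate * self.lead_time_days
        return projected < self.reorder_point

# Object model: an instance whose ontological commitments assign concrete values.
surgical_gloves = ReplenishmentProblem(
    item="surgical gloves", demand_rate=40, lead_time_days=3,
    reorder_point=100, on_hand=180,
)
print(surgical_gloves.needs_order())  # True: 180 - 40*3 = 60 < 100
```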
4 Development of Supply Chain Security Taxonomy

In this section, after discussing the importance of taxonomy, we present the initial SecSCM taxonomy of security components and their positioning in the DMfSec paradigm. Future work will then emphasize the different semantic layers and the usage of this taxonomy.
4.1 Usage of Taxonomy

A taxonomy is an ontology map that defines and describes the generic elements of a system or process. It is more than a classification or a dictionary because the dynamic behaviors of the elements can also be captured. Using the information security taxonomy as the example, we can navigate through the process of information assurance given any security issue.

• The taxonomy helps research and development by positioning problems and identifying gaps. For example, emphasizing the human aspect of security constraints and requirements, a human-centred paradigm has drawn more and more attention, as in human computer interaction for security (Eloff et al., 2003), semantic attacks (Schneier, 2000) and cognitive hacking (Thompson, 2003).

• The taxonomy can greatly aid the development of systematic solutions for security engineering. For example, threats can be labeled according to their consequence as information disclosure, deception, service disruption, and usurpation. Then, no matter how complex a new attack is, we can efficiently find the solution by its categories and the associated countermeasures in this ontology map, as sketched after this list.

• The taxonomy fosters standardization and security tools. For example, the ITU-T recommendation X.800 "Security Architecture for OSI" specifies common security services and security mechanisms, and the mapping between these two and security tools can be developed with a clear focus (SEMPER, 2006).

• The security taxonomy is always evolving with the development of the information security frontier. For example, concepts of network-centric warfare have been developed in response to cyberinfrastructure and cyber defense (DOD, 2001), and new research topics have rapidly been identified in assured information security and integrity (ONR, 2003).
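As a small illustration of the second point, the sketch below shows how labeling an attack by its consequence lets a tool retrieve associated countermeasures from the ontology map; only the four consequence labels come from the text, while the countermeasure lists are hypothetical placeholders.

```python
# Consequence categories follow the labels used above; the countermeasure
# lists are illustrative, not a standardized mapping.
COUNTERMEASURES = {
    "information disclosure": ["encryption", "access control", "data masking"],
    "deception": ["authentication", "digital signatures", "source verification"],
    "service disruption": ["redundancy", "capacity planning", "failover routing"],
    "usurpation": ["authorization", "least privilege", "activity auditing"],
}

def respond(attack_name, consequence):
    """Map a newly observed attack to candidate countermeasures via its consequence label."""
    options = COUNTERMEASURES.get(consequence)
    if options is None:
        return f"{attack_name}: unknown consequence '{consequence}', taxonomy update needed"
    return f"{attack_name}: consider {', '.join(options)}"

print(respond("forged shipping notice", "deception"))
print(respond("warehouse network outage", "service disruption"))
```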
A well-defined taxonomy is crucial to research in supply chain security management. First, the referential type of an emerging issue or application can be determined within this taxonomy, and possible solutions can then be sought efficiently; researchers also gain a better playground in which to examine and compare their efforts. Second, research topics critical to supply chain security management can be better positioned for systematic solutions by examining the components and their dependencies. For example,
detection of threat presence is a research topic that involves the constraint and requirement groups; more specifically, it covers the interaction among asset, vulnerability, threat, security mechanism, and security service. Lastly, we can identify important topics for future research by locating gaps and holes in existing research efforts on threat assessment, prevention, detection and mitigation within this taxonomy. Candidates include threat forecasting and risk assessment; secure information communication services, protocols and architectures; information symmetry and privacy; robust and interruption-tolerant supply chain systems; and standardization of security components. Essentially, we should pursue a similar "SCN assurance" framework in order to develop and deploy secure supply systems in a world full of threats and challenges.
4.2 An Initial Taxonomy

As shown in Figure 2, the preliminary taxonomy has four types (groups) of security components, i.e. constraint, requirement, instrument and management, each containing various components involved in supply chain modeling and management. These four groups cover the relevant terms and concepts used in the security management of supply systems.

Figure 2 The initial taxonomy of security artifacts in SecSCM

Constraint: causes of security problems in a supply chain.
  - Asset: physical and human constructs of supply chains.
  - Vulnerability: various weaknesses existing in assets.
  - Threat: natural disasters and man-made threats faced by supply chains.
Requirement: demands on security performance and service.
  - Measure: confidentiality, integrity, availability, quality of service, privacy, etc.
  - Security compliance: legal and law enforcement guidelines and compliance.
  - Business goal: goals and functions of the supply chain.
Instrument: security needs and means corresponding to specific constraints.
  - General technology: surveillance, RFID, ERP, EFT, etc. supporting supply chains.
  - Mechanism: generic security techniques such as cryptography and access control.
  - Service: generic services including authentication, authorization, delegation, auditing, etc.
  - Policy & model: definition of high-level required goals, trust, rules, and procedures.
Management: operations and decisions made in managing security.
  - Tool: facilities and computer hardware/software with built-in security functions.
  - Standard: common structures and protocols improving interoperability.
  - Business process: sequences of business functions and their configurations.
These security components depend on each other in managing an SCN. Such dependency exists both within each group and between the groups. For example, any security vulnerability is specific to an SCN construct, while a threat exploits a certain vulnerability; security is not a problem if any of these three is absent. Among the security instruments, a security policy lays the foundation for the specification of security services and mechanisms. Security services are supported by one
or several security mechanisms, while one security mechanism can support more than one service. A security mechanism is generally an enhanced version of a general technology. The components in the security management group are also related to each other in the form of guidelines or support. Dependency between two groups is often bidirectional. For example, analysis of constraints provides insight into key security breaches and thus supports requirement analysis and definition; at the same time, the results of requirement definition change the priority of constraints. The requirement definition limits the available management operational options, while the top-down decomposition of management operations generates another input to requirement specification.

Figure 3 The security taxonomy positioned in the supply chain life cycle (Requirement and Analysis phase: constraints (threats, security vulnerabilities, assets) and requirements (business goals, security compliance, security measures); Design and Management phase: instruments (security policies & models, security services, security mechanisms, technology) and management (business processes, security standards, security tools))
More importantly, we aim at the practical implementation of DMfSec. The key issue is how to appropriately organize the related modeling/analysis and design/management tasks in a well-defined paradigm. A system- and process-oriented perspective should be followed to fully understand and develop this methodology. In the process view, we naturally map the security taxonomy onto the two most significant stages (phases), i.e. requirement/analysis and design/implementation, as shown in Figure 3. In the system view, we take into account all constructs of the supply chain structure at the physical, data and knowledge layers. Security components are correlated with supply chain constructs at all three layers; at the same time, the dependency along the supply chain hierarchy also propagates back into the security components. Therefore, in the design process we follow both a bottom-up evaluation, based on the hierarchical decomposition of the system and security management structure, and a top-down decomposition along the virtual, logical and physical system levels. Interactions among the supply chain constructs, the security components, and the corresponding development phases can then be better identified and examined. A set of design matrices identifies the various construct and security elements and their dependencies. These matrices designate the necessary design and management decisions about business goals/processes, supply chain constructs/operations, and security
components. We can then define performance or cost functions over these matrices for different design/management alternatives. Mature system design technologies, such as classical system modeling and analysis methods or group technology (GT), can be applied to prioritize and optimize the design/management based on these matrices.
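To make the idea of evaluating alternatives over such matrices concrete, the following sketch scores two design alternatives by combining a construct-by-security-component dependency matrix with per-alternative deployment costs; all constructs, components, weights and costs here are hypothetical placeholders.

```python
# Rows: supply chain constructs; columns: security components.
# Entries give the (assumed) importance of a component to a construct, on a 0-3 scale.
constructs = ["warehouse", "transport link", "order database"]
components = ["access control", "redundancy", "encryption"]
dependency = [
    [3, 2, 1],   # warehouse
    [1, 3, 0],   # transport link
    [2, 1, 3],   # order database
]

# Two design/management alternatives: which components each deploys, and at what cost.
alternatives = {
    "A": {"deploys": {"access control", "encryption"}, "cost": 120.0},
    "B": {"deploys": {"access control", "redundancy"}, "cost": 150.0},
}

def coverage(deployed):
    """Sum of dependency weights satisfied by the deployed components."""
    return sum(
        dependency[i][j]
        for i in range(len(constructs))
        for j in range(len(components))
        if components[j] in deployed
    )

for name, alt in alternatives.items():
    score = coverage(alt["deploys"]) / alt["cost"]   # simple benefit-per-cost index
    print(f"Alternative {name}: coverage={coverage(alt['deploys'])}, score={score:.3f}")
```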
4.3 The Semantic Layers of the Taxonomy

Obviously this initial taxonomy just outlines the necessary and essential security components, mostly the "words" of a SecSCM "language." Using these words, the mature taxonomy should provide the knowledge structure that captures the overall system and effectively supports the design and management tasks. In ascending order of meaning and knowledge, the taxonomy should deliver useful domain and security knowledge at three levels.

(1) Vocabulary. As already described in the initial taxonomy, the terms and concepts with their meaning within the context of SecSCM. At this level we have a complete list of components A, B, C… (who is involved).

(2) Semantics. At this higher level, the direct interactions among basic terms are mapped, which can support syntax verification. This level does not suffice for domain and security logic modeling. At this level we have facts such as "A is needed for B" or "if B then C" (what to do).

(3) Knowledge. Domain and security logic is represented at this level: "A needs input in the format of D, and takes steps E and F to generate output G for B, because of requirement H and theory I…" At this level we have knowledge about how and why, which can be directly used by security management services, models, algorithms and tools.

Therefore this taxonomy is not only a static reference for understanding and describing various aspects of the security components, but also a supporting knowledge base for efficiently developing actionable solutions. This referential taxonomy has the potential to function as the starting point of a DMfSec methodology, leading directly to qualitative structures for modeling the targeted SCN constructs and the relevant security components. Because of the dependency commonly existing among the security components, as discussed in the next section, a fusion framework based on dependency models is a suitable solution for SecSCM modeling. The qualitative model can then evolve into computational models with the instantiation of parameters describing the system variables' states and their dependencies.
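A minimal sketch of the three levels, using invented component names purely for illustration: the vocabulary is a set of terms, the semantic level records direct relations between them, and the knowledge level attaches an executable rule that says how and why.

```python
# Level 1 - Vocabulary: the terms the SecSCM "language" may use (hypothetical entries).
vocabulary = {"cold-chain carrier", "temperature sensor", "vaccine shipment", "spoilage threat"}

# Level 2 - Semantics: direct relations among terms ("A is needed for B", "if B then C").
relations = [
    ("temperature sensor", "is needed for", "cold-chain carrier"),
    ("cold-chain carrier", "protects against", "spoilage threat"),
]

# Level 3 - Knowledge: an actionable rule stating how and why, usable by tools and services.
def spoilage_alert(reading_celsius, threshold=8.0):
    """Recommend mitigation because sustained readings above the threshold realize the spoilage threat."""
    if reading_celsius > threshold:
        return "reroute vaccine shipment and notify carrier"  # mitigation step
    return "no action"

print(spoilage_alert(10.5))  # 'reroute vaccine shipment and notify carrier'
```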
5 Supply Chain Security Modeling and Analysis

The complexity of modern SCNs requires scalable and efficient modeling and analysis solutions. We first review the relevant existing literature and then discuss generic qualitative and quantitative models as potential candidates.
5.1 State of the Art of Security Modeling and Analysis in SCM

Traditional supply chain management has extensively used mathematical models from operations research (Graves et al., 1998) and techniques such as game theory from AI (Shubik, 2002). Risk assessment and management studies in SCM consider risks in many forms, viz., business risk, financial risk, technological risk, and physical risk. The first three types of risk have been extensively studied in the supply chain risk management literature. Examples include the risk of stock-outs due to inventory policies, risk due to investments in facilities, and the risk of losing market share due to the adoption of a particular product manufacturing strategy (Agrawal and Nahmias, 1997; Anupindi and Akella, 1993; Kouvelis and Milner, 2002; Parlar and Perry, 1996).

Recently there have been a few research works on emerging topics that reach out to broader applications, mainly disruption modeling and management, risk analysis, disaster response, information sharing and privacy, trust services and support, transportation security inspection, and so on. Atallah et al. (2003) use secure multiparty computation from cryptography to design supply chain collaboration protocols that can reduce information asymmetry while preserving privacy. The reliability of a supply chain under contingency impact, e.g. unexpected disasters, is analyzed by Thomas (2002). Pai et al. (2003) provide a conceptual modeling and analysis framework for assessing business risk using Bayesian networks. In a paper on security and trust management in the supply chain, Kolluru and Meredith (2001) discuss the different security requirements and services for different levels of collaboration among companies. Blackhurst et al. (2005) summarize the common themes and issues around supply chain disruption based on interaction with industry practitioners. A set-cover location model is used by Hale and Moberg (2005) to identify the minimum number and possible locations of off-site storage facilities for supplies in disaster preparation. Lee and Whang (2005) compare the security inspection of goods such as explosives in transportation with total quality management in manufacturing. A technical report by the RAND Corporation focuses on the impact of terrorist attacks on global container supply chain performance and advocates the importance of fault tolerance, or resilience, in supply chains (Willis and Ortiz, 2004).

The motivation for our study is based on the following observations:

• These emerging research efforts are sparse and isolated. They do not even share common terminologies or standards that all players agree on, and there is no integrated solution or clear agenda that foresees a security-centric design paradigm.

• Researchers generally hold a narrow view focusing only on traditional risk and disruption management. These studies deal with single pieces without the context of the larger area, and do not account for all the interlaced security concerns.

• Supply chain security management is still at the very beginning. Most of these studies are conceptual and qualitative in addressing modern security challenges, and the techniques and solutions are often unfulfilling, with limited applicability to nonlinear and complex supply systems.
This lag in research becomes obvious and significant as the latest enterprise paradigms, including the system-of-systems supply alliance and reconfigurable supply chains, are increasingly implemented. These paradigms raise further challenges in terms of complexity, exemplified by their large size and flexible structure/operation. The emerging network structures of modern enterprises raise challenges especially in the context of security management. An SCN is a typical temporal-spatial system in which the synchronization of data over time and the integration of distributed, often heterogeneous, data can be an enormous challenge. A huge number of participating entities in an SCN interact and evolve concurrently, generating emerging states and chaos, as in an adaptive complex system (Kiel and Elliott, 1996). The uncertainty and nonlinear features of such systems may prevent us from deriving an efficient solution strategy, and such a complex system inevitably induces an NP-hard problem for information fusion tasks. Therefore, we have to rely on a modeling and analysis framework that can efficiently capture and handle this type of complex system, as in the exploratory study on modeling logistics systems in (Ye et al., 2000).

The above challenges and effects are amplified in service operations and processes deployed around individuals or enterprises, such as commercial transportation, logistics, health care delivery, electronic markets, and product maintenance and customer service. These service sectors may have unique security concerns and requirements. For example, privacy is a top concern for health care systems, where sensitive information about patients is stored and exchanged continuously; unfortunately, information leakage is very common in health care information systems according to various studies. Integrity, including non-repudiation and data/transaction verification/validation, may be the top concern for financial services and electronic commerce. These security requirements are not well addressed by the few studies on supply chain disruption, where the traditional time and cost measures are still the major concerns.
5.2 Requirements on Knowledge Integration and Relevant Background

Imagine that we have an accurate model of an enterprise, dynamically learned and updated to reflect the most current status. Many important applications, including enterprise situation surveillance, risk assessment and mitigation, computer attack prevention, and logistics monitoring and control, can be built upon this model. Evidence, in the form of reports from physical, software or human sensors, constantly enters this model and indicates emerging or potential problems through error messages, failing facilities, delays, machine starvation alarms, etc. Suppose we suspect that something is wrong in this system. Facing an uncertain and complex operational environment, before we try to generate a solution about how to invest capital and manpower to mitigate the problems, we first have to ask, given this evidence, what the big picture really is and where the critical positions of the enterprise system are located. We want to do this in an active and timely manner, handling incomplete and uncertain information systematically.

An efficient security modeling and analysis solution must represent and integrate the information pieces collected over the whole SCN scope under consideration. It should possess the following capabilities within the context of complex and dynamic SCN systems in order to handle nonlinear dynamics and complexity.

• The modeling representation should be able to support a family of models at different levels of detail, and to accommodate the integration of other useful analysis techniques (Bradbury, 2006). Thus, for different application purposes and design stages, these security models can vary in representation scale, covering subsystems, components, units, and entities of an SCN.

• Secondly, we have to efficiently support the migration from the referential taxonomy through qualitative models to quantitative analysis. The approach has to provide baselines in the initial qualitative models and support parameterization for quantitative analysis, so that it can aid the practical deployment and optimization of security management.

• Quality assurance in security management is a demanding task in complex systems, where simply characterizing the state of such a system can become overwhelming. To guide optimization we also need to associate quality indices with the knowledge integration results.
As shown in Figure 4, modeling and analysis can be considered as information integration based on three types of knowledge. The raw data are collected by inherent or embedded facilities in SCN components. The extracted information may characterize linear or nonlinear features through transformation of the raw data in the spatial, temporal or frequency domains. The knowledge of specific applications, such as risk management or disruption detection, guides auditing and employs the extracted information features in special algorithms and models. To support the proposed knowledge integration task, the essential knowledge bases of dependency modeling, computational probabilistic and stochastic analysis engines, and information/utility theory can be employed; these are reviewed below.

Figure 4 Modeling and analysis for DMfSec in terms of information (data and measure auditing of the supply chain/network, information extraction of linear or nonlinear features in spatial, temporal or frequency domains, and application knowledge such as risk management and disruption detection)
Generic dependency models are directed graphs that use nodes (vertices) to represent system variables and links (edges) to represent dependencies among these variables (Glymour and Cooper, 1999). Different types of dependencies can exist in the same model, e.g., causality, support, impact, traffic, information/material flows, and transportation and logistics links. Weights can be associated with a dependency link to represent probability, confidence, severity, etc.; these weights are especially useful when developing computational models later. Each node (variable) in the dependency model may be further decomposed into sub-nodes, and its dependency flows into sub-flows. The decomposition continues until the lowest level we can fathom is reached, at which point the system can be described concisely in terms of the most basic variables. Such qualitative modeling can take the form of a semantic map (such as the referential taxonomy), a dependency model (such as a dependency matrix), or a network graph.
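A minimal sketch of such a weighted dependency graph follows, with hypothetical SCN nodes and weights chosen only for illustration; each directed edge carries a label for the dependency type and a weight that a later computational model could reinterpret as a probability or severity.

```python
# Directed, weighted dependency graph kept as an adjacency dictionary:
# source node -> list of (target node, dependency type, weight).
dependencies = {
    "regional flood": [("distribution center", "threat", 0.15)],
    "distribution center": [("hospital pharmacy", "material flow", 0.9)],
    "supplier ERP outage": [("distribution center", "information flow", 0.3)],
}

def downstream(node, graph, visited=None):
    """Collect every node reachable from `node`, i.e. everything its failure could affect."""
    if visited is None:
        visited = set()
    for target, _dep_type, _weight in graph.get(node, []):
        if target not in visited:
            visited.add(target)
            downstream(target, graph, visited)
    return visited

print(downstream("regional flood", dependencies))
# {'distribution center', 'hospital pharmacy'}  (set order may vary)
```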
Dependency models have many advantages. Their hierarchy can hide details and offer multi-resolution models. The dependency is localized to handle complexity and to support flexible growth. For quantitative analysis, dependency models can be specialized and parameterized into many specific types of model, including the Bayesian inference model and the Markov process model for probabilistic calculation and propagation (Pearl, 1998; Ciardo et al., 1990), or the Generalized Semi-Markov Process models to support stochastic analysis using simulation (Shedler, 1993). Temporal links can be added to handle dynamic relations and to support both prediction and diagnosis. Finally, the dependency structure is obviously easy to represent and interpret. Dependency is dominant among supply chain constructs and security components.

In quantitative analysis, computing engines propagate evidence about the states of system variables and generate better composite beliefs about the states of any given variables. For example, a node can simply be yes/no for the presence of a threat, or normal/abnormal for the function of a supply chain construct. We are especially interested in hypothesis variables, e.g., those whose states describe the different impacts on a supply chain construct of the threats from its environment and the vulnerabilities within the system. Computational models come from the different disciplines of statistics, probability theory, machine learning, and pattern recognition. A partial list includes rule-based systems (Pantic et al., 2002), fuzzy rules (Hudlicka and McNeese, 2002), case/instance based learning (Scherer, 1993), neural networks (Petrushin, 1999), Bayesian learning (Qi and Picard, 2002), HMM models (Cohen et al., 2000), Bayesian networks (Heckerman et al., 1996), Dempster-Shafer theory, and fuzzy logic (Jameson, 1996), applied in various applications. These models can be grouped into two main categories. Models in the first category employ the available information (evidence) and attempt to estimate the value of target variables without prior context about the evidence; such models lack the ability to handle the uncertainty and complexity involved in the data. Computational models in the second category, such as probabilistic Bayesian networks and HMM models, represent the prior knowledge and context explicitly using graphical networks.

Probabilistic and stochastic models have several advantages for dynamic and complex systems. These models maintain a balance between global and local representations. Their built-in causal and uncertainty structure provides powerful inference capabilities, offering the most popular solutions in knowledge integration.

• First, they provide a hierarchical framework to systematically represent information from different modalities at different abstraction levels and to account for their uncertainties. With the dependencies coded in the graphical dependency model, such networks can handle situations where some data entries are missing.

• Second, decision making in a dynamically changing environment requires an analysis approach that can not only capture the beliefs about current events but also predict the evolution of future scenarios. Probabilistic models provide a coherent probabilistic framework for sensory information integration and inference over time.

• Third, these models can predict future evolution or perform diagnosis based on the given evidence to update the probabilistic beliefs of the hypothesis variables. Implementation of such models can thus be unified within the same working paradigm.
These computing engines can be employed in the form of statistical indices, stochastic Petri nets (Rozenberg, 1991), Bayesian networks, coupled hidden Markov models, simulation models (Fishman, 2001), etc. The use of multiple models is necessary in this knowledge integration framework due to the complexity of the system to be represented. For example, Bayesian network models are good at representing large systems, while Markov process models are good at characterizing systems with temporal evolution, such as aging or decay processes, as used for description in this article.

Bayesian networks (BNs) are acyclic probabilistic graphs representing the joint probabilities of random variables and their conditional independence relations (Pearl, 1998). The nodes characterize the hypothesis, hidden state and evidence/observation variables in a physical system, while the arcs linking these nodes represent their causal dependencies with conditional probabilities. The evidence nodes are components that one can easily observe; the hypothesis nodes are often hard to estimate directly and accurately. Dynamic Bayesian networks, an extension of BNs, can model systems that evolve over time (Li and Ji, 2005). Influence diagrams strengthen BNs with decision-making capabilities by incorporating special types of nodes, e.g., utility and decision nodes, into the network (Pearl, 1998). Stochastic process models, such as the Markov Reward Process and the Semi-Markov Reward Process, have been developed to analyze the performance of information systems (Ciardo et al., 1990; Muppala et al., 1991; Davis, 1985). Models such as the nonhomogeneous Markov process model have been generalized to capture the dynamics of multi-stage diseases and to incorporate disutility functions for intervention evaluation (Liu and Kapur, 2006).

The other knowledge base, information and utility theory, provides the metrics for extracting suitable features for nonlinear characterization and for defining the quality of knowledge integration in complex systems. Information theory was developed to describe special concepts of uncertainty and complexity in information processing and transformation (Mackay, 2003). In the current research, we find especially helpful the application of information entropy (Shannon, 1948), the definition of complexity (Cover and Leung, 1978), and utility theory (Mackay, 2003) in large-scale systems. Entropy and complexity are quantitative measures defined to characterize the level of certainty, consistency or simplicity. They have different definitions and calculations, but related functionality. For example, in risk management, a utility-theoretic approach can employ relative entropy to calculate quantitative utility measures that compare the benefits and costs of resource allocation or evidence collection alternatives; for quality assurance, the complexity definition is used to determine the clarity of the integration outcome and the change in system dynamics.
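For reference, the two information-theoretic quantities mentioned above take their standard forms, here written for a discrete variable; the formulas themselves are standard, while their use as utility measures is the proposal of this paper.

```latex
% Shannon entropy of a discrete variable X with distribution p(x):
H(X) = -\sum_{x} p(x)\,\log p(x)

% Relative entropy (Kullback-Leibler divergence) between a posterior p and a prior q,
% usable as a measure of how much an evidence-collection alternative changes beliefs:
D(p \,\|\, q) = \sum_{x} p(x)\,\log \frac{p(x)}{q(x)}
```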
5.3 Proposition of an Adaptive Knowledge Integration Solution

In our proposition we look at an appropriate framework that integrates various types of dependency models and probabilistic models, such as Bayesian networks (Li and Ji, 2005), coupled HMM models, and Monte-Carlo simulation (Fishman, 2001), aided by multi-agent architectures connected through probability mass functions and soft evidence updates (Kim et al., 2004), to handle the following challenges.
• Uncertainty. Inherent probabilistic representation and inference can deal with the uncertainty dominant in these systems. The uncertainty is captured with the probabilities assigned to variable states and their dependencies.
• Dynamic evolvement. Many such probabilistic models have built-in mechanisms to deal with dependency and temporal evolution, such as dynamic Bayesian networks and Markov process models.
• Complexity. Many of these models can be implemented in agent-based architectures because of the independence present in the SCN and thus among the model nodes. These agents can run inference in parallel and then synchronize their results wherever necessary.
The rest of this section describes the proposed framework in the context of risk management. The vital mission of risk management is to detect and quantify the impact of threat events and to evaluate countermeasures, in reactive or proactive mode, to reduce their negative effects. The generic tasks explained here include (1) correlating the available information in evidence nodes for better evaluation of the potential impacts represented by the state beliefs of assets, (2) quality assurance to decide (a) how good the diagnostic and prediction outcomes are, (b) where and what information to collect (e.g. which component of the network and which type of evidence), and (c) when to engage and stop knowledge integration, and (3) investment portfolio comparison to optimize resource allocation.

A learning system serves as an intermediate layer between real-world information sources and the proposed models that will be used to perform the various analysis tasks. This system is responsible for processing all types of input data into a uniform internal format, providing mechanisms for identifying the structure of the dependency models, and performing parameter estimation in probabilistic/stochastic model construction. The learning system is also responsible for tuning model structure and parameters to minimize its diagnosis/prediction errors over time, and for updating the resource planning and allocation subsystem for decision making on risk mitigation. The learning system can apply two methods: acquiring domain knowledge through interaction with experts and with the refined security taxonomy, and using various data-based learning algorithms, e.g., algorithms available for learning Bayesian networks (Heckerman et al., 1995; Cooper and Herskovits, 1992).

Based on the security taxonomy instantiated to specific domains and applications, dependency modeling with a hierarchical inter-connecting structure can represent the dependency among supply network constructs and security components. The links between them can describe the inter-connection and dependency among constructs and security components, such as logistics topology, information/material flow, threats, etc. They can also describe the deployment of auditing and evidence collection facilities, and their reliability. Models with different levels of detail can be used for different applications at different abstraction levels: the system, subsystems, or basic units.

For knowledge integration, the dependency models can be quantified and parameterized into computational models. The nodes represent different physical constructs in terms of their associated security component type, e.g. assets, vulnerabilities, and threats. States in asset nodes indicate various potential damages, and the probabilistic beliefs associated with these states represent the severity of damage to the assets (A). These asset nodes are hypothesis nodes in risk assessment.
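As a hedged illustration of the data-based learning step mentioned above, the following sketch estimates one conditional probability table for an asset-damage node from hypothetical audit records, using maximum-likelihood counting with Laplace smoothing. The record fields and data are assumptions introduced only for this example, not part of the proposed framework.

```python
# Illustrative maximum-likelihood CPT estimation with Laplace smoothing.
# Records are assumed audit tuples: (threat_active, vulnerability_present, asset_damaged).
from collections import defaultdict

records = [
    (1, 1, 1), (1, 1, 1), (1, 0, 0), (0, 1, 0),
    (0, 0, 0), (1, 1, 0), (0, 1, 1), (0, 0, 0),
]

counts = defaultdict(lambda: [0, 0])          # (t, v) -> [count(A=0), count(A=1)]
for t, v, a in records:
    counts[(t, v)][a] += 1

def cpt_entry(t, v, alpha=1.0):
    """P(A=1 | T=t, V=v) with Laplace smoothing parameter alpha."""
    n0, n1 = counts[(t, v)]
    return (n1 + alpha) / (n0 + n1 + 2 * alpha)

for t in (0, 1):
    for v in (0, 1):
        print(f"P(A=1 | T={t}, V={v}) = {cpt_entry(t, v):.2f}")
```

Structure learning, for example with the algorithms of Heckerman et al. (1995) or Cooper and Herskovits (1992), would complement such parameter estimation when the dependency structure itself is uncertain.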
Given certain observations, i.e. evidence $E = e$ at evidence nodes, knowledge integration is accomplished by two types of probabilistic inference: belief inference and most probable explanation (MPE). Belief inference assesses the posterior probability (belief) of the hypothesis ($V$) states, e.g. $p_{\text{posterior}}(V_i = v) = p(V_i = v \mid E = e)$ for $v \in \{0, 1\}$. The MPE task is to determine the assignment of states for all the nodes that best explains the observed evidence.

Resource allocation and evidence collection planning manage two important decisions for DMfSec. Resource allocation directs the actions to take for minimizing threat impacts, and evidence collection planning maximizes the clarity/confidence of the integration outcomes. Special resource nodes added to the model represent the constraint on available management options to fix vulnerabilities. The utility node in influence diagrams (or the reward in the Markov process model) accounts for the tradeoff between the benefit of damage reduction and the cost of each resource allocation alternative. During updating, the model dynamically predicts the utility for each action option over time. Thus, we are able to compare resource allocation alternatives and identify the sequence of actions that generates the highest utility at the end of the planning horizon.

The utility of an evidence node is defined as its potential to clarify the status of asset damage, measured by the mutual information of this evidence node to the hypothesis nodes:

$$I(N;S) = \sum_{k}\sum_{l} p(S=e_k, N=h_l)\,\log\frac{p(S=e_k, N=h_l)}{p(S=e_k)\,p(N=h_l)}$$
where sensor $S$ provides as evidence its $k$th state $e_k$, and hypothesis node $N$ takes its $l$th state $h_l$. The higher this mutual information score is, the better this evidence node. This calculation can be extended to the case of engaging multiple evidence nodes at the same time for multiple asset nodes. In practice the utility should be offset by the associated cost of reconfiguring the supply chain network in order to engage the corresponding sensory action. We then choose the evidence nodes of highest utility for information collection in order to achieve the promised quality of this adaptive knowledge integration, as detailed below.

Quality assurance is required to optimize the above knowledge integration. For example, if the beliefs of all the possible states at an asset are equal, we are not at all clear about the true state of this asset. We rely on a quality assurance model to accurately characterize and improve the certainty or confidence about the integration results at each time instance. Only then are we comfortable proceeding to resource allocation planning. We set thresholds on a set of quality measures summarized across a large number of assets. For example, a global complexity score can be examined as a confidence score, defined using information entropy over the posterior state beliefs of all asset nodes; this confidence score grows as the uncertainty in these nodes decreases. The evidence collection planning is repeated adaptively until we obtain sufficient confidence about the integration outcome. Moreover, we want to capture the change point when the system undergoes an external disturbance, such as the occurrence of certain incidents. Therefore, we calculate the relative entropy for the $M$ hypothesis nodes between two time instances:

$$RE(t \mid t-1) = D(t \mid t-1) = \frac{1}{MT} \sum_{j} \sum_{l_j} p^{t}(N_j = h_{l_j}) \log\frac{p^{t}(N_j = h_{l_j})}{p^{t-1}(N_j = h_{l_j})}$$
where $p^{t}$ and $p^{t-1}$ are the state beliefs at the current and the last time instance, respectively, and $T$ is the time interval. This relative entropy is nonnegative and equals zero only when the state beliefs do not change between the two time instances. More discussion of this framework and preliminary experimental results using Bayesian networks are reported in other papers (Li and Chandra, 2007). The main challenge here is the computational cost as the scope of the SCN expands. This challenge can be alleviated by various approximate inference algorithms, such as sampling or Monte Carlo methods, parametric approximation methods, and bounded cutset conditioning methods (Chavez and Cooper, 1990; Karp et al., 1989). As discussed before, the hybrid probabilistic modeling uses various analysis models, such as Bayesian networks, Markov process models, or simulation models. Simulation is a helpful technique complementary to the other models in the proposed framework, as used for risk root cause analysis in a hospital (Yi et al., 2006). The different models can be synchronized through subscription and publication of the probability distributions at overlapping system variables with the help of multi-agent structures.
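As a hedged numerical illustration of the two measures just defined, the sketch below computes the mutual-information utility of a candidate evidence node and the relative entropy between asset-state beliefs at two time instances. The distributions are invented for illustration and are not taken from any experiment in this article.

```python
# Illustrative calculations of the evidence-utility (mutual information) and
# change-detection (relative entropy) measures. All probabilities are assumed.
import math

# Joint distribution p(S = e_k, N = h_l) for one sensor S and one hypothesis node N.
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.15, (1, 1): 0.35}

def mutual_information(joint):
    p_s = {k: sum(v for (s, _), v in joint.items() if s == k) for k in (0, 1)}
    p_n = {l: sum(v for (_, n), v in joint.items() if n == l) for l in (0, 1)}
    return sum(p * math.log(p / (p_s[s] * p_n[n]))
               for (s, n), p in joint.items() if p > 0)

# Posterior beliefs of M hypothesis nodes at the last and current time instances.
beliefs_prev = {"A1": [0.7, 0.3], "A2": [0.9, 0.1]}
beliefs_curr = {"A1": [0.4, 0.6], "A2": [0.85, 0.15]}

def relative_entropy(curr, prev, time_interval=1.0):
    """RE(t|t-1) = (1/MT) * sum_j sum_l p_t log(p_t / p_{t-1})."""
    m = len(curr)
    total = sum(pc * math.log(pc / pp)
                for node in curr
                for pc, pp in zip(curr[node], prev[node]) if pc > 0)
    return total / (m * time_interval)

print("I(N;S)   =", round(mutual_information(joint), 4))
print("RE(t|t-1) =", round(relative_entropy(beliefs_curr, beliefs_prev), 4))
```

A higher I(N;S) would favor engaging that sensor, and a relative entropy exceeding a preset threshold would flag a possible change point, consistent with the quality assurance logic described above.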
6 Migrating to Design/Management for Security

We first discuss the requirements of integrating security management into a practical SecSCM framework. We then use a case study in the healthcare sector to illustrate the basic modeling and analysis capabilities relevant to the proposed modeling and analysis framework.
6.1 Integrating Security Management into Supply Chain Management

Naturally two strategies exist to integrate security management capabilities into SCM systems: a fully integrated method and an embedded one. For new facilities, such as an ERP system built from scratch, SecM should be reflected in every design stage and be inherent in each component of the supply systems. For example, security-relevant auditing and collection functions should be specified at design time and implemented in each management module with a SecM subcomponent. Auditing standards, based on security compliance regulations and implementation protocols, should be followed for complete security management functionality and better adaptation control. However, in most cases SecM capability has to be incorporated later into legacy systems with embedded adapters. The raw data required by SecM will be relayed to and processed at the adapter or a dedicated SecM module. Nonetheless, a mixture of these two strategies will be the most common in real-world cases.

The embedded strategy is worth further examination here. As shown in Figure 5, the SecM adapter essentially provides data auditing capability and, if necessary, basic information extraction capability as well. It should work with preset security protocols and standards. For example, IT practitioners have available generic facilities such as the inherent Basic Security Module in the Solaris operating system, and many dedicated third-party tools such as Mu Security's security analyzer, which can be hooked up to IP devices to probe their network performance (Sturdevant, 2007). SecSCM needs similar functionality to efficiently record and aggregate data about the SCN system. The SecM adapter should constantly monitor the SCN location it is in charge of and report
findings in terms of security scores defined on common security measures, together with information specified for other relevant security taxonomy items.

Figure 5 Integrating security management into an enterprise knowledge network [diagram: a Security Management (SecM) module linked through SecM adapters to enterprise components such as MRP, CRM, facility management, and managerial and financial accounting via the enterprise knowledge network]
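Purely as a hedged sketch of what such an embedded SecM adapter might expose (the class, method names, and score scheme below are assumptions made for illustration, not an interface defined by this article), one could wrap a legacy module behind a thin auditing layer:

```python
# Hypothetical SecM adapter sketch: audits raw events from a legacy module and
# reports them as security scores keyed by taxonomy items. Illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SecMAdapter:
    location: str                               # SCN location this adapter monitors
    scores: Dict[str, float] = field(default_factory=dict)
    log: List[dict] = field(default_factory=list)

    def audit(self, event: dict) -> None:
        """Record a raw event and update a simple per-taxonomy-item score."""
        self.log.append(event)
        item = event.get("taxonomy_item", "unclassified")
        severity = float(event.get("severity", 0.0))   # assumed 0..1 scale
        # Keep the worst severity seen so far for each taxonomy item.
        self.scores[item] = max(self.scores.get(item, 0.0), severity)

    def report(self) -> dict:
        """Summary passed upward to the central SecM module."""
        return {"location": self.location, "scores": dict(self.scores),
                "events_audited": len(self.log)}


adapter = SecMAdapter(location="warehouse-3")
adapter.audit({"taxonomy_item": "information/availability", "severity": 0.4})
adapter.audit({"taxonomy_item": "physical/theft", "severity": 0.7})
print(adapter.report())
```

In such a design the central SecM module would aggregate these reports and feed them into the evidence nodes of the probabilistic models described above.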
Essentially we need to incorporate security management at different levels of SCM. For example, we can largely decompose a manufacturing enterprise management system into four levels, routine application systems, workflow control, process planning and control, and process engineering, at the physical, software, or management layers (Scheer, 1999). SecM should also handle security concerns at these four levels. The application level takes care of routine security data auditing, collection, and communication. The workflow control level should decide on and activate automated security monitoring and response actions according to preset control logic. The capacity planning level needs to dynamically assess security response resources in order to schedule and optimize SecM performance. The business process engineering level is then responsible for continuously analyzing and optimizing the above security management processes. Many interesting topics will be inspired by investigating the integration at these different levels.

Lastly, SecM has to consider both business and security goals. On one hand, security management in supply systems, as in many other systems, is still a service task, although we have reached the point where the cost of security problems is no longer bearable. Security management decisions should accommodate the other facilities and protocols that directly support the mission-critical business. On the other hand, security management always has a tendency to advocate "separate" or "secretive" solutions. While simplifying the control, information, and material flows relevant to SecM, such a solution can isolate the hardware, software, and human facilities that support security assurance from those that support business processes. These two considerations can cause conflicts if the solution is not carefully analyzed and designed.

An open and multi-agent implementation architecture is necessary to balance the "humbleness" of security management toward business goals against its "seclusiveness". As in interoperable distributed computing standards such as CORBA or DCOM, service-oriented architectures should be in place to support service discovery and migration. One study relevant to this requirement is the development of the CORBASec security framework in information security (Chandramouli, 1999). Consequently, we should apply multi-agent structures that decouple the original nodes into relatively independent subsets for modeling and analysis. It is obvious that many independencies exist in a supply chain network, e.g. relatively independent companies or
plants. Thus parallel computation becomes possible with the help of techniques such as soft evidential update in probabilistic modeling and inference (Kim et al., 2004).
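As a hedged illustration of this decoupling idea (the decomposition and the per-subnet computation below are invented for illustration; the actual framework would rely on probabilistic inference agents exchanging soft evidence on shared variables, per Kim et al., 2004), relatively independent sub-networks can be processed in parallel and their boundary beliefs synchronized afterwards:

```python
# Illustrative parallel processing of relatively independent sub-networks of an SCN.
# The per-subnet "inference" is a placeholder; a real agent would run BN inference
# and exchange soft evidence on shared boundary variables afterwards.
from concurrent.futures import ProcessPoolExecutor

subnetworks = {
    "supplier_plant_A": {"nodes": 120, "evidence": 8},
    "distribution_hub": {"nodes": 60, "evidence": 3},
    "retailer_cluster": {"nodes": 200, "evidence": 15},
}

def infer_subnet(item):
    name, spec = item
    # Placeholder for per-agent probabilistic inference on this sub-network.
    belief = min(1.0, spec["evidence"] / spec["nodes"] * 5)
    return name, {"risk_belief": round(belief, 3)}

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(infer_subnet, subnetworks.items()))
    # Synchronization step: beliefs on shared boundary variables would be exchanged here.
    print(results)
```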
6.2 A Case Study in Healthcare Delivery

Taking risk assessment and management as an example for SecSCM, we briefly examine the use of the above modeling and analysis framework in healthcare delivery systems. The development of secure healthcare supply chains illustrates both the challenges we plan to address and the significance of the proposed methodology. The healthcare sector has seen alliances among health systems. For example, the M-Care healthcare management system in Michigan, USA, has healthcare providers at offices and hospitals affiliated with the University of Michigan, Oakwood, William Beaumont, St. John, and other hospitals. Such alliances help these medical institutions to take care of special patients and to relocate excess patients.

Figure 6 Decomposition of risks in healthcare systems
[diagram: healthcare alliance risk decomposes into operational risk (physical risk, staffing risk, …) and information risk (information availability, information leaking, …), with dependency links across branches]
A “healthcare supply chain” should assure that patients receive prompt and accurate medical care with respect to service availability, and that patients' privacy is preserved, i.e. their personal and medical information is kept confidential from unauthorized personnel. Security emergencies, such as sudden epidemics, a decrease in available staff or facilities caused by unforeseen disasters (e.g., earthquake, hurricane, or traffic congestion), or deliberate attacks (e.g., by terrorists or hackers), could result in a breakdown of service availability. Transmission of patients' personal and medical information through the information infrastructure among the different affiliates is becoming more frequent.

As shown in Figure 6, the overall security risk in a healthcare system can be decomposed into a hierarchy according to risk types. The availability concern above falls mainly under operational risk. This operational risk can be further broken down into physical facility risk, staffing risk, etc. Information risk can be further categorized into risks to information availability, information leaking (privacy or confidentiality), etc. These risks can be decomposed recursively and finally mapped to healthcare constructs. The same constructs may appear in the models for different risks and thus affect them concurrently. There is therefore dependency among the branches in this hierarchy, requiring integrated security management representation and analysis.

In the adaptive knowledge integration solution for risk assessment, the models will concentrate on operating room (OR) capacity and information security. A hidden
Markov process model predicts the state of normal power supply. The prediction enters a Bayesian network model that estimates the states of variables describing staff, patients, ORs, and the computer network, with the help of other available evidence. A simulation model can then run to collect statistics on the current OR capacity. Another probabilistic model, in the form of a network graph, accounts for computer topology and user configuration. It can calculate the risk of information leaking and availability for different collaboration scenarios among staff members utilizing the computer network. The network availability prediction can in turn be fed into the OR simulation model if relevant. Lastly, a decision model calculates the utility of different allocation actions, based on this performance information, for given resource constraints.

Various management tasks can be performed here. The nodes with the highest severity beliefs reveal possible bottlenecks or critical paths in the system. Resource allocation plans based on these patterns can be generated. Inference runs again in these models, and the utility of each resource allocation plan is estimated in terms of risk reduction and its associated cost. The best plan will be picked for management operations.

Imagine that we have an accurate model of a healthcare SCN involving more than one institution. This model is dynamically learned and updated to reflect the most current status. Many important applications, including healthcare delivery surveillance, risk assessment and mitigation, and logistics monitoring and control, can be built upon this model. Evidence, in the form of reports from physical, informational, or human sensors, constantly enters this model and indicates emerging or potential problems through error messages, failing facilities, delays, facility starvation alarms, etc. Suppose we suspect that something is wrong in this system. Facing an uncertain and complex operational environment, before we try to generate a solution for how to invest capital and manpower to mitigate the problems, we first have to ask, given this evidence, what the big picture really is and where the critical positions of the enterprise system are located. The proposed framework lets us do this in an active and timely manner, handling incomplete and uncertain information systematically.
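As a hedged sketch of how such a hybrid chain of models could be composed (the transition probabilities, capacity figures, and the simple expected-capacity calculation below are all invented for illustration), a two-state Markov model of power supply can feed a downstream estimate of OR capacity:

```python
# Illustrative hybrid-model chain for the healthcare case: a two-state Markov
# model of power supply feeds an expected operating-room (OR) capacity estimate.
# All numbers are assumptions for illustration only.
import random

P_TRANSITION = {"normal": {"normal": 0.95, "outage": 0.05},
                "outage": {"normal": 0.60, "outage": 0.40}}

OR_CAPACITY = {"normal": 12, "outage": 4}      # operable ORs under each power state

def predict_power_belief(belief, steps=1):
    """Propagate the belief over power states forward in time."""
    for _ in range(steps):
        belief = {s: sum(belief[r] * P_TRANSITION[r][s] for r in belief)
                  for s in P_TRANSITION}
    return belief

def simulate_capacity(belief, runs=10_000, rng=random.Random(0)):
    """Monte-Carlo estimate of expected OR capacity given the power-state belief."""
    total = 0
    for _ in range(runs):
        state = "normal" if rng.random() < belief["normal"] else "outage"
        total += OR_CAPACITY[state]
    return total / runs

belief_now = {"normal": 0.9, "outage": 0.1}
belief_next = predict_power_belief(belief_now, steps=3)
print("Power belief after 3 steps:", {k: round(v, 3) for k, v in belief_next.items()})
print("Expected OR capacity:", simulate_capacity(belief_next))
```

In the full framework, the downstream estimate would instead come from a discrete-event simulation of the OR workflow and from the information-risk network model, synchronized through shared variables as discussed above.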
7 Conclusion

The design and modeling of the supply chain is a complex endeavor, considering the embedded structures relating systems, problems, models, decision making, and knowledge present in its form. A supply chain design that integrates these structures must also incorporate security as one of its key features. In this paper, we have highlighted the significance of designing for security in the supply chain. A case has been made for capturing security needs from a holistic rather than a piecemeal perspective of the supply chain. We have done so by developing taxonomies that capture essential features of various problems and the modeling needs of problem-solving techniques applied under different decision-making environments. Supply chain structures have become the norm rather than the exception for organizing business operations in the global economy, where they are implemented over the Internet through various manifestations of eCommerce portals. Security of the supply chain thus becomes an imperative for staying competitive.
References

Agostinho, M.C.E. and Teixeira, G. (2003) 'Co-creating a self-organizing management system: A Brazilian experience', Complexity, Ethics and Creativity Conference, London School of Economics.
Agrawal, N. and Nahmias, S. (1997) 'Rationalization of the supplier base in the presence of yield uncertainty', Production and Operations Management, Vol. 6, pp. 291-308.
Anupindi, R. and Akella, R. (1993) 'Diversification under supply uncertainty', Management Science, Vol. 39, pp. 944-963.
Atallah, M.J., Elmongui, H.G., Deshpande, V., and Schwarz, L.B. (2003) 'Secure supply-chain protocols', Proceedings of the IEEE International Conference on E-Commerce.
Bishop, M. (2005) Introduction to Computer Security, Addison-Wesley.
Blackhurst, J., Craighead, C.W., Elkins, D., and Handfield, R.B. (2005) 'An empirically derived agenda of critical research issues for managing supply-chain disruptions', International Journal of Production Research, Vol. 43, No. 19, pp. 4067-4081.
Bradbury, D. (2006) 'Modeling network security', Computers & Security, Vol. 25, No. 3, pp. 163-164.
Cachon, G.P. (1999) 'Managing supply chain demand variability with scheduled ordering policies', Management Science, Vol. 45, pp. 843-856.
Chandra, C., Kumar, S. and Smirnov, A.V. (2002) 'E-Management of supply chain: General models taxonomy', Human Systems Management, Vol. 21, pp. 95-113.
Chandra, C. and Tumanyan, A. (2003) 'Supply chain system taxonomy: development and application', Proceedings of the Twelfth Annual Industrial Engineering Research Conference.
Chandramouli, R. (1999) 'Implementation of multiple access control policies within a CORBASEC framework', Proceedings of the 22nd National Information Systems Security Conference, Arlington, Virginia, USA.
Chavez, R. and Cooper, G.F. (1990) 'A randomized approximation algorithm for probabilistic inference on Bayesian belief networks', Networks, Vol. 20, pp. 661-685.
Chen, F., Drezner, Z., Ryan, J.K., and Simchi-Levi, D. (2000a) 'Quantifying the bullwhip effect in a simple supply chain: the impact of forecasting, lead times and information', Management Science, Vol. 46, pp. 436-443.
Chen, F., Ryan, J.K., and Simchi-Levi, D. (2000b) 'Impact of exponential smoothing forecasts on the bullwhip effect', Naval Research Logistics, Vol. 47, pp. 269-286.
Ciardo, G., Marie, R.A., Sericola, B., and Trivedi, K.S. (1990) 'Performability analysis using semi-Markov reward processes', IEEE Transactions on Computers, Vol. 39, No. 10, pp. 1251-1264.
Cohen, I., Garg, A., and Huang, T.S. (2000) 'Emotion recognition using multilevel-HMM', NIPS Workshop on Affective Computing, Colorado.
Cooper, G.F. and Herskovits, E. (1992) 'A Bayesian method for the induction of probabilistic networks from data', Machine Learning, Vol. 9, pp. 309-347.
Cover, T.M. and Leung, S.K. (1978) 'Some equivalences between Shannon entropy and Kolmogorov complexity', IEEE Transactions on Information Theory, Vol. 24, pp. 331-338.
Davis, R. (1985) 'An assessment of models of a health system', The Journal of the Operational Research Society, Vol. 36, No. 8, pp. 679-687.
Denning, D. (2004) Information Warfare and Security, Addison-Wesley.
Department of Defense (DOD) (2001) Network Centric Warfare, Report to Congress.
Eloff, J.H.P., Johnston, J., and Labuschagne, L. (2003) 'Security and human computer interfaces', Computers & Security, Vol. 22, No. 8.
Fishman, G.S. (2001) Discrete-Event Simulation: Modeling, Programming, and Analysis, New York: Springer-Verlag.
Glymour, C. and Cooper, G. (Eds.) (1999) Computation, Causation & Discovery, Menlo Park, CA: AAAI Press/The MIT Press.
Graves, S.C., Kletter, D.B. and Hetzel, W.B. (1998) 'A dynamic model for requirements planning with application to supply chain optimization', Operations Research, Vol. 46, No. 3, pp. 35-49.
Graves, S.C. (1999) 'A single-item inventory model for a nonstationary demand process', Manufacturing & Service Operations Management, Vol. 1, pp. 50-61.
Hale, T. and Moberg, C.R. (2005) 'Improving supply chain disaster preparedness: A decision process for secure site location', International Journal of Physical Distribution & Logistics Management, Vol. 35, No. 3, pp. 195-207.
Heckerman, D., Breese, J.S., and Rommelse, K. (1996) 'Decision-theoretic troubleshooting', Communications of the ACM, Vol. 38, No. 3, pp. 49-57.
Heckerman, D., Geiger, D., and Chickering, D. (1995) 'Learning Bayesian networks: The combination of knowledge and statistical data', Machine Learning, Vol. 20, No. 3, pp. 197-243.
Hendricks, K.B. and Singhal, V.R. (2003) 'The effect of supply chain glitches on shareholder wealth', Journal of Operations Management, Vol. 21, pp. 501-522.
Hudlicka, E. and McNeese, M.D. (2002) 'Assessment of user affective and belief states for interface adaptation: Application to an Air Force pilot task', User Modeling and User-Adapted Interaction, Vol. 12, pp. 1-47.
Jameson, A. (1996) 'Numerical uncertainty management in user and student modeling: An overview of systems and issues', User Modeling and User-Adapted Interaction, Vol. 5, No. 3-4, pp. 193-251.
Kast, F.E. and Rosenzweig, J.E. (1972) The Modern View: A System Approach. System Behavior, London: Harper & Row.
Karp, R., Luby, M. and Madras, N. (1989) 'Monte-Carlo approximation algorithms for enumeration problems', Journal of Algorithms, Vol. 10, pp. 429-448.
Kiel, L.D. and Elliott, E. (Eds.) (1996) Chaos Theory in the Social Sciences: Foundations and Applications, Ann Arbor, MI: University of Michigan Press.
Kim, Y.G., Valtorta, M., and Vomlel, J. (2004) 'A prototype system for soft evidential update', Applied Intelligence, Vol. 21.
Kolluru, R. and Meredith, P.H. (2001) 'Security and trust management in supply chain', Information Management and Computer Security, Vol. 9, No. 5, pp. 233-236.
Kouvelis, P. and Milner, J.M. (2002) 'Supply chain capacity and outsourcing decisions: The dynamic interplay of demand and supply uncertainty', IIE Transactions, Vol. 34, pp. 717-728.
Lee, H.L., Padmanabhan, V., and Whang, S. (1997a) 'The bullwhip effect in supply chains', Sloan Management Review, Vol. 38, No. 3, pp. 93-102.
Lee, H.L., Padmanabhan, V., and Whang, S. (1997b) 'Information distortion in a supply chain: The bullwhip effect', Management Science, Vol. 43, pp. 546-558.
Lee, H.L. and Whang, S. (2005) 'Higher supply chain security with lower cost: Lessons from total quality management', International Journal of Production Economics, Vol. 96, pp. 289-300.
Li, X. and Chandra, C. (2007) 'Efficient knowledge integration to support complex supply network management', International Journal of Manufacturing Technology and Management, Vol. 10, No. 1, pp. 1-18.
Li, X. and Ji, Q. (2005) 'Active affective state detection and user assistance with dynamic Bayesian networks', IEEE Transactions on Systems, Man, and Cybernetics-Part A, Vol. 35, No. 1.
Li, J., Shaw, M., and Sikora, R. (2001) 'The effects of information sharing strategies on supply chain performance', IEEE Transactions on Engineering Management, Oct.
Liu, Y. and Kapur, K.C. (2006) 'Reliability measures for dynamic multi-state systems and their applications for system design and evaluation', IIE Transactions, Vol. 38, No. 6, pp. 511-520.
Mackay, D. (2003) Information Theory, Inference, and Learning Algorithms, Cambridge, UK: Cambridge University Press.
McKelvey, B. (1982) Organizational Systematics: Taxonomy, Evolution, Classification, Berkeley, CA: University of California Press.
Metters, R. (1997) 'Quantifying the bullwhip effect in supply chains', Journal of Operations Management, Vol. 15, pp. 89-100.
Muppala, J.K., Woolet, S.P., and Trivedi, K.S. (1991) 'Real-time systems performance in the presence of failures', Computer, Vol. 24, No. 5, pp. 37-47.
Office of Naval Research (ONR) (2003) Taxonomy of Technology Limitations to Support the Five Enabling Functions Required for Navy Network Centric Operations, Report by the Network Centric Operations Working Group of the Office of Naval Research.
Pai, R.R., Kallepalli, V.R., Caudill, R.J., and Zhou, M. (2003) 'Methods toward supply chain risk analysis', IEEE International Conference on Systems, Man and Cybernetics, Vol. 5, pp. 4560-4565.
Pantic, M., Patras, I., and Rothkrantz, L.J.M. (2002) 'Facial mimics recognition from face profile image sequences', Technical Report DKS-02-01, Data and Knowledge Systems Group, Delft University of Technology, Netherlands.
Parlar, M. and Perry, D. (1996) 'Inventory models of future supply uncertainty with single and multiple suppliers', Naval Research Logistics, Vol. 43, pp. 191-210.
Pearl, J. (1998) Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, CA: Morgan Kaufmann Publishers.
Petrushin, V.A. (1999) 'Emotion in speech: Recognition and application to call centers', Artificial Neural Networks in Engineering 99, St. Louis.
Public Company Accounting Oversight Board (PCAOB) (2002) Sarbanes-Oxley Act of 2002, [Online] Available: http://www.pcaobus.org/rules/Sarbanes_Oxley_Act_of_2002.pdf (Retrieved March 12, 2007).
Qi, Y. and Picard, R.W. (2002) 'Context-sensitive Bayesian classifiers and application to mouse pressure pattern classification', Proceedings of the International Conference on Pattern Recognition, Quebec City, Canada.
Rozenberg, G. (Ed.) (1991) Advances in Petri Nets, New York: Springer-Verlag.
Scheer, A.W. (1999) ARIS – Business Processing Frameworks, New York: Springer-Verlag.
Scherer, K.R. (1993) 'Studying the emotion-antecedent appraisal process: An expert system approach', Cognition and Emotion, Vol. 7, pp. 325-355.
Schneier, B. (2000) Semantic Attacks: The Third Wave of Network Attacks, [Online] Available: http://www.counterpane.com/crypto-gram-0010.html (Retrieved March 12, 2007).
Secure Electronic Marketplace for Europe (SEMPER) (2007) Standards in Security and Cryptography, [Online] Available: http://www.semper.org/sirene/outsideworld/standard.html (Retrieved March 12, 2007).
Shubik, M. (2002) 'Game theory and operations research: Some musings 50 years later', Operations Research, Vol. 50, pp. 192-196.
Shannon, C.E. (1948) 'A mathematical theory of communication', Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656.
Shedler, G.S. (1993) Regenerative Stochastic Simulation, San Diego, CA: Academic Press.
Sturdevant, C. (2007) 'Mu-4000 tests IP equipment', eWeek Enterprise News & Reviews, Feb. 12, 2007.
Suh, N.P. (1998) 'Axiomatic design theory for systems', Research in Engineering Design, Vol. 10, pp. 189-209.
Thomas, M.U. (2002) 'Supply chain reliability for contingency operations', Proceedings of the Annual Reliability and Maintainability Symposium.
Thompson, P. (2003) 'Semantic hacking and intelligence and security informatics', NSF/NIJ Symposium on Intelligence and Security Informatics, June 1-3, Tucson, Arizona.
U.S. Congress (1996) Health Insurance Portability and Accountability Act (HIPAA) of 1996, [Online] Available: http://www.cms.hhs.gov/HIPAAGenInfo/Downloads/HIPAALaw.pdf (Retrieved March 12, 2007).
Von Bertalanffy, L. (1968) General System Theory, New York: George Braziller.
Willis, H.H. and Ortiz, D.S. (2004) Evaluating the Security of the Global Containerized Supply Chain, Technical Report, RAND Corporation.
Xu, K., Dong, Y., and Evers, P.T. (2001) 'Towards better coordination of the supply chain', Transportation Research Part E: Logistics and Transportation Review, Vol. 37, pp. 35-54.
Ye, N., Choi, T., Dooley, K., and Cochran, J. (2000) 'Modeling and simulation of SN enterprise', INFORMS 2000.
Yi, P., George, S., Paul, J., and Lin, L. (2006) 'Hospital capacity planning for emergency management in disaster mitigation', Socio-Economic Planning Sciences, in press.