Integration Frameworks for Command and Control

Gary O. Langford 1, John S. Osmundson 2, Horng Leong Lim 3

1 Department of Systems Engineering, Naval Postgraduate School, Monterey, California, United States, and Doctoral candidate, Defence and Systems Institute, University of South Australia, Mawson Lakes, Australia
2 Department of Information Science, Naval Postgraduate School, Monterey, California, United States
3 Defence Science and Technology Agency, Singapore

[email protected], [email protected]
ABSTRACT
Developers of military command and control (C2) systems presuppose that swarms of remotely piloted vehicles are manageable through increased performance derived from advances in technology. Network-centric reachback systems for the new C2 designs, such as the Distributed Common Ground System (DCGS), are likewise thought to be manageable. However, system of systems solutions that emphasize new technologies have resulted in progressively more demanding software architectures and increased integration risk. This paper develops the functional nature of a C2 system of systems within a framework of systems engineering integration that organizes technology artifacts into manageable functional and process structures. Based on functional, performance, and quality requirements (the triadic decomposition of Value Systems Engineering), message traffic latency was defined and examined as a metric of systems engineering integration. A perspective of C2 emerged that exposed inherent integration risks in traditional architecture paradigms. The results suggest that an alternative solution outside of the existing architecture paradigms may be warranted.
KEYWORDS: Command; Control; Systems Engineering Integration; Value; Frameworks.
1. INTRODUCTION
The general notions for designing future command and control (C2) military systems (e.g., Australia [1]) are premised on technology solutions to aid in managing thousands of
objects in “real-time” that emphasize an adaptive response to changing circumstances. Technological advances in software, materials, and processing speeds increase computing power to keep pace with network bandwidths and message routing; improved computer memory architectures facilitate faster read/write speeds and efficient data storage and retrieval; and enhanced detection and identification performance decreases the timeline for change-detection processing. Ultimately, technology may support varying degrees of autonomous robotic behaviors, which are considered to be of great benefit to future C2 systems [2]. Embedded in the physical C2 architecture is the software architecture that structures and relates the instructions that enable processes to manage data and its dispositions. In this respect, managing data means planning, directing, controlling, organizing, communicating, and coordinating processes. Software architectures have evolved within three primary paradigms¹: Object-Oriented Architecture (OOA), Component-Based Architecture (CBA), and Service-Based Architecture (SBA). Software development is substantially different among the three architectural paradigms [3]. The aim of this paper was to investigate the relationship between system of systems engineering integration and the functional nature of C2, with reference to the architectural paradigm regarding the risk of integration. Based on the development of a system of systems engineering integration framework, the dominant factors in the process and functional domains were analyzed. These factors are indicative of the key drivers of architecture. The key drivers were modeled to relate data latency to the implementations of functions within the three architectural paradigms.

¹ Thomas Kuhn used the word paradigm to describe a set of theories, instruments, standards, values, methods, and assumptions that in totality represent the organization of knowledge. As applied to architecture, a paradigm is the partitioning concept that distinguishes the design.
2. FUNCTIONAL NATURE OF COMMAND AND CONTROL
The general notions for designing future command and control (C2) military systems are premised on the decomposition of ‘to command’ and ‘to control’. In brief, ‘to command’ is to assign missions; provide resources (analyze, prioritize); direct subordinates (guide, set policy, focus the force to accomplish objectives); and analyze risk (identify and assess). ‘To control’ is to define limits; negotiate; deal with constraints; determine requirements; allocate resources; report; and maintain performance (monitor, identify, and correct deviations from guidance). Command is different from control. Their implementations in software are driven by different skill sets, and their code structures differ. Even at the decision-making level, there are profound differences between command and control, often implemented through different people. The differences in software are found in their heterarchies² of processes and functionalities. The physical incarnations of C2 serve as the underlying structure in which to integrate its various processes. The juxtaposition of the physical entities, and the processes and functionalities of C2, can be developed into a framework for integration. To provide the systems integration framework in which to organize and evaluate the functional nature of C2, the methods of Value Systems Engineering [4] were applied to identify value structures and to formulate the losses related to the processes and functions inherent in the C2 architecture. Specifically, the system of systems integration framework was used to measure the risks of cross-system integrations. As a process, integration is the combining of a systematic series of activities that take place in a definite manner, directed to bring about a particular interaction between system elements and sets of system elements. As a function, integration is the relationship between the mechanistic intentions expressed through the design and the performance of the product when delivered. Functions have mechanisms that result in various performances of the system of systems, while processes have activities that have various results. Mechanisms have controls that moderate or manage their action, while activities are guided by enablers (policy and rules) and limitations/constraints (budget and schedule). Both functional and process classes of integration must be considered when discussing the integration of elements into a whole.

² Heterarchy is multi-dimensional. In a group of related items, a heterarchy is an organizing topology wherein a pair of items is related in some way to each other. Connectivity, coupling, and cohesion determine the relation(s). In the context of functional analysis, heterarchy can be multidimensional and multi-simultaneous. Hierarchy is defined as a partially ordered set, that is, a collection of parts with ordered asymmetric relationships inside a whole.
3. INTEGRATION FRAMEWORKS
The organizing framework of integration is the domain and result of the behavior that results from the action of mechanisms and activities within a common physical structure. The structural properties and the experiences of systems are without premise and consequence outside the framework, but are observed through activities (for processes) or actions (for functions) that result from the framework.
Process and Function Integration
Process integration (Figure 1) is the amalgamation of activities and tools that combine ideas into a product.
[Fig. 1. Process Framework for Integration: Process A and Process B, each with inputs (Ii, Iii), activities, and outputs (Oi, Oii); Process A is compared to Process B based on the comparison of losses between Oi and Oii.]
Functional Integration
The other class of integration is functional integration (Figure 2). Functions relate the product’s design to its use. Functions are characterized by an input, an output (related through the performance of the function), a mechanism that converts the input to output performance, and a loss that results from achieving the output performance. Functions describe the physical effect(s) imposed on an energy or material flow by a stakeholder (designer) without regard for process, i.e., the working principles or physical solutions used to accomplish this effect. Processes are the enactments that realize functions.
Frameworks
These frameworks are the measurement entities of integration. They capture the physical domains, relationships, interfaces, form, fit, functions, and processes of a system. A framework is both the context and an analytical space from which to view the consequences of compromise between the relevant pieces of the design and architecture: the requirements, functions, form, simplicity, affordability,
constraints, and the needs of humans operating the system. Design is the relationship between the functions and processes. How the functions and processes group together and interact is termed architecture. Architectural instances and artifacts correspond to the capacity to integrate causal elements. The U.S. Department of Defense (DoD) Architecture Framework (DoDAF) guides descriptive structures and views for describing and depicting architecture, development, and integration. Focusing specifically on integration, the integration framework described in this paper is lifecycle based (through processes and their related functions), rather than view based. DoDAF represents a system in terms of the Operational View – OV (information flow), System View – SV (communications), Technical Standards View – TV (convention), and All View – AV (central aspects that pertain to all views but are not specific to any one). While DoDAF discusses integration in the context of closely relating individual architecture products across the OVs, SVs, TVs, and AVs, there is little discussion of integration other than to point out the relationships and a few critical connections that exist in the architecture products. These connections and relationships are restatements of system interfaces and linkages between standards and functions, all represented through different views. The DoDAF “framework does not address this representation-to-implementation process but reference policies that are relevant to that process” [5]. DoDAF supports the data that is needed for integration, but does not deal with integration at either the theoretical level that promotes understanding or at the knowledge level that guides an integration process. The integration framework proposed in this paper can be used in conjunction with DoDAF, by extending the architecture products into the process and functional domains, or it can be used in conjunction with another architecture paradigm to provide an equivalent structuring of data, interfaces, and architecture products. DoDAF is a highly evolved, mature ontology that facilitates a clear understanding of architecture, its composition, and key drivers. Regardless, the proposed integration framework requires an architecture from which to derive its artifacts, processes, and functions. Architecture provides the determinant partitioning that shows how (1) the system will operate within its boundaries and boundary conditions, (2) interfaces facilitate or restrict information flows, (3) information is related to system functionality, (4) standards and policy implementations constrain system behaviors, and (5) changes in the operational and other environments (e.g., political, economic) impact the system’s performances and capabilities. In essence, the integration framework refers to a means of organizing the system heterogeneities that determine the performance, impacts, and effectiveness of the design and the architecture. System heterogeneities matter when they affect both the output performance of the system of systems framework and the mechanism and control engines in each system that drive the achievement of the system of systems performance. This framework is also a convenient way to view the various facets of the system of systems to simplify decision making and to support planning and evaluation for the risks of integration. The frameworks are constructed within the context of scenarios that support a common theme, which illustrates a consistency from topic to topic within the structure of the three frameworks. The frameworks could be modular so that individual frameworks can be sewn or patched together.
[Fig. 2. Functional Framework for Integration: a function converts an input (energy or material) through a mechanism, moderated by a control, into an output (performance), with a loss incurred to achieve that performance.]
This conjoining is accomplished through a scenario or matrix of vignettes. New frameworks can be inserted into existing scenarios to
compare with previous frameworks. In this manner, functionality can be included or deleted. Following the interactions between functions
that are spliced in or deleted provides fundamental information with which to determine the desired degree of integration between functions.
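To make the process and functional frameworks concrete, the following minimal Python sketch represents them as simple data structures; the class names, fields, and example values are illustrative assumptions, not structures defined by the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, illustrative structures -- names and fields are assumptions,
# not an implementation taken from the paper.

@dataclass
class Function:
    """Functional framework element (cf. Fig. 2): an input is converted to an
    output performance by a mechanism, moderated by a control, at some loss."""
    name: str
    input_flow: str            # energy or material flow accepted by the function
    mechanism: str             # what converts input into output performance
    control: str               # what moderates or manages the mechanism
    output_performance: float  # achieved performance (arbitrary units)
    loss: float                # loss incurred to achieve that performance

@dataclass
class Process:
    """Process framework element (cf. Fig. 1): activities that turn inputs
    into outputs, with an associated loss."""
    name: str
    activities: List[str] = field(default_factory=list)
    output: float = 0.0
    loss: float = 0.0

def compare_processes(a: Process, b: Process) -> str:
    """Compare two processes on the losses associated with their outputs,
    as suggested by the process framework (Fig. 1)."""
    return a.name if a.loss <= b.loss else b.name

# Example usage with invented values
track = Function("track object", "sensor data", "correlator", "tasking policy",
                 output_performance=0.9, loss=0.1)
process_a = Process("Process A", ["collect", "correlate"], output=0.90, loss=0.15)
process_b = Process("Process B", ["collect", "fuse", "correlate"], output=0.92, loss=0.25)
print(track.output_performance, track.loss)
print(compare_processes(process_a, process_b))  # -> "Process A"
```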
4. INTEGRATION RISK
Risk and risk management are central themes and concerns in system of systems integration. Risk is defined as an uncertain event that may cause a failure to perform as desired. To quantify risk, we determine the likelihood and consequence of an uncertain event; i.e., risk is measurable. Yet, as is often the case with technology that is newly developed and likely misunderstood, there may be insufficient information to determine the probability that an uncertain event may occur. It is both the introduction of uncertain interactions and the sheer number of interactions that create problems for system of systems integration. Risk is generally viewed as manageable. However, the notion that risk is manageable holds only to the extent that the processes of managing an activity (in this case the six processes of planning, directing, controlling, communicating, organizing, and consensus building) contribute an understanding of the likelihood component of risk. This is to say that the likelihood of an uncertain event is more than a simple probability based on either historical or modeled data. Sparrow [6] defines operational risk management as the systematic assessment and management of the trade-offs made between risk and opportunity to run an efficient and effective organization. A primary design goal for building and sustaining a lifecycle solution is to perform the required C2 missions while preventing the undesirable consequences of processing data for thousands of objects in a target-rich environment. These consequences include overwhelming the C2 network capacity and throughput capabilities. The end result of a reduction in capacity (or an increase in caching) is data latency. Any technology solution must
assume that the systems engineering and management processes and their respective enactments are sufficient to build and sustain a final C2 lifecycle solution. Changes in technology have resulted in systems that have become more difficult over time to build and to integrate. That difficulty can be expressed as a risk associated with technology development, and specifically, the relationship between technology and integration. To a first order, the risk of systems integration can be measured by two variables: the number of system elements and the number of connections between elements. Al Mannai and Lewis [7] derive the overall system risk of a network of nodes and links as

R = \sum_{i=1}^{n+m} X_i (1 - a_i) g_i W_i    (1)
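As a hedged illustration of how Equation 1 might be evaluated for a toy network (the variables are defined in the paragraph that follows), consider the minimal Python sketch below; the element data are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of Equation 1 for a small, hypothetical network of two nodes
# and one link. The X, a, g, and W values are illustrative assumptions only.

elements = [
    # X: events that could impact the element, a: availability,
    # g: degree of the element, W: worth of the element
    {"name": "node 1",   "X": 1.0, "a": 0.95, "g": 3, "W": 10.0},
    {"name": "node 2",   "X": 1.0, "a": 0.90, "g": 2, "W": 6.0},
    {"name": "link 1-2", "X": 0.5, "a": 0.99, "g": 1, "W": 4.0},
]

def system_risk(elements):
    """Overall system risk R = sum over elements of X_i * (1 - a_i) * g_i * W_i."""
    return sum(e["X"] * (1.0 - e["a"]) * e["g"] * e["W"] for e in elements)

print(f"R = {system_risk(elements):.3f}")
```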
in which X_i is the set of events that could impact the ith element, n denotes the number of elements, m is the number of links between elements, g_i is the degree of the ith element, a_i is the availability of the node or link as a percent of total availability (i.e., the probability of being designed and implemented in a manner that is consistent with the intent of the designer), and W_i is the worth of each element. Worth (W), or equivalently the use of a product or service as represented by the functions and their related functional attributes (performance, quality, and investment), is defined as the Value (V) of the ith element multiplied by the Quality (Q) of the ith element. Value (V) is defined as the ratio of performance (P) to investment (I), the fundamental premise of Value Engineering as defined by Miles [8]. Value compares what one receives with what one has invested. For example, if two products with factually comparable features and quality are offered for different prices, the value of the lower-priced product is higher than that of the other product, as indicated by Langford [9]. The measurable worth of a system is the actual and expected use of a product or service relative to the investment made to obtain the system and the losses
incurred as a result of the product’s lifecycle. The Value Engineering equation relates the sum of value for each element to the lifecycle sums of performance and cost for the system. The investment can be incremental and summed to equal the lifecycle investment, or partitioned and considered a unit or item of investment. The delineation of a function in terms of its performance and the quality of that performance is termed the triadic decomposition of the function F(t). The system value, V(t) for all functions, is given by Equation 2 as

\sum V_F(t) = \sum \frac{P_F(t)}{I_F(t)}    (2)
where F is a function or non-linear aggregation of functions performed by the system, P_F(t) is the performance measure (units of energy, material, or wealth) of the function(s) F(t), I_F(t) is the investment (e.g., dollars or other equivalent assets that are ‘at risk’), and the time t is measured relative to the onset of initial investment in the project. The units of V_F(t) can be expressed as energy divided by cost, material divided by cost, or wealth divided by cost. The summation in Equation 2 is simplified for the purposes of this discussion and thus shown over all functions, performances, and investments. Performance indicates how well a function is performed by the system and is an objective measure of its related function. The change in performance of a system element due to the transfer of energy from another element is equal to the work done by the system. Performance is accomplished with reference to the cost per unit time as well as to the total time over which the performance occurs. Incorporating the variable of time and then factoring it results in the value equation, which portrays the metric of performance per rate of investing (e.g., spending):
\sum V_F(t) = \sum \frac{P_F(t)}{I_F(t)/t} \cdot \frac{1}{t}    (3)
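A brief, hedged sketch of the sums in Equations 2 and 3 for a handful of hypothetical C2 functions follows; the function names, performance measures, and investments are invented for illustration.

```python
# Illustrative evaluation of Equations 2 and 3. The functions, performance
# measures P_F(t), and investments I_F(t) are hypothetical values.

functions = [
    # (name, P_F(t), I_F(t))
    ("detect",   100.0, 2.0e6),
    ("identify",  80.0, 1.5e6),
    ("report",    60.0, 0.5e6),
]

def system_value(functions):
    """Equation 2: sum over functions of P_F(t) / I_F(t)."""
    return sum(p / i for _, p, i in functions)

def system_value_rate(functions, t):
    """Equation 3: sum of P_F(t) / (I_F(t)/t), times 1/t -- performance per
    rate of investing, evaluated t time units after the initial investment."""
    return sum(p / (i / t) for _, p, i in functions) / t

print(system_value(functions))
print(system_value_rate(functions, t=24.0))  # e.g., 24 months into the project
```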
In essence, this formulation of Value Systems Engineering implies that functions result in capabilities, where performances differentiate competing products and quality affects the lifecycle cost of the product. For each function there is at least one pair of requirements: a set of performance requirements for each function, commensurate with a set of quality requirements for each performance requirement. The quality requirement reflects the variation, and the impact of the variation, of the performance requirement of a function. Quality indicates how well a function is accomplished (through its performance) by the system. Taguchi [10] introduced the concept that quality can be thought of as a measure of the variation, and the impacts of the variation, of the performance requirement(s) or of that achieved by the performance associated with its related function. Most importantly, quality can be thought of as a measure of the loss due to the performance of the system. The performance requirement is measurable and testable. The quality requirement derives from the view of the system throughout its lifecycle and characterizes the system losses due to pretermitted functions (i.e., non-delivery of the system’s functionality) or operations beyond the range of specified performance tolerances. A system function may thus have any number of performance parameters, and likewise several quality requirements can be associated with a given level of performance. Worth is the summation of Value for the system elements multiplied by the quality achieved by the enactments of performance of the system. Multiplying the numerator and denominator in Equation 3 by P(t) results in a performance metric which indicates quality as a measure relative to performance:
W(t) = \sum \left[ V_F(t) \cdot Q_p(t) \right] = \sum \frac{P_F(t)}{I_F(t)/t} \cdot \frac{P_F(t)\, Q_p(t)}{t\, P_F(t)}    (4)
where Q_p(t) is the quality (which can be considered a tolerance attributable to P_F(t)). Stipulating the units of Q_p(t) to be the same as those of I_F(t) determines the unit of W(t) to be that of P_F(t). The summation in Equation 4 is simplified for the purposes of this discussion, and thus shown over all functions, performances, qualities, investments, and temporal notions. Equation 4 is referred to as the Systems Engineering Value Equation with Risk (SEVER). Substituting W from Equation 4 into R (Equation 1) yields the overall system risk of integrating a network of nodes and links. Technology is any invention, discovery, improvement, or innovation of property, real or intellectual, that is conceived or reduced to practice. Technology is concerned with how society’s resources can be combined to yield economic goods and services [11]. Integration can be thought of as achieving a degree of interoperability and interconnectivity of elements through physical, functional, and process artifacts. The nemesis in building a C2 system of systems that must handle swarms of objects is formulating the abstractions and granularity of the design blocks so as to prevent overlapping and non-overlapping blocks with inappropriate or inadvertent connectivity, coupling, or cohesion within and between the blocks. The consequence is an increase in development costs, notably integration costs. The general notion that design is a hierarchy of blocks fails to account for the requirements for integration. The simple model for decomposition is an ordered set of processes, functions, or physical entities that are concatenated both in abstraction (vertically, according to increasing levels of detail) and in granularity (horizontally, codifying the contents of levels by role). The aim of abstraction is to isolate the appropriate level of interaction by omitting details while still capturing the essence of the logical layers necessary for integration. Granularity, however,
deals with the organization of data and information. Errors in abstracting (i.e., poorly defining the level of detail) manifest as design flaws that encumber work packages with inefficiencies in the labor to accomplish work. Granularity deals with grouping and bounding the domain of the block. Membership in one and only one block results in “smooth” (complete, discrete, and contiguous) concatenations between and across the blocks that comprise the C2 system mission threads [12]. Overlapping blocks create duplication and inefficiencies, which result in higher development costs. Overlapping processes may also inhibit activities due to overlapping jurisdiction and enforcement enactments. By definition, non-overlapping blocks do not account for all activities or events that are necessary to accomplish the C2 mission threads. The effect is to create increased costs due to the need for processes to complete ad hoc tasks.
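To tie the formulation together, the following hedged sketch computes each element's worth from Equation 4 and substitutes it into the risk sum of Equation 1 (the SEVER substitution); the element names, performance, investment, quality, and availability figures are assumptions for illustration, not data from the paper.

```python
# Illustrative SEVER substitution: worth W_i from Equation 4 feeds the risk
# sum of Equation 1. All element data below are hypothetical.

elements = [
    # name, P: performance, I: investment, Q: quality (same units as I),
    # X: impacting events, a: availability, g: degree
    {"name": "sensor node",  "P": 100.0, "I": 2.0e6, "Q": 1.8e6, "X": 1.0, "a": 0.95, "g": 3},
    {"name": "fusion node",  "P":  80.0, "I": 1.5e6, "Q": 1.2e6, "X": 1.0, "a": 0.90, "g": 2},
    {"name": "C2 data link", "P":  60.0, "I": 0.5e6, "Q": 0.4e6, "X": 0.5, "a": 0.99, "g": 1},
]

def worth(e, t):
    """Equation 4 term for one element: [P/(I/t)] * (1/t) * Q, carrying the units of P."""
    return (e["P"] / (e["I"] / t)) * (1.0 / t) * e["Q"]

def integration_risk(elements, t):
    """Equation 1 with W_i taken from Equation 4: R = sum X_i (1 - a_i) g_i W_i."""
    return sum(e["X"] * (1.0 - e["a"]) * e["g"] * worth(e, t) for e in elements)

print(f"R = {integration_risk(elements, t=24.0):.2f}")
```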
5. CONCLUSION
Applying the factors of integration (abstraction, granularity, connectivity, coupling, and cohesion) to the software architecture of a C2 system reveals risks that can be characterized by increases in data latency, regardless of the architecture paradigm. These risks derive from the functionalities and processes that are inherent in the C2 structures. Changes in technology have led to increased processing capabilities, which in turn have challenged software to be more efficient in its design and constructs. Viewing the problem of increased data latency from the perspective of the gamut of technology, rather than from the underlying causalities of software constructs, serves to combine the hardware/software issues into the genre of technology. From a risk perspective, technology can then be characterized by three types: Type 1, a change in operational methodologies or an innovation that enables new uses of an existing product; Type 2, the addition, removal, or reclassification of structures
(physical, function, or process) to improve the effectiveness of an existing product; and Type 3, the creation of novel solutions that are new adaptations of existing technology or developments of new technology. Type 1 changes in technology center on the properties of interfaces: connectivity (the means to express the joining of physical entities, functions, or processes), coupling (the properties of interdependence), and cohesion (the relation between elements). Type 2 includes Type 1 changes with an additional focus on the level of detail in a design hierarchy (i.e., abstractions). Type 3 includes Type 2 changes with the inclusion of changes in partitioning within a level of the design hierarchy (i.e., granularity). Enactments of Type 1 events that change operations carry low risk for systems integration since the product remains the same but facilitates new or improved methods or uses. Carefully done, credit cards can be used to scrape ice off windshields with minimal risk of rendering the card unusable for financial transactions before its expiration date. Type 2 changes pose a moderate risk to systems integration due to structural changes in the design hierarchy, specifically the vertical nesting of detail. Removing, redefining, or missing a layer’s detail results in modifications to connectivities (resulting in changes in relations between affected functions and processes); alterations to the coupling between elements (requiring a review of the degree of interdependencies); and potential differences in the cohesion between elements (impacting domain relations). Increasing or decreasing detail thus impacts the properties of interfaces across some or all elements. The consequences of increasing the details that must be reported to regulatory agencies include the possibility of extensive discussion to provide the proper context of the data and information presented, and perhaps intensive investigations into newly disclosed considerations. One can appreciate the impact of a missing detail that must be discovered and ameliorated with new code, unit testing,
verification, and low-level integration with other work packages. If the missing detail is discovered early in the development process, the impact is low. However, if the problem is not discovered and remediated until subsystem integration, the cost and time to fix it are much higher. Given the number of missing items that are discovered (or dealt with) during integration, the overall risk for Type 2 changes is moderate. The ability to predict the cost and schedule of Type 2 changes in technology integration is problematic. Type 3 changes challenge the accuracy of systems integration estimates with regard to the time and cost to complete, and pose a substantial risk to systems integration. In addition to changes in the properties of interfaces (Type 1) and the removal or addition of detail in a product’s design combined with Type 1 (Type 2), the reclassification and regrouping of physical entities, functions, and processes alter the organization of types, concepts, relations, and constraints. A substantial portion of a product is determined by granularity and is therefore impacted by Type 3 changes. A technology that changed operations dramatically was the introduction of the workplace computer. The modeling of data latency showed a correlation between changes in technology (by Type) and the risks associated with system of systems integration. The fundamental driver of this risk was founded on the ability to characterize the relationships between system elements through the factors of integration. This paper suggests that an analysis of the essential factors of integration (abstraction, granularity, connectivity, coupling, and cohesion) may forecast risks in integration, including system of systems integration.
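As a compact, hedged restatement of this classification, the sketch below encodes the three technology change types, the integration factors each touches, and the qualitative risk assigned to each; the encoding is an illustrative paraphrase of the text above, not a model from the paper.

```python
# Illustrative encoding of the three technology change types and their
# associated integration factors and qualitative risk (paraphrased from the text).

TYPE_1 = {"factors": {"connectivity", "coupling", "cohesion"}, "risk": "low"}
TYPE_2 = {"factors": TYPE_1["factors"] | {"abstraction"}, "risk": "moderate"}
TYPE_3 = {"factors": TYPE_2["factors"] | {"granularity"}, "risk": "substantial"}

CHANGE_TYPES = {1: TYPE_1, 2: TYPE_2, 3: TYPE_3}

def integration_risk_for(change_type: int) -> str:
    """Return the qualitative integration risk associated with a change type."""
    return CHANGE_TYPES[change_type]["risk"]

# Example: reclassifying physical entities, functions, and processes is a
# Type 3 change and therefore carries substantial integration risk.
print(integration_risk_for(3))  # -> "substantial"
```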
ACKNOWLEDGEMENTS
This research was made possible by support from Singapore (Defence Science and Technology Agency and Temasek Defense Systems Institute); the Defence and Systems Institute at the University of South Australia, Mawson Lakes Campus; and the Naval Postgraduate School.
REFERENCES
1. Unewisse, M., and Grisogono, A. (Australian Department of Defence), “Adaptivity Led Networked Force Capability”, Twelfth International Command and Control Research and Technology Symposium (12th ICCRTS), Newport, RI, Paper #1-200, 19-21 June 2007.
2. U.S. Naval Postgraduate School, “An Integrated Command and Control Architecture Concept for Unmanned Systems in the Year 2030”, Systems Engineering and Analysis Project (SEA-16), NPS-SE-10-003, Monterey, June 2010.
3. Wang, G., and Fung, C., “Architecture Paradigms and Their Influences and Impacts on Component-Based Software Systems”, Proceedings of the 37th Hawaii International Conference on System Sciences, 0-7695-2056-1/04, IEEE, 2004.
4. Langford, Gary O., “Foundations of Value Based Gap Analysis: Commercial and Military Developments”, Paper #342, 19th Annual International INCOSE Symposium, Singapore, 2009.
5. U.S. Department of Defense, DoD Architecture Framework Working Group, DoD Architecture Framework, Version 1.0, Volume 1: Definitions and Guidelines, 9 February 2004.
6. Sparrow, A., “A Theoretical Framework for Operational Risk Management and Opportunity Realization”, U.S. Treasury Department, October 2000.
7. Al Mannai, W. I., and Lewis, T. G., “Minimizing Network Risk with Application to Critical Infrastructure Protection”, Journal of Information Warfare, Volume 6, Issue 2, pp. 52-68, 2007.
8. Miles, L. D., Techniques for Value Analysis and Engineering, 2nd Edn., McGraw-Hill, New York, 1972 [first published 1961].
9. Langford, G. O., “Reducing Risk of New Business Start-ups Using Rapid Systems Engineering”, Paper #140, Proceedings of the Fourth Annual Conference on Systems Engineering Research, April 7-8, 2006.
10. Taguchi, G., Introduction to Quality Engineering, Asian Productivity Organization, 1990.
11. U.S. National Aeronautics and Space Administration, “Executive Summary: Quantifying the Benefits to the National Economy from Secondary Applications of NASA Technology”, NASA-CR-145963, 30 June 1975.
12. Gagliardi, M., Klein, J., Wood, W. G., and Morley, J., “A Uniform Approach for System of Systems Architecture Evaluation”, CrossTalk – The Journal of Defense Software Engineering, Mar/Apr 2009.