Proceedings of the ASME 2015 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2015 August 2-5, 2015, Boston, Massachusetts, USA
DETC2015-46702 FUNDAMENTALS OF A MEREO-OPERANDI THEORY TO SUPPORT TRANSDISCIPLINARY MODELING AND CO-DESIGN OF CYBER-PHYSICAL SYSTEMS Imre Horváth Faculty of Industrial Design Engineering Delft University of Technology The Netherlands
[email protected]
ABSTRACT
The main statement of this paper is that synergetic modeling and co-design of the hardware, software and cyberware parts of complex cyber-physical systems (CPSs) have not yet been solved, even from the perspective of an underpinning transdisciplinary theory. CPSs contain functionally tightly connected analog and digital hardware, control and application software, and knowledge, data, and media contents as cyberware. The lack of a unified theoretical framework and an all-inclusive system conceptualization methodology can be traced back to professional, methodological and cultural differences between the abovementioned domains of development. The objective of our research is to make a step towards a theoretical framework that can support transdisciplinary modeling of CPSs. Architectural and operational modeling have been identified as two principal and interrelated dimensions of system modeling, and a mereo-operandi theory (MOT) has been identified as the target. Mereotopology has been considered as the basis of architectural modeling. Operational modeling has been based on a parameterized representation of the underlying physical principles, the morphological characteristics, the operation elements, and the overall operation flows. A demonstrative case study is presented to show the practical feasibility and utility of the proposed MOT. Our follow-up research will focus on using this as a conceptual framework and computational basis for the specification of system manifestation features, and on a computational implementation to support embedded customization.
KEYWORDS
Cyber-physical systems, architecture modeling, operation modeling, mereo-operandi theory, units and flows of operation
Shahab Pourtalebi Faculty of Industrial Design Engineering Delft University of Technology The Netherlands
[email protected]
1. INTRODUCTION
1.1. Introducing the problem and the objectives
A comprehensive study of the current literature informed us that synergetic modeling and co-design of cyber-physical systems (CPSs), which contain countless analog and digital hardware components, control and application software components, and knowledge, data and media contents as cyberware components, have not yet been solved [30] [24] [40]. Pre-embodiment design of hardware, software and cyberware presently happens in different manners, apart from the advancement achieved in certain areas such as embedded systems and collaborative robotics. The reasons can be found partly in the differences in the design and implementation concerns, and partly in the cultural (methodological) differences between the abovementioned areas of development [38]. We believe that the current disciplinary separation is also caused by the lack of a unified theoretical framework and a comprehensive system conceptualization methodology. Instead of a unified theory, disconnected partial theories are typically used for multi-disciplinary ideation and conceptual modeling of cyber-physical systems. CPSs are not only heterogeneous, but also complex systems that may show various emergent characteristics and behaviors [3] [25]. They typically have many interactions with their surrounding environments and elicit the information controlling their operation from real-life processes [23]. Concurrent, trans-disciplinary modeling and co-design of their hardware, software and cyberware parts in the early phase of development can lead not only to process advantages, but also to quality enhancements and market benefits. Towards these, our objectives have been stated as: (i) development of an amalgamating theory and a theoretical framework for
Copyright © 2015 by ASME
supporting pre-embodiment design, and (ii) conceptualization, implementation and testing of a methodology for transdisciplinary development of CPSs, starting out from the notion of system features [34]. The issues related to the abovementioned differences, complexity, heterogeneity, and computability added up to a considerable challenge for this work.
1.2. Initial assumptions of our research work
Our intention differs from that followed by many other researchers. For instance, the developers of Ptolemy 2 created a modeling environment in which various hierarchically combined software tools are used to support heterogeneous system design by allowing diverse computational models to coexist and interact [6]. The objective of Ptolemy was to define and produce embedded software together with the systems within which it is embedded [9]. Typically, each model represents only one aspect of the entire system, and thus only part of its total behavior [19]. We took on a more challenging undertaking: to develop a unified theory that allows similar representation and simulation for all heterogeneous components and their constituents. The latest literature advised us on where to look for the roots of a new theory, how to formulate it to be reasonably comprehensive, and how to operationalize it in a computer-aided practical methodology. The starting point was the theory of mereotopology and the recent efforts to extend it from the spatial domain to the spatiotemporal domain and beyond [4] [11] [12] [14]. Our theory development has been based on the following four fundamental assumptions:
Assumption 1: Cyber-physical systems are aggregates of system components, which are coarse-grained building blocks with inside cohesion among the included entities (constituents) and with minimal interfaces to the outside.
⌂ Components of a system can be homogeneous, when they include only hardware (HW), software (SW), or cyberware (CW) constituents, or heterogeneous (aggregated), when they are formed by intertwining HW, SW, and/or CW constituents. Components are typically self-contained entities from an operational point of view, while constituents are not. Systems are regarded as aggregates of components, and components as aggregates of components or constituents. This reflects a purely physical view, rather than a logical (set-theoretic) or abstract (functional) view. The advantage of the purely physical view is that the identification of system components and the distinction of HW, SW and CW constituents are straightforward in the physical realm. In the rest of the paper, the terms ‘system’, ‘component’, ‘constituent’ and ‘element’ are also used as semantic identifiers.
Assumption 2: In order to provide a proper pre-embodiment model of cyber-physical systems, it is necessary and sufficient to describe and characterize them from an architectural and an operational perspective.
⌂ The architectural part of such a system model makes it possible to describe what components the system consists of,
while the operational part captures what the system as a whole and the components do. However, the two perspectives are mutually dependent on each other. Therefore, they should be combined, no matter whether a comprehensive system model or a specific component model is considered. As an implication, we regard CPSs as complexes of interrelated architecture and operations in the rest of the paper.
Assumption 3: Instead of an exhaustive description of all operations, which would consider each and every primary operation (e.g. mechanical transformation by a gearbox), secondary operation (e.g. thermal waste due to friction), and tertiary operation (e.g. dislocation of atoms towards a crack), a non-exhaustive operational description is sufficient for system or component development.
⌂ By using the term ‘operation’ we refer to those transformations and/or changes of a system which happen in the physical (spatiotemporal) space, and which can be recognized by scientific observations, measurements, or any other means. We hypothesized that CPSs can be sufficiently characterized by their primary operations in the pre-embodiment phase of development. Secondary and tertiary phenomena, which are not related to, or have only an infinitesimal influence on, the primary operations, can usually be neglected in this phase.
Assumption 4: All systems, including CPSs, can be decomposed into a finite number of semantically meaningful units that capture both architectural and operational aspects. These units are called system manifestation features.
⌂ The idea of form and application features is well known in the engineering literature and practice. By definition, features are regions of an artifact or process that have semantic meaning in a specific context, as well as significance from a particular aspect. However, the well-known theory and methodology of shape-induced mechanical part features cannot be adapted straightforwardly to system features.
One reason is that, in the case of CPSs, system features can be interpreted on multiple de-aggregation levels. Therefore, system features need to be defined and implemented on multiple levels of the structural hierarchy. Moreover, the principles of feature-based design should be extended to cope with the heterogeneity of constituents and manifestation levels.
1.3. What is presented in this paper?
The next section discusses various aspects of knowledge synthesis and the pillars of the proposed mereo-operandi theory, including the implications of the assumptions and the opportunities offered by an extended mereotopological approach. The third section presents the mereotopology-based modeling of system architecture. The fourth section deals with capturing and representation of the operations of CPSs. It presents how the physical phenomena, the morphology of components, the physical operation of the HW, SW, CW and hybrid components, and the flow of operations are modelled and integrated for an entire system. A subsequent section introduces an application case with the intent to show the practical feasibility and utility of the proposed MOT. Finally, we reflect on the progress
achieved so far, formulate some propositions, and provide further details on our follow-up research. For the sake of completeness, in this paper we only introduce, but do not discuss, the concepts and implementation of system manifestation features.
2. IMPLICATIONS OF THE ASSUMPTIONS FOR THE MEREO-OPERANDI THEORY
The presented work focuses on a general theory that supports trans-disciplinary modeling of cyber-physical systems by combining architecture and operation modeling, and by blending the principles of mereotopology and modus operandi. That is, while previous efforts were made to capture internal and external object-change relationships from a spatiotemporal perspective, our work tries to capture and represent architecture-operation relationships by means of a physical extension of the mereotopological theory [28] [48] [21]. The long-term objective is to develop a system manifestation feature-based early conceptualization approach, which can be utilized in embedded customization, preventive failure analysis, internal interoperation analysis, and implementation and operational cost forecasting of cyber-physical systems. The starting point of the theory development was the four assumptions introduced in sub-section 1.2.
Assumption #1 implies the need to consider a CPS as a heterogeneous composition of HW, SW and CW constituents. HW constituents comprise three primary (next lower level) constituents:
HW := { AHW, DHW, APS }
where: AHW = analogue hardware, DHW = digital hardware, and APS = analogue physical signals. The term software is used to refer to a set of computer programs (or executable codes, computational procedures, and associated digital documents). SW constituents comprise three primary (next lower level) constituents:
SW := { SSW, ASW, DSW }
where: SSW = system software, ASW = application software, and DSW = development software.
In the presented first version of the developed theory, we considered the following types of SSW, based on their roles in CPSs, as secondary-level constituents (in alphabetical order):
SSW := { AC, CA, DS, DT, EI, IC, NM, RE, SC, SM, SU }
where: AC = actuator control, CA = collaborative agent, DS = data storage, DT = data transmission, EI = external interaction, IC = internal cooperation, NM = network management, RE = reasoning engine, SC = sensor control, SM = system monitoring, and SU = system utility system software. ASW is used to solve specific communication, computational, or visualization tasks, and/or to provide various other user services. From the perspective of CPSs, we considered the following types of application software (in alphabetical order):
ASW := { AM, CM, DA, DM, EM, GE, GV, MH, NB, PC, SE, SG }
where: AM = access management, CM = content management, DA = data analytics, DM = data mining, EM = energy management, GE = game engine, GV = graphical visualization, MH = multimedia handling, NB = network browser, PC = process control, SE = specific engineering, and SG = service generator application software. An ASW can be user-designed as well as ready-made. We do not regard DSW as part of a CPS. The CW constituents comprise three types as primary (next lower level) constituents:
CW := { SEK, RDS, DMW }
where: SEK = system enabling knowledge, RDS = repository data structures, and DMW = digital media ware. Note that some of the secondary (lower level) constituents may not necessarily be present in a specific CPS.
Assumption #2 hypothesizes that an adjoined representation of the architecture and the operation of a system, which are mutually dependent on each other, is necessary and possible. In order to decompose the architecture of a CPS, we introduced the notion of an architectural domain. It is a finite and bounded part within the whole physical extent of a CPS, which lends itself to a particular set of operations. It can be represented by one single system component or by an aggregate of operationally related components. Let us identify a particular CPS by the symbol Σ, denote the whole physical extent of Σ by DΣ, and a given architectural domain of it by Di. Likewise, let us denote the overall operation of Σ by OΣ, and the set of operations occurring on, or related to, a particular Di by Oi,j. In addition, let us introduce the symbol « » to denote the mutual relations between the domains Di and the related operations Oi,j of a Σ. With these, a CPS can be defined by the following symbolic formula:
Σ := DΣ « » OΣ
where DΣ = ⋃ Di, and OΣ = ⋃ Oi,j. In the above formula, the symbol ⋃ represents the union operation, which is seen as the logical equivalent of the aggregation of Di and proper combinations of Oi,j.
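The constituent taxonomy and the domain-operation coupling Σ := DΣ « » OΣ above can be sketched as plain data structures. The following is an illustrative sketch only, not part of the MOT formalism; the class name, method names, and the example domains are our own assumptions, while the constituent abbreviations are taken from the text.

```python
# Illustrative sketch (not part of MOT): the constituent taxonomy and the
# domain-operation coupling Sigma := D_Sigma << >> O_Sigma as Python data.

# Primary constituent types, taken from the taxonomy in the text.
HW = {"AHW", "DHW", "APS"}   # analogue HW, digital HW, analogue physical signals
SW = {"SSW", "ASW", "DSW"}   # system, application, development software
CW = {"SEK", "RDS", "DMW"}   # enabling knowledge, repository data, digital media

# Secondary-level system software constituents (alphabetical, as in the text).
SSW = {"AC", "CA", "DS", "DT", "EI", "IC", "NM", "RE", "SC", "SM", "SU"}

class CPS:
    """A CPS Sigma as the mutually related union of its architectural
    domains D_i and the operations O_ij occurring on, or related to, them."""
    def __init__(self):
        self.domains = {}      # D_i -> set of constituent type tags
        self.operations = {}   # D_i -> set of operations O_ij related to it

    def add_domain(self, name, constituents, operations):
        self.domains[name] = set(constituents)
        self.operations[name] = set(operations)

    def D(self):  # D_Sigma = union of all D_i
        return set(self.domains)

    def O(self):  # O_Sigma = union of all O_ij
        return set().union(*self.operations.values()) if self.operations else set()

sigma = CPS()
sigma.add_domain("D1", {"AHW", "SC"}, {"sense_temperature"})
sigma.add_domain("D2", {"ASW", "RDS"}, {"log_reading", "raise_alarm"})
```

The union method mirrors the ⋃ in the formula: the system-level extent and operation sets are aggregated from the per-domain sets.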
Note that the notions of domain and operation are considered inseparable in the logical space L and in the physical space R3, as discussed above. Each architectural domain has at least spatial relations with others. Considering the HW, SW and CW constituents of Σ, these relations will be elaborated on in sub-sections 3.2 and 3.3.
Assumption #3 postulates that the working of a system can be captured by a necessary and sufficient subset of operations. Let us call it the characteristic operation and denote it by COΣ. Let us denote the pertinent sub-set of primary operations by OΣ’, the secondary operations by OΣ’’, and the tertiary operations by OΣ’’’. With these:
COΣ := OΣ’ ⋃ OΣ’’ ⋃ OΣ’’’
where: COΣ ⊆ OΣ, Oi,p’ ⊆ OΣ’, Oi,q’’ ⊆ OΣ’’, Oi,r’’’ ⊆ OΣ’’’. With regard to performing these operations, it is assumed that operations are enabled by: (i) one or more interacting physical, computational, informing, or synergic phenomena, φi,k, appearing on a Di, which has: (ii) a particular spatial
arrangement and/or extent of a Di, namely µi, and which (iii) perform particular transformative actions, αi,l, or changes, ζi,m, such that (iv) αi,l or ζi,m happen in a logically proper and physically feasible manner as observable operations, OΣ’, OΣ’’, or OΣ’’’. A more specific discussion of the specification of system operations, including HW, SW and CW constituents, will be provided in sub-sections 4.3 and 4.4.
Finally, Assumption #4 postulates that CPSs can be de-aggregated into a finite number of distinct (semantically identifiable) entities depending on their actual physical manifestations. The result of the subsequent de-aggregation steps is a set of interrelated system components. The opportunity of composing hierarchical structures from a set of components has already been utilized in various approaches of component-based design (CBD). CBD has become a kind of universal approach in the current practice of HW, SW and CW development. CPSs usually consist of a large number of heterogeneous components (compounds of HW and SW, or SW and CW constituents), but CBD does not have restrictions in this regard. Nevertheless, CBD separates component definition from component composition. Components represent a meso-level between the macro-level formed by subordinate systems, system modules and system units, and the micro-level of compound parts, elementary parts, and other basic entities. Typical representatives of homogeneous system components are commercial mechanical parts, software units, and ontology concepts. While HW, SW and CW constituents may appear as distinct components on a lower level of decomposition, they may become operationally connected on a higher aggregation level.
3. MODELING OF SYSTEM ARCHITECTURE
3.1. A mereotopology-based approach of modeling
Transdisciplinary system modeling raises the need for concurrent handling of architectural and operational aspects, relations, and parameters. At the same time, capturing and representing the architectural and operational aspects of a CPS need different means. The architectural representation is supposed to identify all physical entities and their mutual relationships. The operational description should capture: (i) the enablers (causes) of operations, (ii) the actors of operations, (iii) the working space of operations, (iv) the changes caused by the effects of the enablers, (v) the changes caused by the interaction(s) of the enablers, (vi) the effects of the working space, and (vii) the logical and chronological flow of operations. Several efforts have been made towards an application-context-independent description and integration of the architecture and the operation of electromechanical and electronic systems. Among others, publications such as [18] [37] [42] [44] reported on results that are relevant in the context of our research. However, the overwhelming majority of the published approaches did not specifically consider unification of the representation and handling of HW, SW and CW constituents [32] [33] [41]. Mereology, mereotopology (MT) and spatiotemporal
mereotopology (ST-MT) have recently been used for a formal representation of metric spaces. It is also indicated by the literature that efforts have been made towards various purposeful extensions of the mereotopological theory in order to capture physical aspects of the existence of spatiotemporal entities. In its broadest meaning, mereology captures how physical and conceptual parts relate, and what it means for a part to be related to the whole and to other parts, while MT extends these with connectivity relations among the entities [49]. Addressed by classical mereology, part-whole relations essentially involve a notion of mereological fusion, or sum. As defined by Smith, MT combines mereology with a topological component, thereby allowing the formulation of ontological laws pertaining to the boundaries and interiors of wholes, to relations of contact and connectedness, to the concepts of surface, point, neighborhood, and so on [45]. The topological extension of mereology provides an ontological specification of a conceptualization of the spatiotemporal entity relations of a system [8]. MT offers multiple theories that differ from each other in terms of the underpinning concepts (for instance, they use either points or regions to reason about spatial relations of physical entities). Most often, regions are used as the entities of MT [47]. Regions are proper parts of the whole, and have various (not necessarily permanent) relations with each other in space. Based on regions, MT can provide a manifestation-independent logical specification for computational processing of the architectures of artifacts [7] [31]. The mereotopological abstraction makes it possible to describe domains without considering their actual metrics (sizes), morphology (shape), and materialization (attributes). However, regions and their relationships may change in time, for instance, in the case of assembly modelling of non-steady spatial structures [1].
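The core mereotopological relations mentioned above (parthood, proper parthood, overlap, connection) can be illustrated with regions represented as finite point sets. This is a minimal sketch of the standard region-based MT predicates, not the paper's own formalization; in a discrete point-set model, connection collapses to sharing at least one element, which we note in the comments.

```python
# A minimal, region-based sketch of core mereotopological predicates, with
# regions represented as finite point sets (frozensets).  Illustrative only;
# in this discrete model, connection is approximated by sharing a point.

def part_of(x, y):            # P(x, y): every point of x is in y (reflexive)
    return x <= y

def proper_part_of(x, y):     # PP(x, y): part of, but not identical
    return x < y

def overlaps(x, y):           # O(x, y): x and y share a common part
    return bool(x & y)

def connected(x, y):          # C(x, y): approximated here as sharing at
    return bool(x & y)        # least one (boundary) point

whole = frozenset(range(10))  # the whole artifact as a region
a = frozenset({0, 1, 2, 3})   # two regions that share the point 3,
b = frozenset({3, 4, 5})      # i.e. they overlap and are connected
```

The manifestation-independence discussed in the text corresponds to the fact that these predicates never inspect sizes, shapes, or attributes of the regions, only their part and connection structure.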
Specification of the temporal characteristics of regions and reasoning about their relationships in time need temporal logic, not only predicate logic. This has been included in the theory of spatiotemporal mereotopology [36]. In addition, various theories are available to handle multi-dimensional mereotopology [46] [20]. Our investigation has shown that further ‘physicalization’ of the entity relationships is possible. It has been argued that ST-MT can capture complex operational relations in the physical domain [43]. As mentioned earlier, this is an objective of our research. MOT is developed to allow concurrent architectural and operational modeling of CPSs (including hardware, software and cyberware constituents). A graphical abstract of the conceived constructs to be captured by the theory is given in Figure 1. We are aware of the fact that operationalization of MOT in practical design projects cannot be done effectively without having multiple (e.g. entity, relation, phenomenon, etc.) ontologies. However, due to the development effort and time needed, we have not included them in our research conducted so far. Actually, it turned out that the intellectualization of MOT can be done without defining and having these ontologies.
Figure 1   The architectural and operational aspects of system modeling
3.2. Interpretation of architectural entities and relations
According to IEEE 1471, architecture is: (i) the fundamental organization of a system embodied in its components, (ii) their relations to each other and to the environment, and (iii) the principles guiding its design and evolution. Consequently, an architectural model of a CPS is supposed to define the high-level configuration of all system components, as well as the connections that coordinate the operations of the components. Striving for an aggregation-driven (i.e. an abstraction-free) representation of the architectures of CPSs in the physical space, we use semantic abstraction only in the form of naming conventions for the components aggregated at various complexity levels. HW, SW and CW constituents, and the components aggregated thereof, have different spatiality and occupy space differently. Being three-dimensional (3D) artifacts, HW constituents can be seen from both external and internal views. In an external view, they are finite regions of the 3D metric space with closed boundaries (i.e. real subspaces of the E3 space). In an internal view, they are finite point-sets of the R3 attribute space with specific internal continuity properties and attribute distributions. The position, orientation, shape, and motion of an artifact change in the E3 space, while other physical properties change in the R3 space. SW constituents and system components are quasi-morphed, that is, they do not have a specific geometric form. However, they have a physical extent, which plays a role in their relations to hardware and/or cyberware. This ‘dimensionality’ can be understood as an implicit spatiality of software and considered from both external and internal views. In an external view, SW is a code structure that needs a part of the 3D metric space to be stored and processed (e.g., as a sequence of dipoles on a magnetic disk).
In an internal view, SW is a set of instructions for a processor that is seen as part of a one-dimensional attribute space. Cyberware is of a similar nature and reflects both the external and the internal aspects of SW constituents. A
cyberware (e.g., a set of data in a file or record) may exist as a physically stored signal or code sequence, but also as digital values for a processor. As in the case of HW components, the spatial position of SW and CW components can be identified by reference points in the geometric and the attribute spaces. In our interpretation, atomic entities of software and cyberware are not separately distinguishable parts; they are handled as a mass in SW and CW constituents, and no relations are identified among them. This allows considering HW, SW and CW components, as well as compound HW-SW, SW-CW, HW-CW, and HW-SW-CW components (together named aggregated-ware, AW), as operational system domains with specific physical properties and operation capabilities (affordances). A domain capturing the architectural manifestation (spatiality) of a component can include subdomains of HW, SW and CW, and the relationships among them.
3.3. Modeling of architectural entities and relations
For architectural modeling, we introduced specific instances of ‘part-of’ and ‘connected-to/with’ relations. The latter describe neighborhood, remoteness, and functional relations, respectively, between various system domains. These relations are used in system-part, part-part, and part-element contexts. It has to be noted that these relations lend themselves to a kind of circularity. Namely, a software can be a component of (a whole) system, a component of another (compound) component, and a constituent of a component. Considering this, internal and external relations have been distinguished. The ‘part-of’ relations between the system and the domains representing its components, and within a component (with a view to included components and constituents), are regarded as internal relations (Figure 2).
They have been specified as:
• part-of-system-as-component (PSysCom)
• part-of-component-as-component (PComCom)
• part-of-component-as-constituent (PComCon)
• part-of-constituent-as-constituent (PConCon)
In order for a constituent to become part of a component of a system, it should be in a PComCon relation. The nature of components and constituents can be indicated by using the indices h, s, c and a, standing for HW, SW, CW or AW. They are applied after the abbreviations denoting components or constituents (e.g., PComaConh,s).
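The four internal 'part-of' relation types and the h/s/c/a indexing convention can be encoded mechanically. The sketch below is our own illustration: the relation and ware names follow the text, but the tuple encoding and the function are assumptions, not part of the paper's formalism.

```python
# Sketch of the four internal 'part-of' relation types and the h/s/c/a
# indexing convention (e.g. PCom_a Con_{h,s}).  The names follow the text;
# the tuple encoding and validation are our own illustrative assumptions.

PART_OF = {"PSysCom", "PComCom", "PComCon", "PConCon"}
WARE = {"h": "HW", "s": "SW", "c": "CW", "a": "AW"}

def part_of_relation(kind, whole_ware, part_wares):
    """Build a typed internal relation, e.g. ('PComCon', 'a', ('h', 's'))
    for an aggregated (AW) component containing HW and SW constituents."""
    if kind not in PART_OF:
        raise ValueError("unknown part-of relation: " + kind)
    if whole_ware not in WARE or any(w not in WARE for w in part_wares):
        raise ValueError("ware indices must be one of h, s, c, a")
    return (kind, whole_ware, tuple(sorted(part_wares)))

# PCom_a Con_{h,s}: an AW component with HW and SW constituents as parts.
r = part_of_relation("PComCon", "a", ("h", "s"))
```

A validated encoding like this makes it straightforward to check, for example, that a constituent participates in a PComCon relation before it is treated as part of a component, as the text requires.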
Figure 2   Internal and external relations within and between components and constituents
The ‘connected-to’ relations hold among the components of a system and among the constituents of a component, respectively, and are regarded as external relations. ‘Connected-to’ expresses a one-directional relation, while ‘connected-with’ expresses a bi-directional relation. They have been formalized as:
• component-connected-to-component (CComtCom)
• component-connected-with-component (CComwCom)
• constituent-connected-to-constituent (CContCon)
• constituent-connected-with-constituent (CConwCon)
Lower-level ‘connected-x’ relations have also been devised by considering the nearness and remoteness, and the explicitness and implicitness, of the relations. The relation ‘connected’ is further articulated as adjacent connected (‘a.connected’), remotely connected (‘r.connected’), and indirectly connected (‘i.connected’). This articulation is helpful from the aspect of specifying operational relationships. They are textually and symbolically represented as exemplified below:
• component-x.connected-to-component (x.CComtCom)
• constituent-x.connected-with-constituent (x.CConwCon)
where x can be one of a, r, and i. ‘Connected-to’ indicates a one-directional operational relation and ‘connected-with’ indicates a two-directional operational relation. ‘Connected-x’ relationships are the architectural basis (or carriers) of operational relationships. A domain representing an aggregate component from an architectural viewpoint integrates domains representing HW, SW and CW components/constituents, which occupy subspaces of the 3D geometric and attribute spaces, respectively. Due to operational considerations, the abovementioned architectural relations may exist permanently (in which case they are called all-time relations), or may be established only when the system is working (in which case they are called run-time relations). These play an important role in the case of SW and CW constituents, which typically have different idle-time relations and run-time relations.
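The naming scheme for the external 'connected-x' relations (direction to/with, articulation x ∈ {a, r, i}, all-time vs run-time lifetime) can be generated systematically. The symbolic names below follow the text; the dictionary encoding and the flag names are our own illustrative assumptions.

```python
# Sketch of the external 'connected-x' relations: direction (to = one-way,
# with = two-way), articulation x in {a, r, i}, and all-time vs run-time
# lifetime.  Symbolic names follow the text; the encoding is our assumption.

ARTICULATION = {"a": "adjacent", "r": "remote", "i": "indirect"}

def connected(kind, x, src, dst, bidirectional=False, run_time_only=False):
    """Build an external relation record with its symbolic name, e.g.
    a.CComtCom for two adjacent, one-directionally connected components."""
    assert kind in {"CComCom", "CConCon"} and x in ARTICULATION
    # Splice the direction letter (t/w) into the middle of the kind tag:
    # CCom|t|Com -> CComtCom, CCon|w|Con -> CConwCon.
    name = "{}.{}{}{}".format(x, kind[:4], "w" if bidirectional else "t", kind[4:])
    return {"name": name, "src": src, "dst": dst,
            "bidirectional": bidirectional, "run_time_only": run_time_only}

r1 = connected("CComCom", "a", "D1", "D2")                        # a.CComtCom
r2 = connected("CConCon", "r", "c1", "c2", bidirectional=True,
               run_time_only=True)                                # r.CConwCon
```

The `run_time_only` flag mirrors the all-time vs run-time distinction: a checker could ignore run-time relations when analyzing the idle system.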
The durations of these relations are described in both logical time (as precedencies) and physical time (as points in time). In our conceptualization, a SW or a CW domain cannot be embedded in a HW domain, and HW, SW and CW parts cannot intersect, but can be adjacent to one another in the physical space. Therefore, SW and CW domains can have only ‘connected-to’ architectural relations with HW domains. It has to be noted that atomic entities of SW and CW are considered to be spatially indistinguishable parts. Consequently, no architectural or operational relations are supposed to exist among them. Since every domain may have relations with multiple other domains, and the relations can also be multiple, it is reasonable to include the concept of layering, which is logical, physical, and representational layering in one. This leads us to a stratified mereotopology (SMT) [39]. A layer includes the set of domains and the set of pertinent relations that belong together in a particular architectural/operational situation [16]. SMT lends itself to the realization of situated analysis and constraint
management. Consider a software tool which offers multiple alternative interfaces, e.g., a command line interpreter, a graphical user interface, a menu-driven interface, a form-based interface, a touch panel manager, and a natural language interface. In this case, the domains of the concerned HW, SW and CW constituents, and their architectural and operational relations, are placed on different layers. As a bottom line, the operational relations should be checked when architectural relations are changed in the pre-embodiment design phase, and vice versa. In the case of dynamically operating open systems, existing domains may become disconnected from the system architecture, while other domains may become connected to it. Therefore, the existence of a domain with respect to the system is the most basic operation of architectural components.
4. CAPTURING AND REPRESENTATION OF OPERATIONS
4.1. General considerations
The complex operation of a CPS can be captured either by a top-down approach or by a bottom-up approach. In the first case, the intended overall operation of the whole system is systematically de-aggregated (decomposed) into a set of interlinked operations of system components. This needs particular solutions for the intrinsic compositionality challenge. In the second case, the overall operation of the system is aggregated (composed) of the specific operations of off-the-shelf or custom-made components. This raises the need for addressing the composability challenge, which boils down to operation and interface matching issues. In our current research, we have concentrated on component-based pre-embodiment research. Hence, we discuss capturing and representation of operations of CPSs only in this context. Technical systems are transformational systems, the operation process of which can be characterized by the states of the systems and the transformations between subsequent states.
We considered only linear CPSs, for which the overall operation can be regarded as the sum of the operations of the domains of the system at a point in time, or in a period of time. The state of the operational domains is described by a set of state parameters. The transformations can be physical or computational, resulting in different changes in the states of the domains. In the first case, the transformations happen in material, energy and information flows, and the outcome of the transformations should be described concurrently in the geometric space and the attribute space. In the second case, the transformations happen in information flows, and the outcome should be described in the attribute space. While architectural relations specify a kind of ‘vertical’ relationship (i.e. containment relationships between the existence of the whole and the existence of certain domains), operational relations express a kind of ‘horizontal’ relationship, as dependences (causalities) between states (properties), effects, and constraints. The totality of operations should guarantee the integrity of the whole and the domains.
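The state-transformation view above can be sketched as follows: a domain state is a set of named parameters, and a physical transformation may change both geometric-space and attribute-space parameters, while a computational one acts in the attribute space only. The parameter names and the `geom.`/`attr.` prefix convention are our own illustrative assumptions, not part of MOT.

```python
# Sketch of the state-transformation view of operations.  A domain's state
# is a dict of parameters; by the distinction in the text, a computational
# transformation may change only attribute-space parameters, while a
# physical one may also change geometric-space parameters.  The "geom."/
# "attr." prefixes and parameter names are our own illustrative assumptions.

def apply_transformation(state, kind, changes):
    """Return a new state with `changes` applied, enforcing that
    computational transformations act in the attribute space only."""
    if kind == "computational":
        assert all(k.startswith("attr.") for k in changes), \
            "computational transformations act in the attribute space only"
    new_state = dict(state)        # states are immutable snapshots
    new_state.update(changes)
    return new_state

s0 = {"geom.position": 0.0, "attr.temperature": 20.0}
s1 = apply_transformation(s0, "physical", {"geom.position": 1.5})
s2 = apply_transformation(s1, "computational", {"attr.temperature": 21.0})
```

Because each transformation returns a fresh snapshot, the sequence s0, s1, s2 directly records the "subsequent states" of the operation process mentioned in the text.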
Copyright © 2015 by ASME
Operational modeling should cover and provide a parameterized representation of: (i) the morphological characteristics of the domains, (ii) the physical effects underlying the operations happening in domains, (iii) the conduct of operations and their outcomes, and (iv) the flow of operations as a blending of the particular operations. MOT implies complex data and relation structures, which should extend to architectural, geometric, morphological, material, physical, temporal, and behavioral aspects. Below we provide details on some important aspects of capturing and representation of operations.

4.2. Simplified modeling and representation of morphology of components

The morphology of domains is a dually constrained overall variable. On the one hand, morphology determines how the physical effect can manifest in operation, and, on the other hand, operation assumes specific (proper) morphology-effect relationships. In the case of hardware components, morphology captures the physical extent, form, shape, dimensions, texture, and other geometric attributes. Considering that morphological modeling should effectively support pre-embodiment design, a minimal morphological representation (MMR) has been set as an adequate target. While the MMR varies for different objects, it should carry both the geometric and the topological information needed to model the operations of physical components. As one form of MMR, skeleton modeling of artifacts has been widely studied in the literature, in particular, unique and complete-able skeleton representations [13] [26]. Uniqueness means that a particular skeleton model represents one and only one 3D artifact, while completeness means that the information structure associated with a particular skeleton model carries all pieces of information that are necessary and sufficient for a proper 3D reconstruction (representation) of the morphology of components in both E3 and R3. Skeleton modeling uses two basic modeling entities, namely bones and ports.
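As a minimal illustration, and not the representation of [13] [26] themselves, a skeleton could be held as ports carrying surface-patch seeds, interconnected by bones; the class names, the patch descriptor, and the loop test are our own assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Port:
    """A port: a reference point plus a surface-patch seed acting as an interface."""
    name: str
    position: Tuple[float, float, float]  # reference point in the modeling space
    patch: str = "planar"                 # hypothetical surface-patch descriptor
    role: str = "in"                      # 'in' or 'out', depending on the energy flow

@dataclass
class Bone:
    """A bone interconnects ports and transfers/transforms effects between them."""
    ports: Tuple[str, str]                # names of the connected ports

@dataclass
class Skeleton:
    ports: List[Port] = field(default_factory=list)
    bones: List[Bone] = field(default_factory=list)

    def is_enfolding(self) -> bool:
        """Assumed reading: an enfolding skeleton has a bone forming a loop."""
        return any(b.ports[0] == b.ports[1] for b in self.bones)

# A trivial chain-type branching skeleton with one bone and two ports
sk = Skeleton(
    ports=[Port("p1", (0, 0, 0), role="in"), Port("p2", (10, 0, 0), role="out")],
    bones=[Bone(("p1", "p2"))],
)
```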
Bones specify the position and orientation of ports in the modeling space, arrange and interconnect a set of ports in space, provide reference points for ports, and transfer and transform physical effects between ports [27]. There is a surface patch assigned to each port. On the one hand, this serves as the seed of the boundary surface in the vicinity of the reference points of ports and defines its shape. On the other hand, the defined surface patch acts as an external interface for physical effects, and forms a physically coupled pair together with the surface patch of another port. Thus, ports play a dual role: from inside, they form material, energy and information interfaces; from outside, in operational contacts with the port(s) of other skeletons, they form physically or computationally coupled pairs. Assigning 3D geometric information (actually, surface patches) to the ports makes it possible to generate boundary surfaces, which intersect and, after cropping, result in the 3D shape of the domain. From a structural point of view, a skeleton can be of a branching type, an enfolding type, or a hybrid type involving both previously mentioned types as parts. Bones are essentially
the connectors of ports, according to the logic of the presumed energy transfer and morphological changes. They are brought together to define the possible streams (ways and directions) of energy flows within or upon system domains. Should some external or internal physical effect act on a skeleton, it always needs at least one port for interfacing. At the same time, multiple effects may act on one single port. Depending on the energy flow, a port can play the role of an in-port or an out-port. In a branching skeleton, bones may form a chain, star or tree formation. The branching of the energy flows can happen only at virtual mid-ports. Mid-ports can be considered as nodes connecting the bones. Generally, mid-ports are those material centers of mechanical parts where the presumed energy flow ramifies. Thus, a mechanical part may have one or more mid-ports. In the case of an enfolding skeleton, there is at least one bone that forms a loop. A particular configuration of ports, or more precisely, a particular relation of idealized contact surfaces at the ports, defines the kinematic degrees of freedom of a skeleton. Ports can describe active places or interfaces of HW, SW and CW constituents, while skeleton elements can represent their ‘body’ (including physical objects, software algorithms, or data structures). Capturing the morphology of software constituents is much less straightforward. Difficulties emerge because software concurrently exists as a thing and as a process. It is a thing when it exists in the form of the pseudo-code of algorithms, computer language code, or compiled instructions stored on a storage device. It exists as a process when it is a set of executable processor instructions being executed by the processor. The code can be precompiled or compiled at run time. The thing-like existence can be characterized by the measure ‘extent’, which expresses the physical size of the software.
The extent makes software a physical thing that is manageable in the physical space. The skeleton representation can be used to define the space that is needed for the storage of a particular piece of software, but shape is not a concept that can be associated with software. Representation of the morphology of cyberware is handled by the theory based on the same considerations. An overview of the aspects of HW, SW and CW constituents is shown in Figure 3.

4.3. Describing elementary operations of components

First of all, it has to be mentioned that, as interpreted by MOT, an elementary operation does not mean the absolute lowest-level operation of a component. Let us take the example of microprocessors, which are among the most complex engineering systems ever created by humans. They are actually aggregate components, containing components such as the data processor, the controller, cache memories, interfaces, etc. The processor concurrently executes multiple sequences of instructions, each of which is included in its instruction-set architecture (ISA), which differs for most processors. The byte-level encoded (binary) instructions perform a set of primitive operations, such as incrementing, addition, and
Figure 3: Overview of the unified aspects of capturing the operation of HW, SW and CW

| Aspect                      | Hardware (Physical)                            | Software (Computing)                                         | Cyberware (Informing)                   |
|-----------------------------|------------------------------------------------|--------------------------------------------------------------|-----------------------------------------|
| Principles                  | Gravity, force, friction, etc.                 | Instruction, language code, finite-state machine             | Structurability, interpretability       |
| Morphology                  | Shape                                          | Extent                                                       | Extent                                  |
| Unit of operation (action)  | Rolling, moving, attraction, etc.              | Marshaling, calling, invoking, compiling, interpreting, etc. | Existing                                |
| Flow of operation (process) | Energy flow, material flow, analog signal flow | Analog signal flow, digital signal flow                      | Analog signal flow, digital signal flow |
sorting. The ISA model and the primitive operations are important both for processor designers, who build the various processors that execute those instructions, and for compiler designers, who need to know the permitted instructions and their encoding. Since MOT intends to support pre-embodiment design of CPSs, the operations and attributes of microprocessors and other computing components are considered on an aggregate level, unless the operations of their components are important for designing a CPS. This argument also holds for compound electromechanical aggregate components. With a view to this, the concept of unit of operation (UoO) has been introduced as a generalization of the aggregate operations, induced by physical phenomena and digital computing, of interrelated HW, SW, and CW constituents. Methodologically, a UoO is defined as a quintuple of: (i) the starting state of a domain, (ii) the input events and quantities, (iii) the transformation over the domain, (iv) the output events and quantities, and (v) the end state of the domain, i.e., UoO = {SS, I, T, O, SE}, where the transformation is T = {P, M(PE)}, P is the procedure of transformation, and M(PE) is the set of methods of transformation associated with the elements PE of the procedure (Figure 4). Computationally, the transformation means the execution of the specified set of methods. The UoO formalism can flexibly be applied to multiple levels of operation and supports the introduction of a layer-oriented allocation and representation of operations. In addition to physical and computational transformations, UoOs can describe motor, cognitive, perceptive and emotional actions and interactions of human stakeholders and embedding environments. UoOs are procedural elements of a flow of operation (FoO). The overall operation of each particular domain of a system is thus described by a specific FoO.
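The quintuple above admits a direct data-structure reading. The following sketch is our own illustration, not part of MOT; the class name, the dict-based state, and the ordered execution of methods are assumptions:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

State = Dict[str, Any]                              # state parameters of a domain
Method = Callable[[State, Dict[str, Any]], State]   # method attached to a procedure element

@dataclass
class UnitOfOperation:
    """UoO = {SS, I, T, O, SE}, with transformation T = {P, M(PE)}."""
    start_state: State            # SS: starting state of the domain
    inputs: Dict[str, Any]        # I: input events and quantities
    procedure: List[str]          # P: named procedure elements (PE)
    methods: Dict[str, Method]    # M(PE): method associated with each procedure element
    outputs: Dict[str, Any]       # O: output events and quantities
    end_state: State              # SE: end state of the domain

    def execute(self) -> State:
        """Computationally, the transformation executes the specified set of methods."""
        state = dict(self.start_state)
        for pe in self.procedure:
            state = self.methods[pe](state, self.inputs)
        self.end_state = state
        return state

# A trivial single-step UoO
uoo = UnitOfOperation(
    start_state={"level": 0},
    inputs={"delta": 2},
    procedure=["step"],
    methods={"step": lambda s, i: {**s, "level": s["level"] + i["delta"]}},
    outputs={},
    end_state={},
)
```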
This implies that it makes no sense to specify a FoO without first specifying a domain that is a proper carrier of the concerned compound and elementary operations. The basis of a computer representation and digital simulation of the operations of domains is formed by various state-transition representations [5] [15]. The logical order of processing the physical and the computational transformations is specified as a sequence, a hierarchy, or a web of procedural steps, as discussed among others in [29]. The procedural steps are computed by state-transition algorithms [17] [22]. As von der Beeck showed, the statechart formalism has become quite successful due to its hierarchical representation of state information and its expressive power in modeling concurrency by simultaneously active states and parallel transition execution [50]. The algorithms implement individual methods or a composition of them. The concept of methods facilitates the uniform treatment of physical and computational operations.

4.4. Specification of operation processes of systems

FoO is a fundamental concept introduced for the purpose of specification and computation of the operation processes of systems. A FoO arranges the operations of UoOs in a logically, physically, and temporally feasible structure (a workflow) [10]. Primary, secondary, or tertiary operations can be blended in a FoO. However, it is required that the UoOs of the physical constituents of a domain are temporally integrated with those of the computational constituents. This can be achieved by including timed computing in the framework of FoOs. However, it may be complicated in many cases if certain complex system operations depend on instantaneous circumstances and conditions. The real-time handling of the changes seems to be a partially solved issue in terms of: (i) the objectives of the operations, (ii) the available control information, and (iii) the emergent environmental effects. Nevertheless, a FoO should represent dynamic operation in time (DOT) and synthesize time-continuous physical operation (that may include singularities and discontinuities) with time-discrete (event-triggered) operation of computational elements.
FoO brings these operation modes into a common procedural framework by imposing explicit time management. It extends to three aspects of timing the course of operations in real time: (i) considering time as an explicit parameter in the representation of physical properties, (ii) guaranteeing the execution of the operations by the time at which their results are needed, and (iii) managing time-critical networking and concurrent/parallel computing of data. To achieve these goals, operation timing should be included in the UoOs from which FoOs are built up. The overall operation of a domain can be modeled as a discrete-event process, which consists of an event record with the associated time-stamp.

Figure 4: Logical model of describing and processing physical and computational operations (a domain with start state, end state, input, output, procedure, and methods)

For a time-aware (real-time) FoO, task-oriented scheduling (TOS) and event-oriented programming (EOP) need to be combined and procedurally integrated. A challenge for EOP is to extend control to emergent events and unscheduled event interactions in FoOs. A two-phase event-oriented control is to be implemented, with the first phase dealing with event detection and the second phase with event handling. Time-driven synchronous state machines can provide this functionality. State-transition models can also be used for the specification of human–computer interaction [51].

5.
A DEMONSTRATIVE CASE STUDY
We have considered a smartwatch as a simple but demonstrative case study. As a product, it is shown in Figure 5 [35]. As a wearable computing-based system, the functionality of the smartwatch extends to functions such as chronograph, cell phone, calculator, GPS navigator, compass, media player, translator, accelerometer, thermometer, altimeter, barometer, heart rate monitor, and many more provided by flexibly downloadable (mobile) applications. Though the analogue hardware part is relatively simple, the digital hardware shows a large complexity and can be extended with plug-ins such as a wireless headset, a microphone, and an insulin pump. The software part includes Android Wear as the operating system and a pool of dedicated apps [2]. The cyberware includes collected and downloaded data structures and various media materials (e.g., audio and video files). The studied smartwatch required all of the ‘part-of’ relations introduced in Subsection 2.4. Below are some representative examples:
PSysCom: the innards pack, as a combination of compound components, is a part of the smartwatch system.
PComCom: the motherboard, as a component, is a part of the innards pack, which is a higher-level aggregative component.
PComCon: the runtime environment is a component that can be de-aggregated into SW and CW constituents at a lower architectural level. Being purely software, the Dalvik virtual machine can be considered as a constituent of it.
Figure 5: Subject of case study: a. the smartwatch, b. the inductive charging domain, c. the touch detection domain
PConCon: the Bytecode interpreter is a SW constituent of the Dalvik virtual machine, which is a software constituent in itself.
The connectivity relations have also been applied to check their necessity and sufficiency. Below we present examples of the use of connectivity relations in the context of the smartwatch case:
CComtCom: the gyro/accelerometer component is connected to the motherboard.
CComwCom: the touch panel pack has a dual connection with the motherboard.
CContCon: the core library, as a SW constituent, is connected to the Dalvik virtual machine.
CConwCon: the un-compiled SW resources of the App are two-way connected with the resource manager.
a.CComtCon: the innards pack is connected to the body of the smartwatch.
r.CComwCom: the Bluetooth module of the smartwatch has a dual connection with the Bluetooth module of the smartphone of the user.
i.CComtCom: the App is connected to the processor indirectly (through the virtual machine and the kernel layer).
In general, it is assumed that connectivity relations can be established between domains (components and constituents) that physically exist. The fulfilment of this condition creates a basis for complementing these relationships with physical and computational relationships. In combination, they specify operational transformations over a given domain. With regard to the representation of the two kinds of possible operations of domains, a logical framework has been defined, which is equally suited to capturing physical and computational operations, and UoOs and FoOs, respectively. This framework is shown in Figure 4, with the operational domain in its center. From a process perspective, a UoO, as well as a FoO, has a start state and an end state. In the case of operations of a physical nature, there are input and output streams of material, energy, and analogue signals, while in the case of computational operations, there are input and output streams of information.
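As an illustration, the part-of and connectivity relations exemplified above can be held in a small typed relation store. The relation type names follow the paper, while the class and method names are our own assumptions:

```python
from collections import defaultdict

class RelationStore:
    """Stores typed 'part-of' (P...) and connectivity (C...) relations between domains."""

    def __init__(self):
        self.relations = defaultdict(set)  # relation type -> set of (source, target) pairs

    def add(self, rel_type: str, source: str, target: str):
        self.relations[rel_type].add((source, target))

    def related(self, rel_type: str, source: str):
        """Return all targets related to `source` under the given relation type."""
        return {t for (s, t) in self.relations[rel_type] if s == source}

store = RelationStore()
# Part-of relation: component is part of a higher-level aggregative component
store.add("PComCom", "motherboard", "innards pack")
# Connectivity relations between components and constituents
store.add("CComtCom", "gyro/accelerometer", "motherboard")
store.add("CContCon", "core library", "Dalvik virtual machine")
```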
For both physical and computational operations, the transformation is described as an (algorithmic) procedure and processed by relevant (alternative) computational methods. Below, we exemplify the description of the physical and the computational operations of different architectural and operational domains of the smartwatch. We focus on the procedural elements of the transformations. A domain with a typical physical operation is the domain of the inductive charger pack (ICP). Placed inside the rear housing of the smartwatch, the ICP is in charge of receiving energy from the electromagnetic field of the inductive charger and converting it into electric current. The start state of the operation is the state the smartwatch is in at the moment it is placed onto the inductive charger in a specified position. The end state is the state it is in at the moment it is picked up from the charger.
In these states, both the input and the output are energy streams. The input stream is the electromagnetic field, and the output stream is the electric current picked up by the smartwatch. The physical transformation can be modeled by the following procedure:
Begin
1. Generate an electromagnetic field by the charger
2. Bring the smartwatch into physical contact with the charger
3. Connect inductively to the electromagnetic field
4. Take power from the electromagnetic field
5. Convert the power into electric current
6. Conduct the current to the battery
End
These steps together represent one UoO. There are specific computational methods assigned to each element of the procedure, which cannot be discussed here due to space limitations. In addition to the characteristic variables of the physical effects (e.g., the strength of the magnetic field and the duration of charging) and the quantitative formulas that describe their relationships, the methods also include the morphological variables of the lower-level components of the ICP, such as the charging coil, the magnetic sticker, and the connector, as well as spatial position relationships, for example, the distance between the two coils, and the length, diameter and number of turns of the charging coil. A domain with a typical computational operation is the domain of the touch recognition/localization software, which plays an important role because touch and gestures are the main input modes of the studied smartwatch. Actually, the touch recognition/localization domain is a connected compound one. From an operational perspective, the UoO of touch recognition and the UoO of swipe (sweep vector) and gesture identification are intertwined. Nevertheless, two domains can be identified, which are represented by two software constituents and the shared touch panel, the capacitive sensors, the connector to the motherboard, the chipset of the touch panel controller, and other computational constituents.
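The inductive-charging procedure above can be sketched as a chain of methods over the ICP domain state. The step function names and the simple efficiency model, with its numeric values, are illustrative assumptions rather than the paper's actual methods:

```python
# Sketch of (part of) the inductive charger pack (ICP) UoO: each function is a
# method attached to a procedure element, transforming the domain state.

def take_power(state, inputs):
    # Power drawn from the electromagnetic field, reduced by a coupling factor
    state = dict(state)
    state["power_W"] = inputs["field_power_W"] * inputs["coupling"]
    return state

def convert_to_current(state, inputs):
    # Convert the received power into charging current at the battery voltage
    state = dict(state)
    state["current_A"] = state["power_W"] / inputs["battery_voltage_V"]
    return state

def charge_battery(state, inputs):
    # Conduct the current to the battery over the charging period
    state = dict(state)
    state["charge_Ah"] += state["current_A"] * inputs["duration_h"]
    return state

procedure = [take_power, convert_to_current, charge_battery]

state = {"charge_Ah": 0.0}   # start state: watch placed onto the charger
inputs = {"field_power_W": 5.0, "coupling": 0.6,
          "battery_voltage_V": 3.8, "duration_h": 1.0}
for step in procedure:       # execute the methods of the procedure in order
    state = step(state, inputs)
# end state: watch picked up from the charger, with the accumulated charge
```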
We analyze the UoO of touch recognition, which involves the descriptive architectural and morphological variables of the whole digitizer unit and the embedded software constituent that is in charge of calculating touch positions. The recognition of touch is based on multiple capacitive sensors, which sense and measure ∆ voltages at different points. These sensors are at the boundary (periphery) of the operational domain. The software constituent computes and recognizes the location of (subsequent) elementary touches based on the differences in the detected ∆ voltages. Considering the physical domain of the touch panel, the elementary touches can be combined into a sweep vector. As mentioned above, however, this is seen as a different computational UoO of the same touch recognition/localization domain. The reason is that the changes in the ∆ voltage values should be processed in time in order to be able to compute and identify swipes and gestures. The start state is the untouched state of the domain, and the end state is again the untouched state of the domain. The start
state and the end state are characterized by various time and event variables. The input consists of material, energy and analogue signal streams, and the output comprises the digital information stream generated by the domain. The touching human fingers create a distortion of the electrostatic field on the touch panel, which generates currents in the capacitive sensors according to the analog touch signal produced by the user. The computed touch positions are the digital information output. The transformation procedure between the start state and the end state is as follows:
Begin
1. Activate if there is a touch
2. Capture the ∆ voltage data in all sensors
3. Send the data to the touch panel controller
4. Calculate the position of the point of touch
5. Record the data of the points of touch
6. Repeat the procedure until touch is no longer detected
End
The steps of the above procedure are done by dedicated parts of the software constituent. Capturing the ∆ voltage data can be done by alternative methods, depending on the number and relative position of the sensors.

6. DISCUSSION AND RECOMMENDATIONS

6.1. Reflection on the proposed theory

Our endeavor is ambitious, because the research problem described in this paper is challenging. We tried to place the issue of transdisciplinary pre-embodiment design of CPSs on an application-, system-, constituent-, and method-neutral foundation. The presented theoretical framework seems to be able to resolve this issue. This paper was intended to give an overview of the elements of this foundation, but, due to the concomitant complexity, it could not deal with all technical details. Extension of spatiotemporal mereotopology into the physical (and computational) realms is a rational step, which is justified by the growing number of publications reporting on research in this direction. Using extended mereotopology as the basis enables a dynamic architectural modeling of HW, SW, CW and aggregate components.
It also allows connecting time and temporal relations to spatiotemporal domains with physical change relations. This paper presented the results of the first phase of the development of MOT. The emphasis was put on introducing and explaining those novel concepts that facilitate a uniform representation of the architectural aspects of HW, SW and CW constituents and aggregated components at multiple aggregation levels. The unifying concept of architectural description is the domain, which expresses the spatial dimensions and/or extension of components and constituents. From an operational point of view, physical and computational operations have been distinguished. A domain performs a transformation that is captured as a unit of operation. The operation (multiple transformations) of interconnected domains is represented as a flow of operation. The computational
representation of the physical and computing operations is based on the state-transition formalism.

6.2. Follow-up research

There are three major objectives for the follow-up research: (i) refinement and improvement of MOT as a whole and of its elements, (ii) extending MOT with the theory of system manifestation features (SMFs), and (iii) transferring the extended theory into an SMFs-based pre-embodiment design methodology for CPSs. In order to reveal the bottlenecks and limitations of MOT, several application case studies will be completed. In the case studies, MOT will be applied to largely different CPSs, such as a stroke rehabilitation enabler system, an automated greenhouse system, and a fire escape facilitation system. The objective of the analysis will be a critical testing of the consistency, adequacy, and representation potential of the theory from both architectural and operational points of view. As far as the second and third objectives are concerned, four tasks have been identified: (i) development of a conceptual and procedural (methodological) framework for SMFs and designing with SMFs, (ii) elaboration of a computational representation of system manifestation features, (iii) elaboration of an SMFs-oriented CPSs analysis methodology, and (iv) elaboration of an SMFs-based synthesis methodology for CPSs. The goal of an SMFs-oriented analysis is to investigate how the system, considering both architectural and operational aspects as well as the given heterogeneity, can be de-aggregated into SMFs of various complexity levels. Domains and domain topologies, together with the related UoOs and FoOs, and the interfaces will be identified and represented as SMFs. The elaboration of an SMFs-based synthesis methodology will commence with the development of an SMF ontology and an active library (repository). REFERENCES [1] Allen, J.F. (1983). Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11), 832-843. [2] Android. (2015).
Developers web-page, http://developer.android.com/reference/packages.html [3] Bellomo, N., Bianca, C. and Mongiovì, M.S. (2010). On the modeling of nonlinear interactions in large complex systems, Applied Mathematics Letters, 23, 1372-1377. [4] Bennett, B., Cohn, A.G., Torrini, P. and Hazarika, S. (2000). Describing rigid body motions in a qualitative theory of spatial regions. In Proceedings of the National Conference on Artificial Intelligence, 503-509. [5] Broy, M. (1997). The specification of system components by state transition diagrams. In Research Report TUM-I 9630, Technische Universität München. [6] Buck, J.T., Ha, S., Lee, E.A. and Messerschmitt, D.G. (1994). Ptolemy: A framework for simulating and prototyping heterogeneous systems. International Journal of Computer Simulation on Simulation Software Development, 4, 155-182.
[7] Cohn, A.G. and Hazarika, S.M. (2001). Continuous transitions in mereotopology. In Proceedings of the 5th Symposium on Logical Formalizations of Commonsense Reasoning, 1-10. [8] Cohn, A.G. and Varzi, A.C. (1998). Connection relations in mereotopology. In Proceedings of ECAI, 150-154. [9] Davis, J., Goel, M., Hylands, C., Kienhuis, B., Lee, E.A., Liu, J., ... and Xiong, Y. (1999). Overview of the Ptolemy project. ERL Technical Report UCB/ERL, 1-23. [10] Deelman, E., Gannon, D., Shields, M. and Taylor, I. (2008). Workflows and e-Science: An overview of workflow system features and capabilities. Future Generation Computer Systems, 25(5), 528-540. [11] Del Mondo, G., Stell, J.G., Claramunt, C. and Thibaud, R. (2010). A graph model for spatiotemporal evolution. Journal of Universal Computer Science, 16(11), 1452-1477. [12] Demoly, F., Matsokis, A. and Kiritsis, D. (2012). A mereotopological product relationship description approach for assembly oriented design. Robotics and Computer-Integrated Manufacturing, 28(6), 681-693. [13] Demoly, F., Toussaint, L., Eynard, B., Kiritsis, D. and Gomes, S. (2011). Geometric skeleton computation enabling concurrent product engineering and assembly sequence planning. Computer-Aided Design, 43(12), 1654-1673. [14] Demoly, F., Yan, X.T., Eynard, B., Rivest, L. and Gomes, S. (2011). An assembly oriented design framework for product structure engineering and assembly sequence planning. Robotics and Computer-Integrated Manufacturing, 27(1), 33-46. [15] Desharnais, J., Frappier, M. and Mili, A. (1998). State transition diagrams. In Handbook on Architectures of Information Systems, Springer, Berlin, pp. 147-166. [16] Donnelly, M. (2003). Layered mereotopology. In IJCAI, 1269-1274. [17] Drusinsky, D. and Harel, D. (1989). Using statecharts for hardware description and synthesis. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 8(7), 798-807. [18] Duntsch, I., Wang, H. and McCloskey, S. (2001).
A relation-algebraic approach to the region connection calculus. Theoretical Computer Science, 255, 63-83. [19] Eker, J., Janneck, J.W., Lee, E.A., Liu, J., Liu, X., Ludvig, J. and Xiong, Y. (2003). Taming heterogeneity - The Ptolemy approach. In Proceedings of the IEEE, 91(1), 127-144. [20] Galton, A. (2004). Multidimensional mereotopology. In Proceedings of KR ‘04, AAAI Press, Menlo Park, 45-54. [21] Gruhier, E., Demoly, F., Kim, K.Y. and Gomes S. (2014). Mereotopology and product design. In Proceedings of TMCE 2014, Budapest. Hungary. [22] Harel, D. (1987). Statecharts: A visual formalism for complex systems. Science of Computer Programming, 8, 231-274.
[23] Horváth, I. (2014). What the design theory of social-cyber-physical systems must describe, explain and predict? In An Anthology of Theories and Models of Design. Springer London, 99-120. [24] Horváth, I. and Gerritsen, B.H. (2012). Cyber-physical systems: Concepts, technologies and implementation principles, in Proceedings of TMCE 2012, Vol. 1, May 7–11, 2012, Karlsruhe, Germany, 19-36. [25] Horváth, I. and Gerritsen, B.H. (2013). Outlining nine major design challenges of open, decentralized, adaptive cyber-physical systems. In Proceedings of the International Design Engineering Technical Conferences, ASME, 1-13. [26] Horváth, I. and Thernesz, V. (1996). Morphology-inclusive conceptual modelling with feature-objects, in Proceedings of SPIE, 2644, 563-572. [27] Horváth, I. and van der Vegte, W. (2003). Nucleus-based product conceptualization - Part 1. In Proceedings of the 14th International Conference on Engineering Design, Stockholm, 1-10. [28] Hovda, P. (2009). What is classical mereology? Journal of Philosophical Logic, 38(1), 55-82. [29] Lee, B. and Lee, E.A. (1998). Hierarchical concurrent finite state machines in Ptolemy. In Proceedings of the International Conference on Application of Concurrency to System Design. IEEE, 34-40. [30] Lee, E.A. (2010). CPS foundations, in Proceedings of the 47th Design Automation Conference, ACM, 737-742. [31] Lemon, O. and Pratt, I. (1998). Complete logics for QSR: A guide to plane mereotopology. Journal of Visual Languages and Computing, 9, 5-21. [32] Liu, J., Lajolo, M. and Sangiovanni-Vincentelli, A. (1998). Software timing analysis using HW/SW cosimulation and instruction set simulator. In Proceedings of the 6th International Workshop on Hardware/Software Codesign. IEEE Computer Society, 65-69. [33] Mantripragada, R. and Whitney, D.E. (1999). Modeling and controlling variation propagation in mechanical assemblies using state transition models. IEEE Transactions on Robotics and Automation, 15(1), 124-140.
[34] Mei, H., Zhang, W. and Zhao, H. (2006). A metamodel for modeling system features and their refinement, constraint and interaction relationships. Software & Systems Modeling, 5(2), 172-186. [35] Motorola (2014) Moto 360 Teardown smartwatch, https://www.ifixit.com/Teardown/Motorola+Moto+360+T eardown/28891 [36] Muller, P. (1998). A qualitative theory of motion based on spatio-temporal primitives. In Principles of Knowledge Representation and Reasoning-International Conference, 131-143. [37] Polkowski, L. (2014). Mereology in engineering and computer science. In Mereology and the Sciences. Springer International Publishing. 217-291.
[38] Pourtalebi, S., Horváth, I. and Opiyo, E.Z. (2014). First steps towards a mereo-operandi theory for a system feature-based architecting of cyber-physical systems. In Proceedings of the ACDP 2014 Workshop, Stuttgart, xx. [39] Purao, S. and Storey, V.C. (2005). A multi-layered ontology for comparing relationship semantics in conceptual models of databases. Applied Ontology, 1, 117-139. [40] Rajkumar, R., Lee, I., Sha, L. and Stankovic, J. (2010). Cyber-physical systems: The next computing revolution, in Proceedings of the 47th Design Automation Conference, ACM, New York, 731-736. [41] Randell, D.A. and Cohn, A.G. (1989). Exploring naive topology: Modelling the force pump. In Proceedings of the Third International Qualitative Physics Workshop, 1-14. [42] Salustri, F.A. (2002). Mereotopology for product modeling: A new framework for product modeling based on logic. Journal of Design Research, 2, 1-12. [43] Salustri, F.A. and Lockledge, J.C. (1999). Towards a formal theory of products including mereology. In Proceedings of the 12th International Conference on Engineering Design, 1125-1130. [44] Simons, P. and Dement, C. (1996). Aspects of the mereology of artifacts. In Formal Ontology, Vol. 53, Nijhoff International Philosophy Series, Kluwer, 255-276. [45] Smith, B. (1996). Mereotopology: A theory of parts and boundaries. Data and Knowledge Engineering, 20(3), 287-303. [46] Stell, J.G. and West, M. (2004). A four-dimensionalist mereotopology. In Formal Ontology in Information Systems, 261-272. [47] Varzi, A.C. (1996). Parts, wholes, and part-whole relations: The prospects of mereotopology. Data and Knowledge Engineering, 20(3), 259-286. [48] Varzi, A.C. (1998). Basic problems of mereotopology. In Formal Ontology in Information Systems. IOS Press, Italy, 29-38. [49] Varzi, A.C. (2014). Appendix: Formal theories of parthood, in Mereology and the Sciences, Springer, Berlin, 359-370. [50] Von der Beeck, M. (1994). A comparison of statecharts variants.
In Formal Techniques in Real-Time and Fault-Tolerant Systems, Springer, Berlin, 128-148. [51] Wasserman, A.I. (1985). Extending state transition diagrams for the specification of human–computer interaction. IEEE Transactions on Software Engineering, (8), 699-71