Libraries of Reusable Models: Theory and Application

A.P.J. Breunese
Océ-Technologies B.V.
P.O. Box 101
5900 MA Venlo
The Netherlands
[email protected]

J.L. Top
ATO-DLO
P.O. Box 17
6700 AA Wageningen
The Netherlands
[email protected]

J.F. Broenink
University of Twente
Dept. of Electrical Engineering
P.O. Box 217
7500 AE Enschede
The Netherlands
[email protected]

J.M. Akkermans
Free University Amsterdam
Computer Science Department
De Boelelaan 1081a
1081 HV Amsterdam
The Netherlands
[email protected]
Abstract

Setting up a simulation model is more than writing down state equations and running them on a computer. A lot of conceptual information about the physics and engineering aspects of the system must be taken into account to construct a useful simulation model. The role of a model library is to manage this information and to make model fragments reusable. This is especially important if models are reused and shared in cooperative work groups. In this article, we discuss the architecture of a library of reusable models. The practical application is demonstrated by reviewing an actual modeling problem in the machine tool domain.

Keywords: physical system modeling, model libraries, reusable building blocks, knowledge sharing.

1 Introduction

All too often, computers in simulation are primarily used as number crunchers. Because of the steady increase in computer performance, more and more complex models can be simulated in reasonable time. However, model complexity is not the same as model quality. A brute force approach to simulation can result in wrong answers to the right questions, or even worse, in answering the wrong questions. Using modern simulation software, it is quite easy to generate a couple of plots in virtually no time, but, as Cellier [5] puts it, “It is our experience that students (and most practicing engineers) won’t take the time to verify even the simplest facts about their models once they were able to generate nice-looking graphs.” It may take days before you recognize that there is in fact something wrong with your model. Experienced modelers know how hard it can be to find the origin of a counter-intuitive simulation result in complex models. In many cases, it appears that the model is crippled by an invalid assumption that was rapidly and implicitly made, e.g. using the common form F = ma for Newton's law, when in fact the full form F = d(mv)/dt is needed because of changes in mass. Therefore, modeling and simulation support systems should maintain a balance between exploiting computer power for numerical simulation and assisting the construction of models that can be used with confidence.

In this article, we describe how modern information technology, such as artificial intelligence techniques, object-oriented databases and graphical user interfaces, can be used to support modeling and design activities with the objective of providing better models in less time. In particular, we describe our work in the area of modeling physical systems dynamics as performed in the OLMECO project¹. It is important to separate modeling from simulation: we will focus on model building and model management. Simulation is one of the things we can do with the models we create, but definitely not the only thing. It will become clear in the paper that model building is more than graphically connecting a couple of submodels, and that model management is more than keeping model files in separate directories. Many of today's tools [2, 12, 13] are good at simulation, but exhibit rather limited support for model building and model management. The heart of our approach is the development of a library of reusable models and model fragments. In this article, we will present the structure of a library of reusable models and demonstrate the role it can play in modeling and simulation activities. The library we propose enhances modeling efficiency because models can be assembled in a 'plug-and-play' style. Model quality improves because the library focuses on modeling, not just on obtaining computer code to produce simulation plots. The user is assisted by the library software to build models with confidence, i.e. models that have a solid background, with respect to both the theory of the area of interest and the practical application. At all times, the library software will assist the engineer by providing generic model fragments or complete application models that are relevant for the current phase of model development. Models are built in an evolutionary way, using existing knowledge and experience where possible.

The first step towards the realization of the library is to recognize that a monolithic piece of simulation code is not suited as a unit of reusability. Reuse of this type of model is limited to performing the same simulation over and over again. Instead, we have to organize the model into truly reusable, modular fragments that can be used to assemble larger models quickly and easily. This reuse of previous
¹ 'OLMECO' (Open Library for Models of mEchatronic COmponents) is a European project, partially supported by the EU Esprit-III program as project P6521 in the period 1992–1995. The partners in this project are PSA Peugeot Citroën (France), BIM (Belgium), Fagor (Spain), Ikerlan (Spain), Imagine (France), University of Twente (The Netherlands) and ECN (The Netherlands).
modeling efforts leads to a substantial increase in efficiency of the modeling process. Chopping models into fragments in itself does not lead to reusability. Adequate provisions need to be taken to assure that the model fragments are really reusable. Two types of provisions can be distinguished:
- The model documentation should clearly state the intention of the model and the assumptions and constraints that apply to it.
- The modeling paradigm must allow for truly encapsulated models. The internals of a model should not influence its possible usage, for instance when connecting submodels.

One may object to the idea of reusable models that a model always presents a subjective, context-dependent viewpoint of the world, which cannot simply be copied into another context. However, if we clearly document the intention of the model, and if we explicitly state the assumptions that apply to it, this 'situatedness' becomes less problematic. If the modeling assumptions are properly acknowledged, we can indeed use the model in different circumstances. Even if the problem context is somewhat different, we prefer to base our model on a previously defined model for a similar case rather than constructing a completely new one from scratch. The importance of a good model library is that the modeler can be supplied with reasonable alternatives [15].

Explicit information about the model's constraints and its intended usage is also important to make models sharable. A model is sharable if it can be used with confidence by others than the original author. Because the author of a model has been considering constraints on the usage of the model as part of the modeling effort, these constraints will appear 'natural' to him or her, even without explicitly specifying them. Other users do not have that experience, and we can only hope that they think of the implicit constraints for models they have not developed themselves. Explicit attention to model applicability is also helpful for the original author, because awareness of the constraints that apply to a model will fade over time.

Sharability is also influenced by the choice of the modeling paradigm. A proper choice of the modeling paradigm allows us to encapsulate the internals of a submodel by defining the interface of a model in terms of pairs of variables, which are not committed to an input or output role while defining the submodels. The submodel's equations should be written in a declarative style, i.e. as real equations in the mathematical sense, not as a procedure for computation. This is often called a non-causal model description [1, 4].

To get the most out of the library, it should provide a model structure that is compatible with the process of model construction as performed in practice. In the design of our library, we emphasize two characteristic features of the modeling process:

- Modeling is an iterative process. Initial models provide a high-level overview of the system. Details are added in several refinement steps as we gain insight into the system. Sometimes it is necessary to undo the effect of previous modeling decisions to correct invalid assumptions. The library must thus support relations between models that have different degrees of refinement and allow their rapid updating.
- The construction of simulation models involves modeling decisions at various conceptual levels, e.g. at the technical, physical, mathematical, and numerical level. In order to manage modeling decisions at different levels conveniently, the model structure must have multiple levels of representation that capture relevant aspects of the model.

Contemporary tools like ACSL [13], Easy5 [2], Simulink [12], and SystemBuild [9] focus on simulation. Their 'modeling' part comes down to mere model entry. It is not easy to switch submodels, and it is hard to isolate model fragments for reuse in another context, since the complete model is frequently stored in a single file. Besides the trouble in supporting the iterative nature of the modeling process, this class of tools also has difficulties in dealing with the various conceptual levels. All too often, the underlying paradigm of computer code appears at the surface when dealing with high-level issues. A particular example of this problem is the fact that most of today's tools use input-output models. While the question whether voltage is a function of current in a resistor or the other way around is irrelevant from a physical perspective, the use of input-output models means that we end up with two models for one phenomenon. The situation only gets worse for models with multiple interactions. Another area in which current tools leave something to be desired is documentation. Most of today's tools allow for the inclusion of comments in models (sometimes as annotations in graphical models), but the information is not formalized in any way, so it cannot be checked by tools, even though comments are frequently used to specify important information such as applicability constraints. Some packages, like Dymola [8] and Omola [1], do support non-causal modeling, but these tools too provide only basic support for model reuse, and their documentation facilities are restricted to textual comments and annotations.

Although we use models of the dynamic behavior of mechatronic systems as an application for the library, many of the concepts behind the library are generic. Therefore, the library structure can be adapted for use with other classes of models. An example is the application to logistic problems, where Petri nets can serve as a conceptual representation comparable to block diagrams and our technical components can be mapped to resources. Even more general is the application of this framework in organizational models². In this article we will first describe the basic entities in the library together with the links between them (section 2).
² As an example, consider the KISS method [11], which is used to model organizations and their workflow. This method has a highly iterative nature, which centers around interviewing people, formalizing information into models, and validating the models in a next round of interviews with the people in the organization. The KISS method uses different levels of abstraction: the lowest level describes elementary actions involving business objects, the middle level defines composites of actions as functions, and the highest level specifies the workflow as a message-passing system, where the reception of a message leads to the execution of one or more functions.
The models in the library contain more than just the material needed to generate simulation code. In fact, the ‘administrative’ information that does not influence the simulation results is quite important, because it helps to find relevant models, and thus contributes to the development of models in limited time. By formalizing this information, it becomes possible to let the computer check the consistency of the decisions we make, or even perform an initial selection for us. The supporting information stored with the models is discussed in section 3. The combined use of the model structure and the supporting information in the library is then demonstrated in an example (section 4) that shows how a modeler might use the library to assemble a model that provides answers to specific design questions.
2 Model Structuring Principles
Various conceptual views on the system are taken by modelers in the course of the modeling process. A common way to provide model explanations is to draw a schematic diagram that shows the different components that make up a system, and that indicates the meaning of the variables and parameters in the mathematical model. Moreover, the explaining text will mention which physical processes are assumed to play a role in the system behavior and which effects have been neglected. Hence we suggest a model structure that consists of three layers [17]:

- The system viewed as a concrete object or device composed of technical components, e.g. amplifiers, motors, or linkages.
- The system viewed as a network of physical concepts, or processes, that constitute the dynamic behavior, e.g. capacitance, inertia, or friction.
- The system viewed as a set of mathematical relations (equations) that quantitatively describe the dynamic behavior, e.g. x = ∫v dt, or V = I·R.

The choice of the three layers of the model structure is not based on the observation that many hierarchical, modular models more or less fit into this framework. This is merely a highly convenient side effect that makes the framework easy to adopt. The primary rationale for the framework is that the three layers represent three different viewpoints, which all need careful consideration in a modeling effort. The explicit identification of the viewpoints in the three layers helps us to direct our focus and ask the relevant questions.

The introduction of different viewpoints is a technique borrowed from the artificial intelligence domain. In that domain, the definition of proper ontologies, i.e. systematic descriptions of the nature of 'things' in the problem domain, is crucial for the ability to reason about a domain, and hence to provide automated support for decision-making.
The role of the viewpoints can be illustrated by the difference between the technical component layer and the physical concept layer. The technical layer refers to real-world objects; the physical concept layer refers to properties associated with these objects and the interactions between these properties. If we include an electrical resistor at the technical component level, we refer to a tangible device without specifying its properties. This resistor can have a size, color, weight or electrical resistance, but none of these properties
Figure 1: Top–level view of the library architecture.
is assigned a priori. At the physical concept level we decide that resistance is a relevant property in the context of our problem. However, at high frequencies we may have to add capacitance as another relevant attribute. In short, objects have no properties unless a specific problem context is selected.

As the example above shows, similar models can have quite distinct roles, depending on the viewpoint. If these roles are not properly acknowledged, for instance by mixing viewpoints in a single layer in the model structure, there is a considerable risk that potentially relevant questions are not being asked. This happens in pen-and-paper exercises as well as in commercial simulation software. Especially in the electrical domain, where components and concepts practically coincide under 'normal' operating conditions, this sloppiness in modeling frequently leads to oversights once the 'normal' operating conditions no longer apply.

Summarizing, we feel that the three viewpoints are needed in understanding a system model. The different interpretations must be distinguished and separately stored in the library. This model structure is reflected in the object diagram of figure 1. The Object Modeling Technique (OMT) notation developed by Rumbaugh et al. [16] is used for this figure. Boxes represent entity types or object classes. Solid lines indicate relationships. The nature of the relationship is indicated near the line. Solid balls indicate one-to-many relationships. For instance, the lower line between the boxes labeled 'component' and 'component class' indicates that a relationship exists between these classes. The text near the line and the solid ball specifies that this particular relation means that each component class can have multiple members.

In this diagram, modeling decisions appear as choices from the different specified by relations. This way of making decisions explicit is borrowed from the polymorphic modeling approach [6].
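The entities and relations of figure 1 can be rendered directly as data structures. The following sketch is hypothetical (the OLMECO library has its own object-oriented database schema, not this code), but it shows how the composed of and specified by relations become explicit, machine-checkable links between the three layers:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical rendering of the figure 1 entities; the real OLMECO schema differs.

@dataclass
class Equation:
    expression: str                          # declarative, non-causal, e.g. "u = i * r"

@dataclass
class BondGraphElement:
    kind: str                                # e.g. "R", "I", "C", "GY"
    specified_by: list[Equation] = field(default_factory=list)

@dataclass
class PhysicalDescription:                   # the 'conceptual physical description'
    composed_of: list[BondGraphElement] = field(default_factory=list)

@dataclass
class Component:
    name: str
    # each 'specified by' relation filled in below is one modeling decision
    subcomponents: list["Component"] = field(default_factory=list)
    physical_description: Optional[PhysicalDescription] = None

@dataclass
class ComponentClass:
    name: str
    members: list[Component] = field(default_factory=list)

# One member of the 'resistor' component class, specified down to an equation.
r1 = Component("R1", physical_description=PhysicalDescription(
    [BondGraphElement("R", [Equation("u = i * r")])]))
resistors = ComponentClass("resistor", members=[r1])
print(resistors.members[0].physical_description.composed_of[0].kind)
```

Because every link is an explicit attribute rather than a comment, tools can traverse the structure, e.g. to list all components whose physical description is still unspecified.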
The composed of relations indicate that the library contains pre-established model fragments. The 'component class' and its relations represent the search facilities in the library. This subject will be dealt with in section 3.2.

The meaning of this architecture for the different types of model fragments is illustrated in figure 2. We describe each of the three layers in the model below. We conclude this section with a discussion of the way the individual building blocks can be put together to form actual models.

Figure 2: Layered model structure. (The figure shows a technical component level with controller, amplifier, motor, reductor, process, sensor and actuator path; a physical concept level with a bond graph of I, R and GY elements and 1-junctions; and a mathematical level with equations such as U = I·R and ω = (1/J)∫T dt.)

2.1 Technical Component Layer: Engineering Devices

This layer refers to the system to be modeled as a tangible and compositional object: an engineering device. A technical component may consist of subcomponents which can be identified in the 'real' system. A technical component can be used within a larger component. In design, technical components are typically associated with a description of the functions within a system. The component hierarchy then describes functions in terms of subfunctions.

In a typical modeling problem, the decomposition into a number of technical components is the first step. This 'divide-and-conquer' technique is frequently applied in a rather quick and informal way, at least initially. The graphical representation we used for technical components in figure 2 reminds us of that informal 'back-of-the-envelope' approach.

A prominent attribute of a technical component in the library is its interface. The interface enables the interconnection with other components, so that we can see the composite system as a network of interacting technical components. In the context of physical systems modeling, the interface determines what energetic and information exchange can take place with other technical components in a system. The definition of standardized interfaces for model fragments enables the modeler to change easily between alternative model contents for the same (sub)system. The model structure allows us to specify for instance that we have a technical component with two plugs: 'shaft-in' and 'shaft-out'. In a later stage, we can consult the library to find model fragments of transmissions that are compatible with this interface. We can then choose one of those models. If we have made a preliminary choice for one of the transmission models, we can still decide to use another model, because the identical interface means that models can be exchanged easily.

In the early stages of modeling, the details of the interaction are not yet known. Therefore, various levels of detail can be provided for the interface. The simplest interface consists of a number of plugs without further specification. Each plug indicates that 'some form of interaction takes place' with a neighboring technical component. When the model is refined, the plugs can be specified in more detail by stating whether they represent energy or information exchange, which physical domain they are associated with, etc.

Refining plugs can be compared to refining the submodels. As more information becomes available, more details of the model can be filled in. Information about structure leads to refining submodels, e.g. by decomposition. Information about interactions leads to the refinement of one or more plugs. Each refinement shrinks the list of potential model fragments that 'fit'. This works both ways: adding detail to a plug restricts the potential components to those that have interfaces compatible with the plug definition. On the other hand, the choice of a component dictates (part of) the definition of a plug. The use of constraint propagation mechanisms and other Artificial Intelligence techniques helps to keep the model consistent at all times, and to find relevant model fragments.
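The two-way narrowing described above — refined plugs restricting candidate fragments, and a chosen fragment dictating plug definitions — can be sketched as a simple matching rule in which an unspecified plug attribute acts as a wildcard. The attribute names (`kind`, `domain`) and the fragment library below are illustrative, not OLMECO's actual interface format:

```python
# Sketch of interface matching on plugs; attribute names are hypothetical.
# An attribute left unspecified (None/absent) acts as a wildcard that a later
# refinement step fills in.

def plug_compatible(plug: dict, port: dict) -> bool:
    for attr in ("kind", "domain"):   # kind: "energy"/"signal"; domain: "mechanical", ...
        if plug.get(attr) is not None and port.get(attr) is not None \
                and plug[attr] != port[attr]:
            return False
    return True

def fitting_fragments(interface: list[dict], library: dict) -> list[str]:
    """Return names of fragments whose ports match the interface plug-for-plug."""
    return [name for name, ports in library.items()
            if len(ports) == len(interface)
            and all(plug_compatible(p, q) for p, q in zip(interface, ports))]

library = {
    "gear_transmission": [{"kind": "energy", "domain": "mechanical"},
                          {"kind": "energy", "domain": "mechanical"}],
    "dc_motor":          [{"kind": "energy", "domain": "electrical"},
                          {"kind": "energy", "domain": "mechanical"}],
}

# Early stage: two unspecified plugs -> both fragments still fit.
print(fitting_fragments([{}, {}], library))
# Refinement: 'shaft-in' carries mechanical energy -> the motor no longer fits.
print(fitting_fragments([{"kind": "energy", "domain": "mechanical"}, {}], library))
```

A real implementation would propagate such constraints in both directions, so that selecting the gear transmission would, in turn, fix the remaining attributes of both plugs.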
2.2 Physical Concept Layer: Physical Processes

At some point in the modeling process, the possibilities to find a meaningful decomposition in terms of technical components are exhausted. Then it is time to consider the physical processes that we associate with each of the technical components. In terms of our modeling approach, we move to the physical concept layer. For each leaf of the component tree, we specify the internal physical mechanisms, describing the behavioral (as opposed to compositional) aspects of the system, for instance whether or not we want to include internal damping for the torsion stiffness of a shaft.

Since we are dealing with physical systems, bond graphs [3, 10] are a good way to describe the physical concept layer. This is especially true if we consider models that span multiple domains. Bond graph elements are qualitative, non-causal descriptions of elementary physical mechanisms. A bond graph thus yields an interpretation for a mathematical model in terms of general physical principles, as has been clearly argued in a recent article in this journal by Cellier [5]. We emphasize however that the physical concept layer is certainly not restricted to bond graphs. Other types of network models may be used as well. The choice for bond graphs is a matter of convenience, and our model structure does not depend on specific bond graph properties.

The interface of a physical process description consists of a number of ports. Each port represents the exchange of energy or information. For power ports, the domain of the
energy being exchanged can be specified. The definition of ports as interface elements of models at the physical concept layer is a refinement of the interface elements we defined for technical component models (the plugs). Because the library stores the domains of each plug and each port, automated tools can determine whether a particular physical process description will fit in a chosen technical component interface.

The combination of the technical component layer and the physical concept layer represents the rationale behind a purely mathematical model. They provide its meaning in terms of the system and in this way improve its understandability and reusability.

2.3 Mathematical Layer: Dynamic Equations

To perform simulations and quantitative analyses, we need to specify the exact nature of the relations between variables, parameters, constants, etc. in the model, and the values for parameters must be specified as well. This is done in the mathematical layer of the model. The information in the other two layers of the model provides the interpretation for the equations.

The link between the physical concept layer and the mathematical layer is formed by ports. What looks like a port from the concept side appears as so-called port variables from the mathematical side. A signal port at the physical layer is associated with one mathematical port variable, whereas a power port is associated with two conjugate port variables ('effort' and 'flow', or 'across' and 'through'). The port variables can be used in equations in the same way as any other variable.

Because the link between the different layers is well-defined, automated tools can assemble the complete set of equations describing the behavior of a system, starting from small, reusable sets of equations specified for each submodel in the mathematical layer.
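Because a power port maps onto a fixed pair of port variables, generating the connection equations is a mechanical step. The following toy sketch (variable names and the sign convention are illustrative assumptions, not the library's actual format) equates the efforts and balances the flows of two connected power ports:

```python
# Sketch: assembling a system equation set from submodel equations plus
# connection equations generated from the ports. Names are hypothetical.

submodels = {
    "source":   {"ports": [("u1", "i1")], "equations": ["u1 = 10"]},
    "resistor": {"ports": [("u2", "i2")], "equations": ["u2 = i2 * 5"]},
}

# Connecting two power ports: each entry pairs (effort, flow) with (effort, flow).
connections = [(("u1", "i1"), ("u2", "i2"))]

def assemble(submodels, connections):
    # Gather the declarative submodel equations...
    eqs = [eq for sm in submodels.values() for eq in sm["equations"]]
    # ...and add the generated connection equations.
    for (e_a, f_a), (e_b, f_b) in connections:
        eqs.append(f"{e_a} = {e_b}")       # efforts are equal
        eqs.append(f"{f_a} + {f_b} = 0")   # flows sum to zero (flows counted into each port)
    return eqs

system = assemble(submodels, connections)
print(system)
```

A symbolic back end (or an export to a simulation tool) would then sort and solve this combined set; the point here is only that the full equation set follows automatically from the port structure.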
Here we see a powerful form of automated modeling: the user focuses on selecting relevant concepts and their mathematical form and can easily compose models thanks to their highly modular structure. The computer takes care of the routine (but laborious) conversion of the equations into a simulation model. Compared to writing bond graphs and mathematical equations — or even worse: computer code — from scratch, this is a major improvement in modeling efficiency and effectiveness.

For processing and simulating models, the library can use existing tools like Matlab, Simulink and SystemBuild. In the design of the library, we use a generic model description language that accommodates (in a formal manner) all information necessary to export the models to various formats, which can then be fed into the tools that are already available in the modeler's 'workbench'. In this way, we can leverage the ongoing investment in model processing and simulation tools, and their further development. The use of a generic model description has been proposed by others as well, at various levels of abstraction, e.g. DSblock [14] for computational models and more recently Modelica [7] for mathematical models and their composites.

2.4 Assembly of Models

A complete model of a system is assembled from individual model fragments by moving between the three viewpoints and inserting the proper model fragments at each layer. The
library provides a network of potential associations between different model fragments, as indicated by the specified by relations in figure 1. Each time we decide to use a particular model, more questions will follow, until we arrive at the desired level of detail. For instance, if we decide at some point that we need a model of a shaft, the next question could be whether we want to include bearing friction and/or torsion effects in the model. If we include friction, we can then decide to keep it simple, by using a linear viscous friction model, or we could use a form that includes stiction and Coulomb friction.

As this example shows, assembling a model means selecting a specific path through the associations between model fragments provided by the library. Each association we fill in corresponds to a modeling decision. In this way, modeling becomes a piecemeal process in which a single aspect of a model is considered at a time. For each aspect the library provides a number of alternatives which have been useful in other applications. For each technical component a set of alternative decompositions is available; for each leaf component in a decomposition a number of alternative physical concept models is present; each of the nodes of these models in turn can be associated with different mathematical models.

Just storing generic model fragments in the library is not enough. To be efficient, it is also necessary to store (partially) assembled models. In our library, these are referred to as instantiated models. Despite their less general nature, they are useful because they can provide ready-to-use modeling solutions as well as examples of possible usage that can serve as a starting point in the construction of a model. In other words, in addition to storing a large network of potential generic routes, we also want to store and retrieve actual paths through this network.
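The shaft example can be made concrete: if each aspect of the model offers a set of alternatives, every assembled variant is one path through them. The option names below are hypothetical illustrations of such a decision network:

```python
import itertools

# Sketch: 'specified by' alternatives for a shaft model (option names hypothetical).
alternatives = {
    "friction": ["none", "linear_viscous", "stiction_coulomb"],
    "torsion":  ["rigid", "torsion_spring", "torsion_spring_damped"],
}

def all_paths(alts):
    """Every assembled model variant is one path through the alternatives."""
    names = list(alts)
    for combo in itertools.product(*(alts[n] for n in names)):
        yield dict(zip(names, combo))

paths = list(all_paths(alternatives))
print(len(paths))  # 3 friction options x 3 torsion options

# One sequence of modeling decisions selects exactly one path:
chosen = {"friction": "linear_viscous", "torsion": "rigid"}
assert chosen in paths
```

Storing an instantiated model then amounts to storing one such `chosen` dictionary (or a partial one, with some decisions still open) alongside the generic network.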
The library can store models in different degrees of instantiation:

- If a simulation model has been built for a specific device, and this simulation is validated, i.e. found to agree with actual measurements, we may want to store the complete model together with a set of parameter values. If validation has been done for several circumstances, there can even be a number of parameter value sets for one instantiated model.
- It can also be useful to store a model that is only partially assembled, for example down to the level of bond graphs, as an independent model entry. This type of entry can be used as a framework for modeling decisions at the lower levels, while the high-level structure is available at once.

Any frequently used combination of model fragments is a candidate for inclusion in the library as an instantiated model. Different branches of the model hierarchy may have different degrees of refinement. A part of the model can be predefined up to the level of subcomponents while other parts can be completely mathematically specified. A typical use of this feature is controller design: the physical system to be controlled is modeled down to the mathematical layer, but different controllers can still be plugged in. Whenever an instantiated model has an 'open end', it is possible to select a suitable fragment that fits at that location, just as in the case of assembling simple model fragments.
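An instantiated library entry of this kind might then carry both validated parameter sets and deliberately open ends. The entry below is purely illustrative (names and values are invented, not from the OLMECO library):

```python
# Sketch of an instantiated model entry with validated parameter value sets
# and one 'open end' left for a later decision. All names/values hypothetical.

instantiated = {
    "name": "drive_train_v2",
    "fragments": {
        "motor": "dc_motor_linear",
        "shaft": "shaft_viscous_friction",
        "controller": None,          # open end: different controllers can be plugged in
    },
    "parameter_sets": {              # one set per validated circumstance
        "lab_setup_A": {"J": 0.013, "b": 1.2e-4},
        "lab_setup_B": {"J": 0.013, "b": 0.9e-4},
    },
}

open_ends = [slot for slot, frag in instantiated["fragments"].items() if frag is None]
print(open_ends)
```

Listing the open ends is exactly the query a tool needs in order to offer the modeler the fragments that still fit at those locations.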
Using instantiated models may be quick and convenient, but we have to be aware that instantiated models are less general in nature than the basic building blocks. Therefore, the reason for selecting an instantiated model must be carefully motivated. The convenience of quickly setting up a simulation model should not lead to ill–considered models that do not properly capture relevant phenomena. Hence, modeling support is even more needed when using instantiated models than when assembling basic model fragments. This modeling support is the subject of the next section.
3 Modeling Support

The modeling process is supported by two main categories of information in the library: model contents and model management data. The technical data (model contents) has been discussed in the previous section and is embedded in a conceptual model framework. The model management data, which is the subject of this section, provides background information about model fragments in the library and guides the user through the modeling process. In the context of document management systems, the term metadata is sometimes used for what we call model management data here. Model contents can be specified formally and unambiguously in a computer language. As a result, they can be processed automatically, e.g. for deriving simulation code and subsequent numerical evaluation. Model management information, on the other hand, cannot always be formalized, for instance because the information at hand is outside the scope of the system (e.g. a reference to a data book), or because the semantics of a particular type of management information is not sufficiently clear for formalization, as is the case for the natural-language item 'miscellaneous remarks' that might appear on a component information form. Trying to formalize model management information is generally well worth the effort, since it creates more possibilities to verify choices made in the modeling process. The boundary between model contents and model management information is not crisp, and it may shift over time. By formalizing parts of the model management information, this information can become part of the model itself. In fact, our technical component and physical concept layers already formalize information that used to be free-form information about (mathematical) models.

In this section, we will cover three aspects of model management data: model documentation, which is associated with individual models; taxonomies, which assist in finding models by gradually specifying the desired characteristics; and quality assurance, a general term for a variety of techniques leading to better and more reusable models.

3.1 Model Documentation

During the construction of a model, an engineer learns about the problem context, and formulates assumptions and constraints that apply to the model under construction. Even if it remains implicit, this process of confidence building is an important part of modeling. However, 'just knowing' how a model should (not) be used is not good enough if we want to store it in a library for reuse by others. This information has to be stored in the library explicitly, since other users have not gone through the confidence building phase for this particular model. The structural information captured in the technical component layer and the physical concept layer of the model does provide context knowledge about the model, but this information is not sufficient. By just looking at the interface variables of a fluid flow model, one cannot tell whether it is a universal model or whether it is restricted to laminar flow. This information is certainly relevant if we want to reuse the model, so it should be part of the model documentation. The same is true if a model has been validated under restricted laboratory conditions only, so that we have to be careful in extending its operating range. This part of the model documentation can be formalized to some extent by specifying constraint equations that restrict the permitted range of some variables. As a consequence, this part of the documentation can be used to automatically select models that fit the intended operating conditions. Not all documentation can be formally specified, so there will remain a need for free-form comments, for instance to hint at possible extensions, or to describe subtleties in the usage of the model. Both the formalized and the free-form model documentation are used to determine whether a model is suitable to solve a certain problem.

The emphasis on documentation and the specification of extra attributes to enable automated checks of the model may give the impression that the modeling process becomes cluttered with administrative tasks. However, automated tools can help here. By propagating information that can be derived automatically, manual entry of information can be minimized. The bookkeeping burden that remains after the optimal application of automated tools should be considered an investment. The return on this investment is additional support in the modeling process and an increase in model quality. This is not unlike the situation one faces in software development: a strict policy on comments and documentation is well worth the effort.
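The idea of formalized operating-range constraints attached to model documentation can be sketched as follows. This is a hypothetical schema for illustration only (the class names, the `Re` variable, and the laminar-flow threshold are our assumptions, not the OLMECO representation):

```python
from dataclasses import dataclass, field

@dataclass
class RangeConstraint:
    """Permitted range of one variable, e.g. Reynolds number for laminar flow."""
    variable: str
    low: float
    high: float

    def admits(self, value: float) -> bool:
        return self.low <= value <= self.high

@dataclass
class ModelDocumentation:
    """Formal constraints plus free-form remarks attached to a model fragment."""
    constraints: list = field(default_factory=list)
    remarks: str = ""

    def fits(self, operating_point: dict) -> bool:
        """Check whether the intended operating conditions satisfy all constraints.
        Variables not mentioned in the operating point are not rejected."""
        return all(c.admits(operating_point[c.variable])
                   for c in self.constraints if c.variable in operating_point)

# Example: a pipe-flow model validated only for the laminar regime
doc = ModelDocumentation(
    constraints=[RangeConstraint("Re", 0.0, 2300.0)],
    remarks="Validated on a laboratory rig; turbulent regime untested.")
assert doc.fits({"Re": 1200.0})       # laminar: model is admissible
assert not doc.fits({"Re": 10000.0})  # turbulent: automatically rejected
```

The formal part (the range check) can drive automatic model selection, while the `remarks` field remains free-form, mirroring the split described above.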
3.2 Taxonomies

As the number of models in the library grows, the need for a navigation mechanism becomes obvious. No user can oversee a collection of thousands of models if they are not properly structured. By identifying component classes of models that are similar in some way, we already reduce the complexity by an order of magnitude. The system is made really usable by recognizing that kind-of relations exist between component classes. Just as biologists set up a taxonomy (a kind-of tree) of animals, in which subclasses such as mammal, bird and fish are used to deal with the multitude of animals, we set up a taxonomy of component classes. The top right corner of the object diagram of the library structure in figure 1 shows the component classes with their subclass/superclass relations, and the membership relation between the component classes and the actual components.

Figure 3 shows an example of a component class taxonomy. The more generic superclasses are drawn to the left of the more specific subclasses. This taxonomy illustrates how the structure can be used to find a model that is appropriate for the modeling problem at hand.

Figure 3: Example taxonomy (partial). (The fragment contains classes such as root, power supply, motor, electrical motor, hydraulic motor, transformer, mechanical, rigid body, hydraulic line, resistor, capacitor, inductor, voltage source and current source, connected by kind-of links.)

Figure 3 also illustrates an important feature of our taxonomy structure: it is not restricted to a tree, but may also be a lattice, as long as the links between classes represent kind-of relations. Hence, it is possible to reach the same component class through different routes. If we consider that each step through the taxonomy represents a modeling (or design) decision about a single aspect of a model, we have to conclude that a tree restricts us to a fixed order of decisions. This is not practical for a general-purpose library that must be usable in a wide variety of applications. The reason that tree-like taxonomies seem to work for biologists is that this type of classification scheme relies on properties that can be determined fairly easily by looking at an animal (e.g. either an animal is a vertebrate or it is not), so the order in which the information is used in the classification is not all that important. Engineering problems, however, involve decisions based on information that is not always easy to obtain. We do not want to postpone modeling (or design) decisions just because a little piece of information is not yet available. Regardless of the order in which we obtain the required data, we should be able to refine our models. This difference between biological classification and engineering design is why we need the lattice structure.

In the taxonomy in figure 3, the free choice of primary and secondary aspects for specialization can be seen for the component class motor. A functionally oriented user may arrive at the motor class through the power supply class, but if we know beforehand that we can only use hydraulic or electrical power in our system, we can use that knowledge to make a first step, and arrive in one of the subclasses of the motor class as a second step. Using the lattice structure, it is also possible to go 'up' towards a more generic component class that is not the one we just came from. Taking one step up and one step down is also a good strategy to find mechatronic design alternatives. This is illustrated by the up-and-down route from electrical motor through motor to hydraulic motor in figure 3.

The lattice-structured taxonomy is a powerful mechanism, but it comes at the price of some complexity. Building and maintaining the taxonomy structure requires a reasonable level of modeling experience to recognize the relevant links that need to be made. Furthermore, it is important to consider the reusability of candidate model fragments from the beginning, by paying attention to modularity and to the use of standard interfaces for similar models, so that plug-and-play modeling is possible without manual 'rewiring'. A further way to reduce the apparent complexity is to control how the taxonomy is presented to the user. The full lattice structure is unsuitable for this purpose. Figure 3 is already rather cluttered, even though it shows only part of the content of our experimental library, and a library of practical proportions would lead to a real 'spaghetti' view. Fortunately, modern graphical user interfaces enable us to present a suitable cross-section of the lattice, which may even be controlled dynamically by the user. The presentation is greatly helped by identifying the 'flavors' of the links, i.e. the aspect of specialization they represent, so that the user interface can suggest aspects for which the modeler could try to obtain additional information.
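The up-and-down navigation through a lattice-shaped taxonomy can be sketched in a few lines. This is a minimal illustration using the motor example from figure 3; the intermediate domain classes `electrical` and `hydraulic` are our assumption about how the lattice might be wired, not a transcription of the actual library:

```python
class ComponentClass:
    """A node in the taxonomy; kind-of links may form a lattice (multiple parents)."""
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)
        self.children = []
        for p in self.parents:
            p.children.append(self)  # register the kind-of link both ways

root = ComponentClass("root")
power_supply = ComponentClass("power supply", [root])
hydraulic = ComponentClass("hydraulic", [root])      # assumed domain class
electrical = ComponentClass("electrical", [root])    # assumed domain class
motor = ComponentClass("motor", [power_supply])
# Lattice, not tree: these classes specialize motor *and* a domain class
electrical_motor = ComponentClass("electrical motor", [motor, electrical])
hydraulic_motor = ComponentClass("hydraulic motor", [motor, hydraulic])

def siblings_via(cls, parent):
    """One step up, one step down: design alternatives reachable through a parent."""
    return [c for c in parent.children if c is not cls]

# From 'electrical motor', going up to 'motor' and down again
# surfaces the hydraulic design alternative:
alts = siblings_via(electrical_motor, motor)
assert [c.name for c in alts] == ["hydraulic motor"]
```

Because each class keeps a list of parents rather than a single parent, the same class is reachable through the functional route (power supply) and the domain route (electrical or hydraulic), which is exactly the freedom of decision order argued for above.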
3.3 Quality Assurance

As we observed before, the main difference between the contents of a model and its meta-contents is that the former is better formalized. This means that all kinds of consistency checks can be performed automatically on the formal part of the model: the model can be verified. A typical verification is the check whether the interface of a proposed technical component decomposition or bond graph matches the interface of the component it should be plugged into. Checks on the consistency of physical domains and units are also forms of verification. Obviously, the level of sophistication depends on what can be represented formally in the library.

Another important criterion for the quality of a model is its competence, i.e. whether or not it adequately represents those aspects of the system it is supposed to represent, given the problem context. The formalization of this type of meta-knowledge is essentially restricted because ultimately every problem context is different. However, there are two methods for expressing model competence which can be partially formal:

Validation. Model validation means that the predictions based on the model are compared with measurement data from the actual device. This data (or a reference to it) can be stored in the library as evidence that the model is consistent with the data, at least with respect to a number of explicitly described aspects and experimental conditions. For example, model and measurements can agree in terms of asymptotic behavior, or agree completely over time within a certain error range.

Domain theories. Modeling assumptions should be consistent with the general, established theories of the domain of application. By relating the equations (or the structure) of a model to textbook approaches, we get an indication of the quality of the model. As a matter of fact, the use of bond graphs for physical systems models means that some confidence is already built into the model, because correct bond graph models are energetically proper by definition. Other types of general knowledge are not always so easy to formalize.

Figure 4: Schematic overview of a stamping press. (The figure shows the machine tool with sheet metal in, die, product out, the sheet feeder, and the mechanical, electrical and signal transmissions with their control.)

Figure 5: Taxonomy fragment for the evaluation of the electrically driven slider. (The fragment contains classes such as sensor, actuator, path, servo, controller, motor, amplifier, filter, gearbox, cam, linkage, shaft, eccentric and rack-pinion.)

Quality assurance influences the procedure for the inclusion of new models in the library. Of course, the quality of the individual models should be monitored, but the structure of the library should be maintained as well. Here the role of the library manager is evident. Whether this authority should rest with one person or a committee depends on the amount and frequency of library updates and on corporate policy. There is a remarkable similarity with a traditional library: the librarian requires that books have a certain quality, and also makes them accessible by proper classification in catalogs.
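The interface-matching verification mentioned above can be sketched as a comparison of plug multisets. The plug descriptors below are invented for illustration; the point is only that the check is purely formal and therefore automatable:

```python
from collections import Counter

def interface(plugs):
    """Normalize an interface to a multiset of (domain, kind) plug descriptors,
    so that the comparison is independent of plug order."""
    return Counter(plugs)

def matches(parent_plugs, decomposition_plugs):
    """Verification: a proposed decomposition (or bond graph) fits only if its
    external plugs agree exactly with those of the parent component."""
    return interface(parent_plugs) == interface(decomposition_plugs)

# Hypothetical plug lists for a servo-like component
servo_plugs = [("electrical", "power"), ("mechanical", "translation")]
candidate   = [("mechanical", "translation"), ("electrical", "power")]  # order-free
wrong       = [("electrical", "power"), ("electrical", "power")]        # two electrical plugs

assert matches(servo_plugs, candidate)
assert not matches(servo_plugs, wrong)
```

A real check would also compare units and causality, but the principle is the same: whatever is formally represented in the library can be verified mechanically.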
4 Practical Use of the Library

During the project, we built a prototype of the library structure proposed above. We used an object-oriented database as the basis for the implementation, since the structure of this type of database makes it easy to map the conceptual structure of the library onto the implementation. Hence, it was possible to construct a prototype faster than with a more conventional (relational) database. In this section we describe a modeling session in which the modeler works with the library. The example describes an actual modeling problem contributed by Fagor, one of the partners in the OLMECO project. Fagor also provided a large number of models in the domain of the example problem for inclusion in the prototype of the library.

In the example modeling problem, we consider the dynamic behavior of a machine tool. The specific part we will be looking at is the so-called metal sheet feeder, which is used to load sheets of metal into a stamping press. Stamping presses are used to shape sheet metal into parts that find use in the automotive and other industries. A sketch of the relevant parts of the type of press we are considering is given in figure 4. The synchronization between the vertical (stamping) motion and the horizontal (loading/unloading) motion is critical for the throughput of a stamping press. This synchronization is traditionally obtained by mechanically coupling the motion of the dies to the motion of the slider (the moving part that carries the sheet metal) by means of a series of joints, cams and linkages. However, the bandwidth of this type of mechanism is limited, and the desired performance for a new generation of presses could not be achieved using this approach. It was decided to investigate the possibility of using an electrical motor to drive the slider directly. The most important issue in the evaluation is of course whether or not the electrical sheet feeder can actually provide better performance than the mechanical one, but it is also important to consider the cost of the electrical alternative. Because higher power consumption requires a bigger motor, power consumption is considered an important factor. Thus, the concrete question the modeler is facing is formulated as follows:

Build a model that can describe the achievable positioning accuracy and throughput for an electrically powered metal sheet feeder. The model should also describe power consumption.

Figure 6: Top-level model of the press. (The model consists of a flywheel connected through one mechanical transmission to the die and through another mechanical transmission to the slider.)
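The mapping of the library's conceptual structure onto an object-oriented implementation, as mentioned above, can be sketched with two object classes. The class names and fields are our assumptions for illustration; the actual OLMECO schema is richer (plugs, layers, documentation are omitted here):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ComponentClass:
    """Taxonomy node; a class may have several superclasses (lattice)."""
    name: str
    superclasses: List["ComponentClass"] = field(default_factory=list)

@dataclass
class Component:
    """A technical component, member of a component class,
    optionally decomposed into subcomponents."""
    name: str
    component_class: Optional[ComponentClass] = None
    subcomponents: List["Component"] = field(default_factory=list)

# Top-level press model, with the subcomponents named in figure 6
transmission_cls = ComponentClass("mech. transmission")
press = Component("press", subcomponents=[
    Component("flywheel"),
    Component("mech. transmission", transmission_cls),
    Component("die"),
    Component("mech. transmission", transmission_cls),
    Component("slider"),
])
assert press.subcomponents[0].name == "flywheel"
```

Because the library concepts (classes, membership, decomposition) map one-to-one onto persistent object classes like these, an object-oriented database needs no relational join tables, which is why the prototype could be built quickly.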
Fortunately, we do not have to start the development of the model from scratch. The machine tool category of the library already contains a model of a conventional sheet feeder as part of a press model. Moreover, the library provides some classes in the taxonomy that could become relevant in the modeling process; this part of the taxonomy is shown in figure 5. The top-level model of the press is shown in figure 6. The decompositions of the transmissions from the flywheel to the die and the slider are shown in figures 7 and 8 respectively. Clearly, the component we need to look at is the mechanical transmission from flywheel to slider. We can take different routes through the library to find models or model fragments that can help us further. Of course, we could perform a keyword search for electrical transmissions, but then we would not use the a priori knowledge represented by the fact that we already have a model as a starting point.

Figure 7: Decomposition of the transmission from flywheel to die.

Figure 8: Decomposition of the transmission from flywheel to slider.

Figure 9: Specification of the servo component.

The exploration of the library thus could take one of the following forms:

- Starting at mechanical transmission, go down the taxonomy to find more specific components.
- From the same point, go up the taxonomy to find more generic components.
- Ask for decompositions of the mechanical transmissions at the technical component layer (such as the one shown in figure 8).
- Ask for physical concept layer models (bond graphs) that specify the behavior of the mechanical transmission.
- Check whether the library provides instantiated models for a previously modeled mechanical transmission.

Since we are not really interested in detailed models of mechanical transmissions, only the second alternative offers any hope of finding a suitable model. Hence, we move up to the parent class transmission of the mechanical transmission class. In this case, there is only one parent class, but in general there may be a choice between various parent classes, because of the lattice structure of the taxonomy. Since the transmission class we are now looking at is fairly generic, the next move will be a move down. There is a choice between electrical transmission and signal transmission. The obvious choice is to take a look at the electrical transmission class. In this class, we find that the interface consists of two plugs that have electrical interaction. This is not what we were looking for: the slider is still a mechanical device, so we will need at least one plug that allows for mechanical interaction. Hence, we head back to the transmission class. From there, we try the signal transmission class instead, as a sort of last-ditch effort. It does not lead us to a suitable model, but it does point to the controller class, which might come in handy later. For now, we abandon our first exploration of the library.
As an alternative approach, we perform a global search for component classes with electrical and mechanical connections. This search yields the classes sensor, actuator, and servo. Considering the application we have in mind, we take a look at the servo class first. In this case, the names of the component classes are sufficient to know the intention of the models in the classes, but of course we could have consulted the documentation if we weren’t sure. By looking around in the taxonomy, we see the controller class once more, which strengthens our impression that it may play a role in the final solution.
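This global search over stored interface descriptions can be sketched as a simple predicate. The plug-domain lists below are our reconstruction for illustration; only the three matching class names come from the example itself:

```python
# Hypothetical interface catalog: component class -> plug domains
library = {
    "electrical transmission": ["electrical", "electrical"],
    "signal transmission":     ["signal", "signal"],
    "controller":              ["signal", "signal"],
    "sensor":                  ["mechanical", "electrical"],
    "actuator":                ["electrical", "mechanical"],
    "servo":                   ["electrical", "mechanical"],
}

def classes_with_domains(lib, required):
    """Return all component classes whose plugs cover every required domain."""
    return sorted(name for name, plugs in lib.items()
                  if required <= set(plugs))

result = classes_with_domains(library, {"electrical", "mechanical"})
assert result == ["actuator", "sensor", "servo"]
```

Because interfaces are part of the formal model contents, this kind of query needs no free-text search: it operates directly on the plug descriptions stored with each class.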
Figure 10: Testbed for the servo solution. (The testbed connects a path generator, the servo and the slider.)
After looking around for a little while, we proceed with our main effort: finding a suitable way to fill in the electrically powered transmission from flywheel to slider. For that purpose, we take a look at the instantiated models in the servo class. After checking the repository of available models, we find that the one shown in figure 9 is a likely candidate: it uses the controller class we have seen before, and it uses the motor class as well. Since a motor would certainly be part of an electrically powered slider system, this model looks promising, and we stick with it for now. Of course, the servo is only half the story: the information to drive the servo has to be delivered to its input somehow. We could use the sensor class we have seen before and rebuild the complete press model with the electrical slider, but that is a rather ambitious step to take at once. Keeping in mind the "keep it simple" adage, we will first use a simple path generator and test a model composed of a path generator, the servo and the slider. Once we are convinced that that model works, we will take another look at the bigger picture. Thus, the model in figure 10 is the one we will be using. We take care that the plug of the path component is such that we can immediately hook up the combination of the servo and the slider to the complete stamping press model.

The servo model we have selected is not a fully instantiated model: we have to determine a suitable specification for each of its submodels. As a first step, we focus on the content of the transmission submodel, since we feel that the other parts of the servo model can be filled in with relatively simple specifications. Keeping in mind the relation between rotation speed and torque of the electrical motor, we decide to use a reductor and a rack-and-pinion combination as a transmission. We also use some flexible couplings to compensate for alignment problems.
Although this specific form of transmission is not available in the library right away, we can quickly build this model, since the library comes with a number of pre-installed models for frequently used components. Some of these models are even fully instantiated. From the documentation that comes with the generic component models, we find that there are no obstacles to using them, so with a few mouse clicks we arrive at the model shown in figure 11.

Figure 11: Specification of the transmission component. (The transmission consists of a flexible coupling, a reductor, a second flexible coupling and a rack/pinion.)

The construction of the technical component model shows the interplay between the knowledge stored in the library and the judgment of the modeler: after evaluating the proposals provided by the library, the modeler decides upon the next step in the search for a competent model. By providing a large number of meaningful links, the library supplies a high level of support, but at the same time the user is free to choose the approach best suited to solving the problem.

At this point, we feel that further decomposition into smaller components makes little sense. Doing so would only lead to excessive detail, not to insight into the essence of our problem. So we switch from the technical component viewpoint to the physical concept viewpoint, i.e. we start specifying bond graph fragments for each of the components. In doing so, we specify which physical effects we assume to be relevant for the behavior of the sheet feeder. The bond graph fragments are all provided by the library. If none of the bond graph fragments in the library suits our needs, we can construct a new one, either from scratch or based on another bond graph fragment. Figure 12 summarizes the choices we have made.

Figure 12: Selected bond graphs for all subcomponents of the metal sheet feeder. (The subcomponents, among them the path generator, controller, amplifier, motor, sensor, flexible couplings, reductor, rack/pinion and slider, are paired with bond graph fragments such as a cycloidal path, a PD controller, a clipped current source, a DC motor with inertia and friction, a velocity and position sensor, and damped springs with backlash, friction and inertia.)

The combination of the technical component viewpoint and the physical concept viewpoint gives qualitative insight into the relevant phenomena. But since we require quantitative results, we have to go one step further and specify mathematical relations. Since we have used instantiated models before, some choices have already been made, and we only need to check the documentation once more to see whether all assumptions hold. For the choices we have to make explicitly, the library supplies sets of linear relations, but specific non-linear relations are available as well. For brevity, we assume that a suitable set of equations can be found for all bond graph elements, so that we do not need to create new sets of equations. The model of the sheet feeder is now ready, and we can perform simulations to answer the question that triggered the modeling effort.

First, we instruct the library software to assemble the model in a form that our preferred simulation software can handle. Then we start the actual simulation and get the result shown in figure 13. We can now use the simulation model to tune the controller, to experiment with different dimensions of the mechanical parts of the sheet feeder, or to reconsider the choice of a particular motor. In the end, we know how accurately we can position the slider, and what power consumption we need to achieve that. With that information, we can decide whether the electrically powered sheet feeder is a feasible alternative to the 'classic', mechanical sheet feeder.

Figure 13: Simulation of the metal sheet feeder. (The plot shows position and current as functions of time.)

After we have finished the model of the electrical sheet feeder, we want to store it in the library for future reference. Since the feasibility of the electrical sheet feeder has been demonstrated, it is likely that future developments in this area will take place. After we have documented the model and explicitly stated the constraints, the library manager is willing to store the model in the publicly accessible library. This makes our model an instantiated model in the library, complete with one or more sets of parameter values. The library manager also takes care of establishing appropriate links to our model, so that interested users are directed to the result of our modeling effort in the future.
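The kind of simulation used to answer the positioning question can be illustrated with a drastically simplified stand-in: a PD-controlled slider mass tracking a setpoint, integrated with forward Euler. All parameter values here are invented for illustration; the real sheet-feeder model also contains the motor, transmission, backlash and friction dynamics selected above:

```python
# A drastically simplified stand-in for the sheet-feeder simulation:
# a slider of mass m driven by a PD controller toward a setpoint.
m, kp, kd = 2.0, 400.0, 60.0     # mass [kg] and PD gains (illustrative values)
x, v = 0.0, 0.0                  # position [m], velocity [m/s]
setpoint, dt = 0.1, 1e-3         # target position [m], time step [s]

for _ in range(int(1.0 / dt)):   # simulate one second
    force = kp * (setpoint - x) - kd * v
    v += force / m * dt          # forward Euler integration of m*v' = F
    x += v * dt

# After one second the slider should have settled close to the setpoint.
assert abs(x - setpoint) < 1e-3
```

Tuning the controller amounts to changing `kp` and `kd` and re-running the loop; in the library setting, the same experiment would be run on the automatically assembled equations of the full model instead of this toy plant.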
5 Conclusions

A library of reusable and sharable models can be a big help in the construction of simulation models. Such a library should not only store the dynamic equations of the models in the form of reusable fragments, but should also provide background information to assist engineers in building models with confidence. We have described the OLMECO library structure that meets these requirements. The contents of the model fragments are separated from their interfaces. As a result, we get a structure of modular and 'pluggable' models. Using three different views on a model, it is possible to describe the functional and conceptual structure of a model, as well as detailed quantitative relationships. Thanks to this structure it is possible to consider one aspect at a time. The library supports the assembly of models by systematically providing alternatives for each step in a sequence of modeling decisions. For each alternative, the library provides explicit documentation on intended usage and applicable constraints. This allows the user to make informed modeling decisions. A significant increase in modeling efficiency is achieved because it is possible to store previously assembled models in the library. These can be used as ready-made solutions or as useful modeling examples serving as an initial proposal for a new model. The selection of a starting point for a model can be done using a keyword search, or by checking model documentation, but it is also possible to browse through the library using the taxonomy structure. The taxonomy structure defines a network of associations between classes of related models.

A prototype of the library structure described in this article has been implemented. The object-oriented database paradigm was advantageous for both the clarity of the implementation and the effort needed to build it. The prototype was used to verify the use of the library in a practical setting. Evaluation by the industrial end users within the OLMECO project indicates that the proposed model structure and the organization of the models are convenient and lead to a significant increase in productivity. Although this article focuses on defining structuring principles for the library, we should bear in mind that populating the library with models is equally important to ensure successful application. Even with a limited repository of models, the prototype library constructed in the OLMECO project has been shown to represent a significant amount of knowledge.

The library structure we propose can be used for applications other than modeling the dynamic behavior of mechatronic systems. Other models, and even other types of (engineering) information, may be mapped onto the library framework. The library structure can also be used in parallel with (or integrated with) engineering document management (EDM) or product data management (PDM) systems.

Acknowledgements

The authors would like to thank Iñaki Martinez (Fagor Arrasate, Mondragon, Spain) for providing the material for the example used in this article, and Johannes van Dijk (University of Twente, Enschede, The Netherlands) for working out the simulation study and the controller design for the metal sheet feeder.

References

[1] M. Andersson. Object-oriented modeling and simulation of hybrid systems. PhD thesis, Lund Institute of Technology, Lund, Sweden, 1994.
[2] Boeing Computer Services, Seattle, WA. EASY5 technical overview, 1995.
[3] P.C. Breedveld. Multibond graph elements in physical systems theory. Journal of the Franklin Institute, 319(1/2):1-36, 1985.
[4] F.E. Cellier. Continuous System Modeling. Springer Verlag, New York, NY, 1991.
[5] F.E. Cellier. Bond graphs: The right choice for educating students in modeling continuous-time physical systems. Simulation, 64(3):154-159, 1995.
[6] T.J.A. de Vries, P.C. Breedveld, and P. Meindertsma. Polymorphic modelling of engineering systems. In Proceedings of the ICBGM'93, pages 17-22, San Diego, CA, 17-20 January 1993.
[7] H. Elmqvist and S.E. Mattsson. Modelica: the next generation modeling language, an international design effort. In Proc. 1st World Congress on System Simulation (WCSS '97), Singapore, Sept. 1-3, 1997.
[8] H.E. Elmqvist, D. Bruck, and M. Otter. Dymola: user's manual version 3.0. Dynasim AB, Lund, Sweden, 1996.
[9] Integrated Systems, Inc., Palo Alto, CA. System build user's guide, 1986.
[10] D.C. Karnopp, D.L. Margolis, and R.C. Rosenberg. System Dynamics: A Unified Approach. John Wiley & Sons, New York, second revised edition, 1990.
[11] G. Kristen. Object-Orientation: The KISS Method. Addison-Wesley, Reading, MA, 1995.
[12] The Mathworks, Inc., Natick, MA. Simulink, a program for simulating dynamic systems, User's Guide, 1992.
[13] MGA Software, Concord, MA. ACSL Reference manual version 11, 1995.
[14] M. Otter and H. Elmqvist. The DSblock model interface for exchanging model components. In F. Breitenecker and I. Husinsky, editors, Proc. 1995 Eurosim Conference, pages 505-510. Elsevier, Amsterdam, The Netherlands, 1995.
[15] R.C. Rosenberg. The bond graph as a unified data base for engineering system design. Journal of Engineering for Industry, 97(4), 1975.
[16] J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, and W. Lorensen. Object-Oriented Modeling and Design. Prentice Hall, Englewood Cliffs, NJ, 1991.
[17] J.L. Top and J.M. Akkermans. Tasks and ontologies in engineering modelling. International Journal of Human-Computer Studies, 41(4):585-617, 1994.

About the authors
Arno Breunese received his M.Sc. and Ph.D. degrees from the University of Twente in 1992 and 1996, respectively. His Ph.D. research focused on automated support in mechatronic systems modeling, i.e. using computer tools to make models available to a wide community of engineers while ensuring high model quality through extensive verification. Much of that research is reflected in this article. After obtaining his Ph.D. degree, Arno Breunese joined the research department of Océ-Technologies, a leading supplier of hardware and software to support the reproduction, distribution, presentation, and management of information. His research interests include the role of analog and digital documents, information sharing, document management, and workflow.

Jan L. Top is head of the Production Management department at ATO-DLO, the Dutch institute for food, non-food and systems research. Present research projects include integral management of wastewater treatment, planning systems for the food industry, modeling and control of the production of corrugated board, and optimization of the internal logistics of food production plants. Top's research interests are intelligent tools for optimizing industrial processes and systems for knowledge management. He is presently responsible for the development and implementation of a method for modeling (bio)chemical and (bio)physical systems based on the approach described in this article. Moreover, this modeling method is being extended to describe logistic aspects based on discrete events, introducing a seamless integration between continuous and discrete time models. Top received an M.Sc. in applied physics and a Ph.D. in computer science, both from the University of Twente.

Jan F. Broenink received his Ph.D. in Electrical Engineering in 1990 from the University of Twente. His Ph.D. research was in the design of computer facilities for modelling and simulation of physical systems using bond graphs. He is presently Assistant Professor at the Control Laboratory of the Department of Electrical Engineering of the University of Twente, where he is project leader of software tools development. His research interests include the development of computer tools for modelling, simulation and implementation of embedded control systems, and robotics.

Hans Akkermans is professor of Application- and Business-Oriented Informatics at the Free University Amsterdam, and a consultant in knowledge and information management. He received his M.Sc. and Ph.D. degrees, both cum laude, in physics from the University of Groningen, The Netherlands. He served as a senior policy officer at the Dutch Ministry of Education, Culture and Sciences, and has been the manager of the Software and Information Engineering department of the Netherlands Energy Research Foundation ECN. His research interests include the development and use of knowledge-oriented methods and intelligent systems, focusing on innovation in industry and business applications. He has published over 100 international articles and papers in various fields of informatics, physics and engineering, and participates in many national and international cooperative industry-university projects. The work reported in this article was carried out when he was at the University of Twente and ECN.