Frameworks for Increased Portability and Product Lifetime

Steve Quenette, Shane Fernando
Victorian Partnership for Advanced Computing
[email protected] [email protected]

Paul Beckett
School of Electrical & Computer Systems Engineering, RMIT
[email protected]

Abstract. A major part of the cost of software development is incurred in post-release maintenance. Portability between hardware architectures and operating systems, and the maintenance of package release numbers, have become expensive issues. Similarly, the use of third-party APIs and libraries can cause product-lifetime and porting problems. This paper describes how an adaptor-wrapper oriented framework approach to application development can avoid these problems and provide product portability and extended lifetime.

1. INTRODUCTION

The nature of real-time simulations for training is continuously changing, driven in part by improvements in hardware and software capability. As the requirements for fidelity in presentation, behavior, tactics, etc. increase, so does the importance of developing solutions that address these issues in a cost-effective and extensible manner. There is a trend towards the use of COTS technology and component reusability to constrain costs. The work reported here has explored the importance of these technologies and their relevance to the development of a generic simulation framework. This paper presents an examination of existing simulation technology and its suitability for the development of portable, extensible and flexible real-time solutions. It then identifies the main problems experienced with a small but realistic example framework.

The key driver is the reduction of software development costs. For this reason alone, component-based software engineering (CBSE) coupled with object frameworks is generating tremendous interest in the software engineering community [1]. However, as also summarized in [1], a number of additional advantages can be derived from using generic simulation frameworks. These include rapid development via the reuse of common components, easier maintenance, and the ability to upgrade easily to suit a variety of applications. Further, they support the assembly of systems through composition [3]. Composition primarily refers to assembling a set of representations of the "real world", such as objects, processes and support elements like data collectors and external system interfaces. These models run within a framework that handles central coordination issues such as time management and message passing. The underlying simulation framework is fairly static and its composition is modified only infrequently.

Bélanger et al. [2] make the point that a simulation framework can standardize and automate the process of simulation development in accordance with generalized standards. An extensible simulation framework can be tailored to implement industry- or domain-specific capabilities, processes and standards. Furthermore, an extensible simulation framework can encapsulate and hide any proprietary details internal to an application-specific simulation component while maintaining an open, standards-based interface to the outside world. Stytz and Banks contend that "..only an open architectural approach will provide the re-use and cost savings needed to insure the continued vitality and expansion of the CGA community and to insure meaningful improved performance by CGAs" [1]. Courtemanche et al. [4] have identified another major advantage of using a framework approach: the actual code needed to develop a model is small for the implied capabilities. This arises because architectural decisions are represented by the framework at the appropriate level rather than in the code itself. For example, simply extending a base class allows the developer to inherit many capabilities that previously needed to be either designed in or accounted for in the developed model. However, the disadvantage of this approach is that the implied limitations of the class are also inherited.

2. FRAMEWORKS FOR SIMULATION APPLICATIONS

An object framework can be specified and used at a number of levels within an architecture. It provides the communication and coordination services needed to assemble applications from components and binds the components together. A framework provides an execution environment for components and objects, and provides services and facilities to support a set of primitives for a group of components, thereby providing a formal mechanism for defining the interface between objects/components.


Frameworks are therefore a natural extension to Object Oriented Development (OOD). However, whereas traditional OOD will attempt to discover an exact and resolved design for a problem, frameworks attempt to determine a general solution for the problem. The framework user (i.e. the end product developer) then implements the specific domain details via mechanisms provided by the framework.

[Figure labels recovered from the page layout: Figure 2 shows a Component within the System, comprising an External Interface (Responsibilities), General Component Code, a Wrapper Interface (Common subset) and a User Specific Implementation; Figure 1 shows the Framework invoking User Application Code.]
Figure 1: Framework Inversion of Control (from [4])

Another key difference between the framework approach and other software engineering techniques is the inversion of control flow implicit in the framework. Figure 1, redrawn from [4], illustrates this point. As outlined above, the typical architecture of the framework is a collection of components, each of which can provide a link to a specific service for the program. For example, a "Physics Engine" component may provide a service relating to the dynamics of the modelling process. It should be reiterated, however, that rather than these components actually implementing the service, they provide a bridge to the implementation. That is, the service used throughout the program is accessed via the component's interface. It is then the component's responsibility to form the link between this interface and potential solution implementations. Hence, if a solution becomes dated, a replacement can be made and interchanged, with minimal redevelopment time. In a simulation application, these components can represent areas that traditionally have porting and lifetime problems; an Interoperability Component is a good example. In this particular case, the first implementation achieved might be a DIS wrapper. If the framework is designed correctly, an HLA wrapper could be developed at a later time. In common with other approaches, the key issue here is the correct design of the component interface. As will be identified later in section 3, this requires an in-depth understanding of the likely impact of both existing and likely future technologies. Frameworks, then, represent a collection of interface definitions to components, as well as the sequence and data types of messages between and within components. Specific implementations then only have to adhere to this pre-defined set of data and interface requirements.

Figure 2: A Generalized Object Framework

3. METHODOLOGY

The methodology for discovering a framework is an iterative process that may take several cycles before the desired outcome is achieved. In contrast to the ideal of pure top-down design, frameworks are developed rather than planned, and hence it is not unusual for a designer to be continuously tweaking the framework. This may appear to fly in the face of current software design methodologies that mandate rigorous interface planning before coding commences, but in reality it represents a closer match to the way software is typically developed. There is a real sense in which a framework will never be complete, because its use across many applications provides continued opportunities to iterate the design based on improved domain knowledge. While our interest here is to apply the framework idea as a means of building simulation systems, the methodology for discovering frameworks is not bound to the simulation domain, but can be applied to many other fields. The following steps describe the general development process.

Step 1: Identify the major components of the system

The first step involves capturing user requirements and expanding them into a set of simulation requirements (e.g. [3]). A list can then be built representing items that need to be included in the target simulation and which represent the major components, or building blocks, of the system. There are several goals during the development of the component list. The major component list should be common to similar simulation applications. By identifying these components, code need only be written once and reused in future simulation developments. For example, an application in "real-time simulation for training" may have some or all of the following major components:

• Image Generator
• Database
• Interoperability
• Input
• Physics
• Computer Generated Forces
• Core engine

From this list, one can easily identify components that can be implemented with pre-existing software (as may be provided by off-the-shelf technology), with open source, or by the operating system itself. The idea is not to reinvent the wheel, nor is it to lock the system into one particular implementation. Identifying these major components allows a layer of abstraction to be developed. A wrapper must then be written for each implementation, such that it can "plug into" this abstract component (see Figure 2). As outlined previously, the goal is not necessarily to find the best implementation for each particular component, but to investigate all possible implementations and to discover the set of interfaces that are common to all of them. This is clearly a more difficult task, as it requires insight into a number of alternative implementations rather than just one.

Step 2: Define the responsibilities of each component

A component exists in a system for the purpose of performing a task. Conversely, this task is the responsibility of that component alone. If the component responsibilities seem vague at this point, then revert to step one, remodel and continue. This step defines the interface between the system and the component. For the example list of major components given above, an outline of their responsibilities might be as follows:

• Image Generator: draws the virtual world state on to a display device.
• Database: holds both the current state of the world (the scene) and the persistent state of the world (model files, etc.).
• Interoperability: enables the simulator to participate in a distributed simulation environment.
• Input: obtains user input from physical devices.
• Physics: evaluates the state of simulation objects at every time step.
• Computer Generated Forces: evaluates the controls of simulation entities that are computer controlled (not human controlled).
• Core engine: the state machine that controls and runs the components.

Using the Image Generator as an example, key external interfaces may be:

• Initialize the visuals.
• Draw the scene.

Step 3: Identify how these components interact

If we treat each component as a black box then, in its simplest form, this box accepts some form of input, processes this information, and produces some output. In this step we identify the form of these inputs and outputs, and hence the mechanism for their transfer. This also involves determining the messages and data types required to satisfy their responsibilities. In the black-box view, all details are hidden from the user. However, the White-Box Framework [4] is a useful concept here, as it encourages the use of inheritance to specialize subclasses with additional behaviors (whilst obeying the internal conventions of the underlying "superclass"). Of particular interest at this point is the issue of component dependencies. This includes artificial dependencies [5] that may seriously affect the ability to determine cause and effect in the simulation results. Within the Image Generator example described above, the scene is an obvious data type passed into the component. The same process is repeated to formalize the interactions required by wrappers to the components. Image Generators generally have an initialization phase; then, after the initial scene is created, the renderer is started and loops the drawing process for each application cycle until the program ends. Hence init, start, draw and end would satisfy the wrapper interface in this case. The scene is important to start and draw, whilst the display conditions are important to init.

Step 4: Design the framework architecture

The model is designed using Object-Oriented notation. This should generate an overall system model and a model for each component. It should be remembered that a framework will ideally be derived from a study of existing problems and solutions that demonstrate that a common solution is, in fact, feasible.

Step 5: Code the core of the framework

Code the core engine component. This is the state machine, or main looping program. This includes coding the abstract components and dummy plug-ins if necessary. This allows the components to be tested in the environment in which they will ultimately be used.

Step 6: Implement one of the engine components

Code a wrapper between a component and a chosen implementation.

Step 7: Re-assess the framework

Framework development is an iterative process. The design may require change; the component wrapper may be too restrictive (i.e. too difficult to use), or too loose (leading to performance problems). Regardless of the reason, a change here alters the responsibilities of a component, and step 2 onwards must be repeated.

Step 8: Implement the next component

Repeat the process until all components are completed. As new components are added, they must be checked for correct interaction with all implemented components.

4. A TRIAL FRAMEWORK DEVELOPMENT

The methodology outlined above was applied to a trial framework to determine its effectiveness in practice. The characteristics of the framework were deliberately kept simple, but represented a typical application in the real-time simulation domain. The application theme chosen was a proof of concept for a fire fighter trainer. Ideally, such a trainer would be made up of a distributed consortium of first-person trainers focusing on team training for hazardous situations. Hence each trainee would assume a role in the synthetic environment, and be able to interact with the environment as well as with other concurrent trainees.

Such a task fits perfectly into the framework concept. The initial work was intended solely as a proof-of-concept trainer, so the fidelity of the human behavior modelling, the physics modelling and the visual presentation were kept simple. For example, the physics engine exercised only Newtonian equations. Thus there is an implicit understanding that improved components would need to be incorporated at a later date if this were ever to be used as a real training package. The framework permits this to occur in a straightforward manner. Our proof-of-concept project concentrated on building the framework that drives the basic application. This application took on the role of a trainee fire engine driver in a hazardous scene. The scene was a simple OpenFlight model of a virtual world, while a secondary role of an "ordinary vehicle" could be scripted to represent additional obstacles. The implementation of a viewer mode permitted the replaying of trainee sessions; this could be a free-flying, tethered or fixed-position viewer. To accommodate team training, the application was distributed across a number of simulator stations on a network, one per (possibly simulated) trainee. Each station was a member of the synthetic environment, and behaved accordingly. From this brief requirements outline the fundamental framework was designed. Each of the different types of roles making up the environment required a number of items such as:

• User input handlers.
• Computer automation/Computer Generated Forces.
• Physics models.
• Visual models.
• Distribution models (note that the means and quantity of communication is usually a function of the object's type and state).

Thus each station was responsible for all of the items listed above for each entity running on that station. The inclusion of features such as collision detection represents extra load, as this involves checking entities on each station not only against other entities on that station, but against all entities in the simulation environment.

The project team had access to SGI's IRIS Performer and MultiGen's Vega. Given that Performer was able to implement the visualization, database management and collision detection components, as well as being freely available for Linux and readily available on the Irix platforms, it was heavily used. The physics components were implemented with custom code. The interoperability component was implemented using DMSO's HLA RTI. Much emphasis was placed on building an interoperability component that was not overly biased towards the RTI architecture. This is difficult, as the RTI is quite flexible and a well-designed framework in itself. The application was then developed in four phases:

1. The Core engine (management & entities) and Visualization components were coded. To complement this, dummy entities with hardwired motion were included. The fixed-place and tethered viewers were also developed, forming an early testing and debugging environment.

2. A first pass at developing the interoperability, physics and input components was then undertaken. This stage revealed many flaws in the initial interoperability and input component designs, and they were redesigned.

3. A second pass at developing the interoperability and input components was made.

4. Finally, the "vehicle" implementation of the interoperability, user input, physics and visuals was created. This process was short and simple. At this point the framework satisfied the requirements of the proof-of-concept application.

To ensure cross-platform portability of the application, safe coding techniques were utilized. The GLib library (www.gtk.org) was used to ensure data types and timing routines were consistent across platforms. Techniques were also needed to ensure that the task of building the simulator on different platforms was painless. The GNU "autoconf" tool (commonly invoked as "./configure") provides a portable build environment that encourages portable code and documentation. Help on the efficient use of this tool can be hard to find, but it goes a long way towards guaranteeing compilation portability across common platforms.

5. LESSONS LEARNT

During the development of our test framework, several issues and technical problems arose. The problems and solutions that had a significant impact on the final result are outlined in the following sections.

5.1 Component Responsibility – Scope and time

It is often difficult to determine the actual partitioning of responsibilities between component levels. For example, collision detection is vital in a physically based simulator. However, collision detection typically involves making comparisons between object locations at the polygonal level, which is computationally intensive. Hence collision detection and database/scene technology go hand in hand, making it difficult to cleanly separate the physics component (where collision detection should conceptually reside and where the results are used) from the scene graph (i.e. the database component) with its proprietary tool for collision detection. In the particular experiments reported here, a decision was made to use the proprietary functionality for collision detection directly, rather than create a flexible wrapper for this technology.

5.2 Component Responsibility – Technologies spread across components

In many cases a third-party product can fulfil many of the fundamental components described above. To that end, many components have a high dependence on each other: for instance, the Image Generator – Scene Graph – Database link. This does not in itself cause a problem; an implementation that ventures across multiple components simply has the wrappers for those components merged into one. The problem emerges when one dependent item has to be replaced, because all of the related components then have to be replaced at the same time.

5.3 Speed vs. Portability vs. Framework integrity

A speed trade-off had to be made during the development of the simulator. System- and implementation-dependent speed optimisations were not possible, since they would violate the goal of making this simulator portable and extensible.
5.4 Major computer hardware architecture differences

Even though the objective is the development of a context-free coding scheme, it is still not possible for a framework to entirely ignore the differences between underlying computer hardware models. For systems with only one processor this is straightforward: no additional considerations are really necessary. However, for shared-memory multiprocessors, such as some SGI workstations and the SGI Onyx2 (as was eventually used by the project team), multi-process (otherwise known as forked) coding was required. In our particular environment, forked processes share memory space via the SHMEM API. Parallel or clustered hardware systems are becoming increasingly popular and cost effective, particularly for computational (as opposed to visualization) tasks. This class of system tends to use the MPI API. It is too difficult to abstract these competing memory and processor management approaches (i.e. SHMEM and MPI) into a generic component; instead, one or the other should be chosen early in the design process. In our case, the SHMEM alternative appears to be the more useful for simulation applications, given the typical hardware platforms on which they reside.

6. CONCLUSION

The development of a portable and extensible simulation framework is a cost-effective and viable approach to creating simulation applications. Component reuse allows rapid development of applications, and the plug-in style design of the framework permits easy upgrades to existing simulators based on this technology. Such flexibility makes the framework methodology a highly useful piece of technology, one that supports portability as well as helping to significantly extend product lifetime.

7. REFERENCES

[1] Stytz, M. and Banks, S. (1997) "The Development of Computer Generated Actor Frameworks for Distributed Simulation", 9th Conference on Computer Generated Forces & Behavioural Representation.

[2] Bélanger, J-P.; Fortier, P.; and Lam, L. (1998) "Lessons Learned from a Framework Approach to HLA-Compliant Federation Development", The 1998 Spring Simulation Interoperability Workshop, Orlando, FL, 9-13 March, pp. 959-967.

[3] Aronson, J.S. and Wade, D. (1998) "Model Based Simulation Composition", The Fall Simulation Interoperability Workshop, Orlando, FL, 13-18, pp. 290-295.

[4] Courtemanche, A. and Burch, R. (1999) "Using and Developing Object Frameworks to Achieve a Composable CGF Architecture", 9th Conference on Computer Generated Forces & Behavioural Representation.

[5] Youmans, R. "Toward An Object Orientated Design For Reproducible Results in Military Simulations", SISO 9th CGF.