OpEM Distributed Simulation

John R. Clymer
Applied Research Center for Systems Science
California State University, Fullerton
Fullerton, California 92634

Daniel Hernandez
Space Transportation Systems Division
Rockwell International
Downey, California 90241
Distributed simulation based on Operational Evaluation Modeling (OpEM) is discussed. OpEM, a system design and analysis methodology, uses a two-dimensional, directed graph language to describe system operation. The OpEM simulation programming system, which implements this methodology, includes object-oriented event processing routines, moving-object routines, an expert system controller, and continuous system simulation routines. These routines are required to model large systems, such as Strategic Defense Initiative (SDI) systems, that are characterized by closely coupled, context sensitive interactions. These interactions are modeled using a large global state space consisting of both discrete and continuous state variables. Execution of such large, complex models on uniprocessors can be slow; therefore, possible speedup by execution on a multiprocessor system, a set of processors that share memory, is investigated. A generic surface-ship warfare simulation, characteristic of many large models, is executed on a simulation of a multiprocessor system to investigate speedup.
Introduction
Performing sensitivity analysis during the Mission Area Analysis and Concept Exploration phases of the system acquisition process with a large simulation program can be very slow on a uniprocessor. Early in the Demonstration and Validation phase it is necessary to model some processes continuously rather than with discrete events. The additional modeling fidelity slows down simulation program execution even more. Analysis often takes longer and is less effective because of the large execution time. System scientists have long considered parallel computing an excellent solution to this problem, and as multiprocessor systems become more capable and less expensive, it is becoming increasingly practical. Can distributed processing be used effectively to speed up large, closely coupled simulations?

Much research has been done during the last ten years to experiment with distributed simulation. The logical process concept has been discussed extensively, starting with [4]. Logical processes interact only through message passing. Closely coupled, context sensitive, global interactions involving the state of the physical system and its operations must be implemented as message chains, resulting in events from several processes being executed at the same simulated time. If these processes are being executed on different processors, they must synchronize, resulting in reduced speedup.

To better understand the closely coupled systems discussed in this paper, Figure 1 represents a system in terms of structure, operation, and decision making. Structure describes that part of the system that is constrained by the laws of physics, and it is described by a set of interconnected entities. Operation describes that part of the system that is constrained by the laws of synchronicity, and it is described by a set of communicating processes. The decision making part considers these constraints to make optimal decisions that maximize system effectiveness. The constraints are described by subsets of the global state space. A decision is implemented by an event chain.

Figure 1. Structure, operations, and decision making

This paper discusses distributed simulation applied to highly context sensitive systems where many processes share a large global state space. System definition, the system design procedure, and the Operational Evaluation Modeling (OpEM) methodology are discussed briefly. With this background, distributed simulation is explored. The conservative approach is described in terms of the logical process concept. Experimental results are presented that show the speedup gained versus the loss of causality using the moving time window (MTW) approach [16]. The logical process concept using MTW is evaluated for context sensitive systems, and the requirements of combined discrete event and continuous simulation (possibly real time) are presented.

System definition

System structure

System structure is defined as an interconnected set of structural elements (entities). Interfaces between elements define how elements communicate with each other (data and control flow). Operation of each element is determined by transformations that relate its inputs to its outputs. The study objective, using system structure, is to demonstrate and validate the system effectiveness achieved by a particular structure. Structure can be simulated either continuously or with discrete events. Discrete event simulation is often accomplished using dynamic lists of entities that enter and leave the system. Each entity has a set of attributes that describes its current state (present activities, communications with other entities, and capabilities). Entities may be organized into a relational data base using the inheritance of attributes and functional flow dependencies to relate them. An explicit formalism for the structural view simulated using discrete events is found in [17].
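As an illustration of the structural view, the following minimal sketch shows how an entity record carrying state attributes, and related to other entities through inheritance, might be represented. The class and attribute names are hypothetical, not part of OpEM.

```python
# Minimal sketch of a structural entity record (illustrative names only):
# each entity carries attributes describing its current state, and entity
# types can be related through inheritance of attributes.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    activities: list = field(default_factory=list)    # present activities
    links: list = field(default_factory=list)         # communications with other entities
    capabilities: dict = field(default_factory=dict)  # what the entity can do

@dataclass
class FireControlSystem(Entity):   # inherits the Entity attributes
    max_tracks: int = 1

fcs = FireControlSystem(name="FCS-1", max_tracks=2)
fcs.activities.append("tracking target 1")
```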
System process

System process is defined as the set of all possible sequences of system states and events that occur. System process is represented as a network of interacting subprocesses. Since a subprocess can also be a system process, a top down hierarchy of system processes is possible. Subprocesses operate asynchronously of one another except for interactions such as synchronization and resource contention. Operation of each function of a subprocess is described by what are called mission attributes, which are parameters that characterize process behavior.

An example of system process is as follows. Consider a system in which a job arrives every 5 minutes. Each job has the functional flow diagram shown in Figure 2. Tasks T1 through T4 are each performed by a separate resource shared by all jobs in the system (resource contention). Task T1 must be completed prior to the start of tasks T2 and T3. Tasks T2 and T3 of the same job can be done simultaneously; however, both tasks must be completed prior to starting task T4 (synchronization).

Figure 2. User task

System process describes the operation of a system in terms of sequences of states and events. The study objective, using system process, is to determine the effect of time critical interactions (resource contention and synchronization) on system effectiveness. An explicit formalism for system process simulated using discrete events is found in [6].
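To make the job example concrete, the following minimal sketch implements it as a discrete event simulation. It uses the third-party SimPy library rather than the OpEM simulation programming system, and the task durations are assumed for illustration.

```python
# Job example: T1 precedes T2 and T3, which run in parallel on separate
# resources; T4 starts only after both complete. Each task's resource is
# shared by all jobs (resource contention).
import simpy

DURATIONS = {"T1": 3.0, "T2": 4.0, "T3": 2.0, "T4": 1.0}  # minutes (assumed)

def task(env, name, resource):
    with resource.request() as req:   # resource contention: wait for the shared resource
        yield req
        yield env.timeout(DURATIONS[name])

def job(env, jid, res):
    yield env.process(task(env, "T1", res["T1"]))
    t2 = env.process(task(env, "T2", res["T2"]))  # T2 and T3 of the same job
    t3 = env.process(task(env, "T3", res["T3"]))  # run simultaneously
    yield t2 & t3                                 # synchronization before T4
    yield env.process(task(env, "T4", res["T4"]))
    print(f"job {jid} done at t={env.now:.1f}")

def arrivals(env, res):
    jid = 0
    while True:
        env.process(job(env, jid, res))
        jid += 1
        yield env.timeout(5.0)                    # a job arrives every 5 minutes

env = simpy.Environment()
res = {t: simpy.Resource(env, capacity=1) for t in DURATIONS}
env.process(arrivals(env, res))
env.run(until=30.0)
```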
Decision making

Decision making to select operations and allocate resources to achieve system goals is described in Figure 3. The system process is divided into three types of subprocesses that interact (originally discussed in [1; 2]). An expert system can be used to decide highly context sensitive cases [5; 8; 9], or an algorithm can implement a very simple set of rules in less context sensitive or context free situations.

The disturbance process describes operation of the environment with which the system interacts. For a biological organism, the disturbance process would be the operation of all predators and other dangers the organism is forced to contend with to survive.

Figure 3. Cybernetic systems

The object process represents operation of the system. It represents all possible organism behaviors as it obtains its living and contends with predators and other dangers. The regulator process represents decision making of the system, which selects from among all possible system behaviors the action it considers potentially most effective. The regulator process receives information from the object process on the status of the disturbance and system entities and processes. The regulator considers this information and evaluates possible system behaviors in anticipation of the disturbance. When a decision is made, the regulator process directly executes one or more events in the object process that start a system response.
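The following schematic sketch illustrates one pass around the disturbance-object-regulator loop. The state contents and the rule are invented placeholders; in a highly context sensitive case the rule would be replaced by something like the OpEM expert system controller.

```python
# Schematic sketch (not OpEM code) of the cybernetic loop: the regulator
# reads object-process status, anticipates the disturbance, and executes
# a decision event directly in the object process.
def disturbance():                        # environment the system contends with
    return {"threat_range_km": 18.0}

def regulator(object_state, threat):      # decision making (trivially simple rule)
    if threat["threat_range_km"] < object_state["engage_range_km"]:
        return "assign_weapon"
    return "continue_tracking"

def object_process(object_state, event):  # system operation responds to the event
    object_state["last_event"] = event

state = {"engage_range_km": 20.0}
event = regulator(state, disturbance())   # regulator decides...
object_process(state, event)              # ...and directly executes an event
print(state)
```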
System design and analysis methodology

The system design and analysis methodology discussed next is greatly facilitated by using OpEM distributed simulation based on structure, process, and decision making. System engineering identifies system requirements from mission objectives and allocates them to lower levels of system description. Levels of system description are shown in Figure 4. The phases of a system engineering project are:

(1) mission area analysis, to establish system missions and mission objectives (purposes);
(2) concept exploration, to identify all feasible system concepts to achieve the objectives;
(3) demonstration and validation, to evaluate architectures implementing concepts, select one (or more), and document it (them);
(4) full scale development, to perform the hardware and software design required to implement the architecture(s) selected and demonstrate operational effectiveness and suitability;
(5) production; and
(6) operation.

The first step during any phase is to evaluate the purposes for the system. A customer/producer dialogue occurs to assure that the right problem is being worked on. OpEM directed graph models greatly facilitate this dialogue. Functional analysis is performed to decompose the mission and mission objectives into a functional hierarchy. Functional interfaces are defined, and requirements for system attributes (structure and process) are specified. The OpEM directed graph models are translated into simulation programs. Process analysis is performed using these programs to determine the effect of time critical interactions and to develop optimal decision rules. The OpEM expert system controller with its run time interface greatly facilitates developing optimal decision rules. System synthesis is performed to develop the system block diagrams that define system structure and document the system design. Several alternative designs may be specified when more than one conflicting decision criterion exists. Trade studies are done using a simulation model to evaluate these alternative designs and select one (or more) for further development. OpEM distributed simulation could greatly facilitate this system design and analysis procedure by reducing the time required to complete the analysis.

What is OpEM?
Operational evaluation modeling

The Operational Evaluation Modeling (OpEM) graphical language [6; 7] explicitly describes the operation of a system in a way that includes structure and decision making. It allows structure, operation, and decision making rules to be varied during system design, and it supports a top down, user oriented system design procedure throughout the system lifecycle. OpEM is one tool in the larger field of operations research, which employs a collection of modeling and optimization techniques to analyze the behavior of systems.

A key feature of OpEM that facilitates explicitly defining, visualizing, understanding, and analyzing system operation is the concept of a parallel process expressed using a two-dimensional, graphical language. Some other approaches also use graphical languages. Petri nets describe processes using a graphical language, but they cannot represent most of the context sensitive transitions that are subject to decision making. SLAM and SIMNET are graphical languages that describe transactions flowing through a network of queues and resources, but they generally do not represent either structure or operations as clearly as OpEM.

The OpEM directed graph language represents system operation using primitives for process flow that describe context sensitive interactions among parallel processes, such as resource contention and process synchronization. The OpEM language also describes system structure through manipulation of state variables, and it includes decision making through the use of logic specified for wait states.

Figure 4. Levels of system description

An OpEM directed graph model is used as a language to define architectures. It provides a basis for trade-off studies that compare alternative architectures and technologies for system implementation. The two-dimensional directed graph model describing a system allows system operation to be understood with little difficulty by non-programmers, so the customer and design team members can participate in the system design. The input parameters used by the model, called mission attributes, provide the structure for system specification. For more information about OpEM refer to [6; 7].

The Surface Ship Warfare Simulation model is discussed as an example of OpEM application. In this model, attacks by air, surface, and subsurface targets against a warship such as a destroyer or cruiser are simulated. Ship resources for air and surface warfare are Fire Control Systems (FCS), Launchers (LNS), Guns (GUN), and Close-In Weapon Systems (CIWS). An FCS is required to achieve lock-on and FCS track of a target. An FCS track is needed to compute missile and launcher data. A launcher is aimed to achieve the specified missile trajectory against either air or surface targets. A gun is aimed and fired at surface targets, and a CIWS is aimed and fired at air targets within its range. The number, location, and magazine size, if appropriate, are variable for each resource. The current configuration consists of two FCS subsystems, one LNS, one GUN, and one CIWS.

Figure 5 shows a directed graph model of the air warfare processes that represents a subset of the surface ship warfare simulation. The circles represent states, and the directed line segments represent events. The top process describes target operation as it attacks the ship. The motion of each target is described by a motion table that is an input to the model. The motion table specifies the path, speed, and actions of the target. The next process is ship motion control, which allows the expert system to order ship maneuvers. The third process models detection and system track, threat evaluation, and weapon assignment operations. The expert system controller is used in event E10 of that process to allocate ship resources to targets. Process four describes missile launch and flight. Processes one, three, and four are duplicated for each target in the scenario. Process five models FCS operation, and process six models the launcher. These processes are duplicated for each resource in the system. Processes seven (GUN) and eight (CIWS) are not shown. The directed graph model assists an analyst in visualizing ship operation, and it describes all possible timelines. Timelines differ due to time variation of states and alternate transitions from states. An analyst must be able to visualize interactions among parallel processes. Visualization results in the insight needed to find optimal decision rules and, thus, accurately compare system designs.
Distributed simulation

Conservative approach
Figure 6 shows a logical process (virtual) node. The distributed simulation algorithm shown was derived from [3; 4; 13; 14; 15], and others. There are more virtual nodes than physical processors available in the system. The operation of each virtual node is a system task that may be active (executing or blocked) or inactive. An executing node has at least one event that can be executed immediately. A blocked node has events that cannot be executed immediately because of synchronization requirements. A node is inactive if it has no events scheduled. The collection of node tasks is executed under a multitasking operating system. OpEM processes are distributed to virtual nodes to minimize node synchronization.

Figure 5. Directed graph model of surface ship warfare

Predecessor nodes send messages (events or state variables) to Node Ni. Node Ni sends messages to successor nodes. The channel time Cij is the last known simulation time of predecessor Node Nj by Node Ni. For each node, simulation time Tnow(i) is the minimum of all channel times and the time of the next node event, which is the smallest time over all node event queues. A request-time message is sent to a predecessor node Nj if channel time Cij is less than the time of the next node event and the time has not been requested before. Node Nj sends a reply when Tnow(j) is equal to or greater than the requested time. Node Ni blocks itself if Tnow(i) is less than the time of the next event.

If all nodes are blocked, a deadlock exists. To break deadlock, the minimum next-event time (global time) of all the active nodes must be determined; for nodes with simulation times less than global time, these simulation times are set equal to global time. Two methods are specified to break deadlocks: deadlock detection and recovery, and deadlock avoidance (using additional null messages when a node blocks or becomes inactive).
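The time-advance rule just described can be sketched as follows. The data structures (channel, requested, and so on) are illustrative, not the OpEM implementation.

```python
# Sketch of one node's conservative time advance: channel[i][j] is Cij,
# the last known simulation time of predecessor Nj at node Ni.
INF = float("inf")

def advance(i, channel, next_event_time, requested):
    """Return (tnow_i, blocked, requests) for node Ni."""
    t_next = next_event_time[i]   # smallest time over node i's event queues
    tnow_i = min(min(channel[i].values(), default=INF), t_next)
    requests = []
    for j, cij in channel[i].items():
        # request Nj's time if its channel lags the next event and this
        # time has not been requested before
        if cij < t_next and requested[i].get(j) != t_next:
            requested[i][j] = t_next
            requests.append((j, t_next))   # request-time message to Nj
    blocked = tnow_i < t_next              # cannot safely execute the next event yet
    return tnow_i, blocked, requests

def break_deadlock(tnow, next_event_time, active):
    # all nodes blocked: advance laggards to the minimum next-event time
    global_time = min(next_event_time[i] for i in active)
    for i in active:
        tnow[i] = max(tnow[i], global_time)
    return global_time
```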
Figure 6. Logical process or node

Simulation of the Conservative Distributed Simulation Approach

Each virtual node Ni maintains its own simulation time Tnow(i) and event queues in global memory. The simulated distributed-simulation program executes, for each executing virtual node Ni, all events having execution times within the node window relative to Tnow(i). Events for each node are placed on the concurrent events list. After all nodes are considered, all events on the concurrent events list are executed, completing an execution cycle. Of course, an execution cycle is an approximation of actual concurrent execution that does not consider variable event execution time or the fact that some events are executed asynchronously of other events.

Parallel processing performance parameters are obtained during each execution cycle. The K factor is the number of execution cycles (sequential tasks) divided by the total number of events executed (parallel tasks). Speedup is the minimum of 1/K and the number of physical processors.

After all events in a cycle are executed, it is determined which nodes are still active. To be active, a node must have at least one event scheduled to occur in one of its event queues. The time of the next event for each active node is obtained, and the minimum event time (global time) is determined. A message cycle then determines Tnow(i) for each Node Ni. The following steps are repeated until there is no further change in Tnow(i) for all i. Such repetition is required to simulate chained replies to request-time messages.

First, the channel times and request-time count are updated for each Node Ni. If simulation time Tnow(j) is greater than or equal to the time requested by Node Ni, channel time Cij is set equal to Tnow(j). This simulates a reply from Node Nj to a request-time message from Node Ni. If channel time Cij is less than the next event time for Node Ni and no previous request for this time has been made, the request-time count is incremented and Time Requested[i,j] is set equal to the time of the next event of Node Ni. This simulates a request-time message sent from Node Ni to Node Nj.

Second, the simulation time Tnow(i) of each Node Ni is updated. The minimum channel time over all predecessors is determined. If the minimum channel time is greater than or equal to the time of the next event, or the time of the next event is within the global window relative to global time, then Tnow(i) is set equal to the time of the next event. In this case, the next event will be executed in the next cycle. Otherwise, Tnow(i) is set to the minimum channel time or global time, whichever is greater, and the node is blocked during the next cycle.

A control flow diagram is shown in Figure 7 [12]. Each directed line goes from predecessor node to successor node. Shown are the synchronization requirements for the surface ship warfare simulation. Processes one and three to six for target one are shown synchronizing with node 3'. Node 3' only executes event E10 of process three, and it is the only node that synchronizes with all the targets to implement global decision making.

The deadlock avoidance scheme described above is intended to be implemented using shared memory rather than message passing. The information required for deadlock avoidance is placed in shared memory. This information includes global time and, for each node Ni, tnow, the next event time, and the time requested from each successor node. Mailboxes in global memory are used instead of one node sending a message to another.
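A sketch of the simulated message cycle described above follows, assuming illustrative data structures held in shared (global) memory. Replies and request-time messages become reads and writes of these structures, repeated until no Tnow(i) changes, which captures chained replies.

```python
# Fixpoint sketch of the two-step message cycle: step 1 simulates replies
# and request-time messages through shared memory; step 2 updates Tnow(i).
def message_cycle(nodes, preds, channel, tnow, next_event, requested,
                  global_time, global_window):
    changed = True
    while changed:                           # repeat to capture chained replies
        changed = False
        for i in nodes:                      # step 1: replies and requests
            for j in preds[i]:
                if tnow[j] >= requested[i].get(j, float("inf")):
                    channel[i][j] = tnow[j]  # reply from Nj via shared memory
                if (channel[i][j] < next_event[i]
                        and requested[i].get(j) != next_event[i]):
                    requested[i][j] = next_event[i]  # request-time "mailbox" write
        for i in nodes:                      # step 2: update Tnow(i)
            min_chan = min(channel[i].values(), default=float("inf"))
            if (min_chan >= next_event[i]
                    or next_event[i] <= global_time + global_window):
                new_t = next_event[i]        # event executes next cycle
            else:
                new_t = max(min_chan, global_time)   # node blocks next cycle
            if new_t != tnow[i]:
                tnow[i], changed = new_t, True
```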
Figure 7. Control flow diagram

Windows relax event synchronization

Global and node windows are used to relax synchronization requirements. The global window allows Tnow(i) to be advanced to the next event time for node Ni if the next event occurs within the global window relative to global time. A node window allows all node events that occur within the node window relative to Tnow(i) to be executed as if they all occurred at time Tnow(i).

Figure 8. Moving time window

The top part of Figure 8 shows events for three synchronized processes being executed on a node. Tnow(i) is equal to global time for this discussion. Time increases horizontally in the diagram. The same time is represented as a vertical line; for example, the dotted line shown represents Tnow(i), currently at global time. The simulated time for each of these events is different, so they must be executed sequentially. The speedup for these events is one. In the bottom part of the figure, the same events are executed with the moving time window discussed in [16]. Events with simulation times within the node window can be executed simultaneously. In the figure, events E1 and E2 have simulation times within the node window relative to time Tnow(i), but E3 does not. The K factor for the node during the execution of events E1, E2, and E3 is two cycles divided by two parallel events during the first cycle plus one event during the second cycle (0.667). If there are two or more physical processors assigned, speedup is 1/K (1.5). A node window is used for each active node, so speedup for the entire simulation could be higher.
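The Figure 8 arithmetic can be reproduced with a short sketch; the event times and processor count below are illustrative.

```python
# Moving time window: events within the node window relative to Tnow(i)
# execute in one cycle. K is cycles divided by events executed, and
# speedup is the minimum of 1/K and the number of physical processors.
def mtw_cycles(event_times, node_window):
    cycles, i = 0, 0
    times = sorted(event_times)
    while i < len(times):
        tnow = times[i]
        while i < len(times) and times[i] <= tnow + node_window:
            i += 1                 # executed concurrently in this cycle
        cycles += 1
    return cycles

events = [0.0, 1.0, 3.5]           # E1, E2, E3 (illustrative times)
cycles = mtw_cycles(events, node_window=2.0)
k = cycles / len(events)           # 2 cycles / 3 events = 0.667
speedup = min(1.0 / k, 2)          # two processors -> 1.5
print(cycles, round(k, 3), speedup)
```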
Speedup Versus Window Size

Figure 9 shows speedup as a function of window size for two scenarios, with four and eight targets attacking during a mission. Each point on the graph represents 300 surface-ship warfare missions. A maximum of twenty-five nodes was used for the eight target scenario, with fewer nodes required for the four target scenario. The speedup obtained with both window sizes equal to zero was 1.10 for four targets and 1.12 for eight targets.

Figure 9. Speedup versus size of moving time window

These low speedups occur because node synchronization in the surface ship warfare simulation forces all events to be executed sequentially except for events that occur at the same time. Using a global window size of 10 seconds and a node window of 2 seconds, the speedup rises to 1.7 for four targets and 2.1 for eight targets. Speedup depends on the allocation of processes to nodes, so the optimum results may not be represented here.

The reason for the low speedup is that the surface-ship warfare processes are closely coupled through target motion. However, this close coupling is different from that found in purely continuous simulations. It is different because, even though target position changes continuously in time, the meaning of target position in relation to the ship changes at discrete times. Discrete event messages from one process to another, indicating such a change, trigger decision making. A decision results in an event chain, connecting events in several processes at the same time. If these processes are being executed on different nodes, synchronization of all nodes involved in the decision, either supplying data or executing events, is required. Thus, close coupling results in increased process synchronization, and because chained events happen at the same time, no lookahead value can be used during synchronization without some loss of causality. Increased synchronization and reduced lookahead tend to decrease speedup.

The surface-ship warfare simulation is typical of the large aerospace and mission analysis models of interest to Rockwell. Thus, the results indicate that conservative distributed simulation appears to have little payoff unless a moving time window is used. This conclusion is consistent with findings discussed in [16]. Less-synchronized queuing network models, however, appear to have more speedup potential using the conservative approach.

The moving time window relaxes synchronization requirements and may degrade causality. Figure 10 shows system effectiveness (fraction of targets killed) as a function of window size for the four and eight target scenarios. If window size had no effect, each curve would be a straight, horizontal line. The curves show small variation for global window sizes under 10 seconds and increasing variation for larger windows. The results show that window sizes under ten seconds result in up to a ten percent variation in system effectiveness. Some of this variation is due to normal random fluctuation, and some is caused by loss of causality. However, a 10-second global window with a 2-second node window seems adequate in this case. Simulation results, however, must be shown to be relatively insensitive to the sizes of the global window and node window used for each sensitivity curve.

Figure 10. System effectiveness versus moving time window
Conclusion

Global reasoning based on a shared state space

Using a moving time window can speed up some of the aerospace simulations of interest; however, causality may be lost to achieve this. The problem with the surface ship model example discussed in this paper is that node 3' synchronizes with the nodes of all targets due to the state variables shared and the decision events executed. Node 3' performs event E10 of process three, performing global decision making for the ship. Ship decision making depends heavily on target positions, which are modeled by continuously time varying state variables in a shared state space.

Moving-object routines [11] are used to model continuously time varying object positions with discrete events. Object vectors are constant until a discrete event changes them. The position of any object can be determined at any time given its position at the last update, the delta time since the last update, and its constant vector. The main purpose of the moving-object routines, however, is to predict the time when two objects will come within a specified range of one another. A specified event is executed when this range is reached. Such an event is called an interaction, and a decision is usually required when an interaction is executed. The moving-object routines maintain proper interaction times even when one or more object vectors change. These routines greatly simplify the programming of object motion and interactions in a simulation.

The moving-object routines for distributed simulation have been implemented to relax the need for absolute time synchronization. Object interactions are scheduled relative to global time, the lowest time of any node. Some interacting nodes can be ahead of global time, but all interactions occur at the correct ranges and global times. Processes running ahead of global time may use old information to make decisions. If relaxation is limited to a window of time relative to global time, then the loss of causality due to processes running ahead can be controlled.

To improve speedup in a simulation based on shared state variables, such as the positions of objects, the processes must be uncoupled. All object processes that interact based on their shared state variables are executed on the same node. Discrete events, implementing a decision, can be executed on the same or different nodes. Shared state variables of the interacting objects simulated on node Ni are updated to Tnow(i) independently of other nodes. When an object starts interacting with another set of objects, it moves to another node. Thus, nodes only synchronize at discrete times, and applying the logical process concept achieves better speedup [10].
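The range prediction at the heart of the moving-object routines reduces, for constant vectors, to solving a quadratic in the relative position and velocity. The following sketch shows that standard closed form; it is an illustration, not code from [11].

```python
# Earliest time at which two objects with constant vectors first close to
# range R: solve |p + v*t| = R for the smallest t >= 0, where p and v are
# the relative position and velocity.
import math

def interaction_time(p, v, R):
    """Return earliest t >= 0 with |p + v*t| == R, or None if never."""
    a = v[0]**2 + v[1]**2
    b = 2.0 * (p[0]*v[0] + p[1]*v[1])
    c = p[0]**2 + p[1]**2 - R*R
    if a == 0.0:
        return 0.0 if c <= 0.0 else None   # no relative motion
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        return None                        # objects never close to range R
    for t in ((-b - math.sqrt(disc)) / (2.0*a),
              (-b + math.sqrt(disc)) / (2.0*a)):
        if t >= 0.0:
            return t                       # schedule the interaction event here
    return None

# illustrative numbers: target closing at 300 m/s from 15 km, range 2 km
print(interaction_time(p=(15000.0, 0.0), v=(-300.0, 0.0), R=2000.0))  # ~43.3 s
```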
Real time continuous simulation

The major problem in achieving speedup when executing a distributed simulation of a system with global decision making based on a shared state space is the close coupling of the processes. If processes can be partitioned into classes of processes that share their state variables for global decision making, and each class is executed on the same node, the processes can be uncoupled. What happens when some of these nodes are executing continuous processes in real time?

A continuous system simulation, among other things, performs numerical integration, solves the difference equations of digital filters, or computes functional values to create a close approximation of the continuous state variables as a function of time. To achieve this close approximation, a solution is computed at short time intervals that are usually of fixed length. The problem is generally described with the following vector differential equation:

X'(t) = F[X(t), U(t), t].

The vector X'(t) is integrated numerically to obtain the vector X(t), which provides the value of each continuous state variable as a function of time; the vector U(t) is the process input, and the vector X(0) is the initial condition. The interval between time steps must be sufficiently small for the integration error to remain within desired bounds. Ideally the equations would be solved analytically, but this is rarely possible.
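As a concrete illustration, the following sketch advances such an equation with a fixed-step, classical fourth-order Runge-Kutta method. The particular F, U, and step size are assumptions chosen for the example.

```python
# Fixed-step integration of X'(t) = F[X(t), U(t), t] with classical RK4.
def rk4_step(f, x, u, t, dt):
    k1 = f(x, u(t), t)
    k2 = f([xi + 0.5*dt*ki for xi, ki in zip(x, k1)], u(t + 0.5*dt), t + 0.5*dt)
    k3 = f([xi + 0.5*dt*ki for xi, ki in zip(x, k2)], u(t + 0.5*dt), t + 0.5*dt)
    k4 = f([xi + dt*ki for xi, ki in zip(x, k3)], u(t + dt), t + dt)
    return [xi + dt*(a + 2*b + 2*c + d)/6.0
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# example system: X' = [x2, -x1] (harmonic oscillator), no input
f = lambda x, u, t: [x[1], -x[0]]
u = lambda t: 0.0
x, t, dt = [1.0, 0.0], 0.0, 0.01   # X(0) and the fixed step size
while t < 1.0:
    x = rk4_step(f, x, u, t, dt)
    t += dt
print(x)                            # approximately [cos(1), -sin(1)]
```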
It is best to avoid continuous system simulation in top-level concept formulation models, if possible. As the design proceeds, however, evaluation of detailed system structure and operation becomes desirable. A combined discrete event and continuous system simulation allows analysis of detailed system structure and operation in areas of specified interest, while the remainder of the system is modeled with discrete event processes. Later in a design project, hardware in the loop may be combined with discrete and continuous simulation operating in real time.

Continuous system simulation can be seen as a special case of discrete event simulation. In OpEM, continuous simulation is implemented by what is called a semicontinuous state. Figure 11 describes combined simulation operation. A discrete event E1 begins combined simulation by executing the Begin_Con_Sim routine, which places a consim record on the consim list. A consim record describes a continuous process to be executed. Event E1 also schedules semicontinuous event E2, which decides when Process 2 ends. During continuous simulation, the state variables X(t) are updated at a specified integer multiple of the smallest delta time. Continuous Process 2 is updated at the smallest delta time. Event E5 begins continuous Process 3, which is updated at twice the smallest delta time. All continuous processes are updated whenever a discrete event occurs, as well as at the specified intervals. Thus, continuous processes remain synchronized with both the discrete events and the smallest delta time, allowing either real-time or non-real-time simulation. Discrete event E2 ends continuous Process 2 by executing the End_Con_Sim routine, which removes the consim record from the consim list.

Figure 11. Discrete event and continuous simulation
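The consim-list mechanism can be sketched as follows. The routine names follow the text, but the implementation details are assumptions rather than OpEM source.

```python
# Sketch of the consim list: each record updates its continuous process at
# an integer multiple of the smallest delta time; update_continuous is
# also called whenever a discrete event occurs, keeping continuous and
# discrete parts aligned.
SMALLEST_DT = 0.01
consim_list = []   # active continuous processes

def begin_con_sim(update_fn, multiple):        # cf. Begin_Con_Sim
    record = {"update": update_fn, "multiple": multiple, "last_t": 0.0}
    consim_list.append(record)
    return record

def end_con_sim(record):                       # cf. End_Con_Sim
    consim_list.remove(record)

def update_continuous(t_now):
    """Advance every continuous process to t_now in steps of its own
    multiple of the smallest delta time."""
    for rec in consim_list:
        dt = rec["multiple"] * SMALLEST_DT
        while rec["last_t"] + dt <= t_now:
            rec["update"](rec["last_t"], dt)   # e.g., one rk4_step
            rec["last_t"] += dt

# Process 2 updates every smallest delta time; Process 3 at twice that
p2 = begin_con_sim(lambda t, dt: None, multiple=1)
p3 = begin_con_sim(lambda t, dt: None, multiple=2)
update_continuous(0.1)
end_con_sim(p2)
```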
If continuous Processes 2 and 3 share state variables, it is easy to see that they should not be executed on separate processors. The synchronization and message flow required would degrade performance. Therefore, the OpEM distributed simulation methodology groups closely coupled discrete-event or continuous processes and executes them on the same processor. The result of node Ni executing continuous processes in real time is that global time and node simulation time Tnow(i) must be synchronized with real time. Thus, the node window for node Ni must be equal to delta time, although other node windows for purely discrete event processes can be greater. Also, global time would advance in delta time increments. Experiments with the surface ship model indicate that some decrease in speedup results from having some nodes synchronized to global time. The fine grain execution of continuous processes may also decrease speedup further due to increased synchronization overhead. When the amount of continuous processing reaches some critical level yet to be determined, executing combined discrete event and continuous processes in real time may be speeded up more by using a variable time step method instead of the logical process concept described in this paper. Another possibility is time warp within an MTW (an optimistic distributed simulation approach), if an efficient way to deal with the large shared state space can be found. Further research into distributed simulation of closely coupled processes will determine the best approach.

References
1. Ashby, W.R. (1956). An Introduction to Cybernetics. Chapman & Hall, London, U.K.
2. Ashby, W.R. (1960). Design for a Brain: The Origin of Adaptive Behavior. Chapman & Hall, London, U.K.
3. Bain, W.L., and D.S. Scott (1988). "An Algorithm for Time Synchronization in Distributed Discrete Event Simulation," Distributed Simulation, Simulation Councils, Inc., San Diego, pp. 30-33.
4. Chandy, K.M., and J. Misra (1981). "Asynchronous Distributed Simulation Via a Sequence of Parallel Computations," Communications of the ACM, Vol. 24, No. 4 (April 1981).
5. Clymer, J.R. (1989). "OpEM Expert System Controller," Simulation and AI, 1989, The Society for Computer Simulation International, San Diego, CA, Vol. 20, No. 3, pp. 20-26.
6. Clymer, J.R. (1990a). Systems Analysis Using Simulation and Markov Models. Prentice-Hall, Englewood Cliffs, N.J.
7. Clymer, J.R., P.D. Corey, and N. Nili (1990b). "Operational Evaluation Modeling," SIMULATION, The Society for Computer Simulation International, San Diego, CA, December 1990, pp. 261-270.
8. Clymer, J.R. (1990c). "System Design Using OpEM Inductive/Adaptive Expert System Controller," IASTED International Journal of Modeling & Simulation, Vol. 10, No. 4, pp. 129-136.
9. Clymer, J.R., P.D. Corey, and J. Gardner (1992). "Discrete Event Fuzzy Airport Control," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 22, No. 1, January 1992.
10. Conklin, D., J. Cleary, and B. Unger (1990). "The Shark's World," Distributed Simulation, Simulation Councils, Inc., San Diego, pp. 157-160.
11. Corey, P.D., and J.R. Clymer (1991). "Discrete Event Simulation of Object Movement and Interactions," SIMULATION, The Society for Computer Simulation International, San Diego, CA, March 1991, pp. 167-174.
12. Cota, B.A., and R.G. Sargent (1990). "A Framework for Automatic Lookahead Computation in Conservative Distributed Simulations," Distributed Simulation, Simulation Councils, Inc., San Diego, pp. 56-59.
13. Fujimoto, R.M. (1988). "Performance Measurements of Distributed Simulation Strategies," Distributed Simulation, Simulation Councils, Inc., San Diego, pp. 14-20.
14. Fujimoto, R.M. (1990). "Performance of Time Warp Under Synthetic Workloads," Distributed Simulation, Simulation Councils, Inc., San Diego, pp. 23-28.
15. Reed, D.A., and A.D. Malony (1988). "Parallel Discrete Event Simulation: The Chandy-Misra Approach," Distributed Simulation, Simulation Councils, Inc., San Diego, pp. 8-13.
16. Sokol, L.M., D.P. Briscoe, and A.P. Wieland (1988). "MTW: A Strategy for Scheduling Discrete Simulation Events for Concurrent Execution," Distributed Simulation, Simulation Councils, Inc., San Diego, pp. 34-42.
17. Zeigler, B. (1984). Multifaceted Modeling and Discrete Event Simulation. Academic Press, London.
JOHN R. CLYMER is an associate professor of electrical engineering at California State University, Fullerton (CSUF) and has consulted for Rockwell International and others in the area of systems engineering. Dr. Clymer has worked as a consultant in the Rockwell International AS&ASD mission analysis unit and the STSD simulation laboratory during the past six years. Operational Evaluation Modeling (OpEM), developed by Dr. Clymer during the last twenty-two years, has been used by these Rockwell organizations to perform concept formulation, architecture tradeoffs, and top level mission analysis and simulation.

DANIEL HERNANDEZ is a principal engineering specialist in the Aerospace Simulation and System Test Center of Rockwell International, where he works in the research and development of real-time simulation test beds for the analysis, design, and verification of large aerospace and defense systems. He received a BS in electrical engineering from the University of Puerto Rico in 1962 and an MS in electrical engineering from California State University, Long Beach in 1970. He is a member of The Society for Computer Simulation, the IEEE Computer Society, and the American Institute of Aeronautics and Astronautics.