Published in the 3rd Intl. Workshop on Distributed Interactive Simulation and Real Time Applications (DIS-RT'99). © 1999 IEEE. Personal use of this material is permitted. However, permission to reprint or republish this material for advertising or promotional purposes, or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE.

Ruminations on the Implications of Multi-Resolution Modeling on DIS/HLA

Radharamanan Radhakrishnan and Philip A. Wilsey
Dept. of ECECS, University of Cincinnati, PO Box 210030, Cincinnati, OH 45221-0030

Abstract

With the advent of standardization efforts such as the High Level Architecture (HLA) and Distributed Interactive Simulation (DIS), inter-operability of military simulation models has emerged as the chief design requirement. By enforcing strict conformance to DIS and/or the HLA, the Defense Modeling and Simulation Office (DMSO) has so far been able to "mix-and-match" different simulation models and frameworks to satisfy the military's simulation needs. However, by linking together legacy simulations (simulations previously designed to operate independently), the combined simulation must now correctly handle multiple levels of detail in the interacting simulation entities. In addition, a given entity can itself be represented in several ways, each with a different level of detail (also referred to as resolution or fidelity). A natural question is why a model requires different resolutions: why isn't one level of resolution sufficient for all the needs of the simulation? There is no simple answer. Depending on the model, what it is used for, and what it interacts with, it may require several different resolutions. One justification is the way humans think and comprehend information: in general, humans reason at different levels of detail and therefore require models that reflect their chosen resolutions. By embedding their knowledge into simulation models, simulation designers try to build a virtual world that mimics the real world. In this paper, we survey and review previous attempts at multi-resolution modeling and discuss its implications for the DIS, HLA, and parallel discrete-event simulation (PDES) communities.

1 Introduction

One of the most important scientific challenges associated with today's modeling and simulation domain is learning how to design models or model families such that designers can move from one level of resolution to another as needed. Unfortunately, when simulation designers "mix-and-match" simulations that involve multiple levels of resolution, they often encounter inconsistent results. These inconsistencies can be attributed to erroneous or insufficient correlation between interacting attributes that are maintained at different levels of resolution. For example, a simulation model of a tank battalion may have characteristics (attributes) such as current GPS coordinates, average speed, number of vehicles, and state of readiness. At a lower abstraction level, the same battalion can be modeled as a group of individual tanks or trucks, each of which has attributes such as current GPS coordinates, top speed, fuel level, gross weight, and so on. If the battalion model and its constituent tanks are simulated together in one simulation, all interactions with the battalion abstraction and its constituents over all (overlapping) periods of time must be handled correctly.

Difficulties in defining different resolutions for the same modeled entity typically arise when low resolution entities (LREs), for example a tank battalion, interact with high resolution entities (HREs) such as individual tanks. The common solution is to dynamically change the resolution of an LRE (or HRE) to match the resolution of the entities it encounters. This dynamic change of resolution is called aggregation (HREs to an LRE) or disaggregation (an LRE to HREs). The problem of linking simulations at different levels of resolution is known as the aggregation-disaggregation problem [35]. Figure 1 illustrates the different forms of interaction between LREs and HREs. More specifically, building new models or model families in which users can readily change the resolution at which phenomena are treated is called variable-resolution modeling (VRM); cross-resolution modeling (CRM) is linking existing models with different resolutions [12]. Collectively, VRM and CRM techniques are referred to as multi-resolution modeling (MRM). Although the dynamic aggregation-disaggregation approach is relatively simple to implement, it has several disadvantages. Problems such as chain disaggregation (cascading disaggregation resulting in an explosion in the number of objects in the simulation), network flooding (due to the increased message traffic), transition latency, and mapping problems between levels [35] plague the aggregation-disaggregation approach, and hence it is not generally considered a complete solution. Thus, a more unified approach to simulation model design is required, and the sooner modelers begin to understand this problem, the sooner a solution can be found.

Figure 1. Interactions between Low Resolution Entities (LREs) and High Resolution Entities (HREs): LRE-to-LRE, LRE-to-HRE, and HRE-to-HRE communication among tank battalions (LREs) and individual tanks and trucks (HREs).

In this paper, we survey and briefly review several research studies on multi-resolution modeling (MRM). The remainder of the paper is organized as follows. Section 2 reviews the research on MRM carried out by the defense community. Section 3 reviews the contributions of university researchers to furthering the state of the art in MRM techniques. Given this background, Section 4 summarizes our initial efforts at incorporating MRM techniques in our non-military simulations. Finally, Section 5 presents some conclusions and directions for future research.
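The aggregation-disaggregation problem can be made concrete with a small sketch. The attribute names below follow the tank-battalion example in the introduction but are otherwise hypothetical (they come from no DIS/HLA standard); the point is that aggregation discards per-vehicle detail, so disaggregation must invent plausible values, which is where inconsistencies creep in:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Tank:                      # high resolution entity (HRE)
    position: tuple              # current GPS coordinates (lat, lon)
    top_speed: float             # km/h
    fuel_level: float            # litres

@dataclass
class Battalion:                 # low resolution entity (LRE)
    position: tuple              # centroid of the constituent vehicles
    average_speed: float
    num_vehicles: int

def aggregate(tanks):
    """HREs -> LRE: derive battalion attributes from individual tanks."""
    lat = mean(t.position[0] for t in tanks)
    lon = mean(t.position[1] for t in tanks)
    return Battalion((lat, lon), mean(t.top_speed for t in tanks), len(tanks))

def disaggregate(b):
    """LRE -> HREs: exact positions and fuel levels were discarded by
    aggregation, so plausible values must be invented (here, an assumed
    default fuel level) -- the root of the consistency problems above."""
    return [Tank(b.position, b.average_speed, fuel_level=0.5)
            for _ in range(b.num_vehicles)]

tanks = [Tank((39.1, -84.5), 60.0, 900.0), Tank((39.2, -84.6), 70.0, 850.0)]
b = aggregate(tanks)
print(b.num_vehicles, b.average_speed)   # 2 65.0
```

Round-tripping through `aggregate` and `disaggregate` does not recover the original tanks, which is exactly why chain disaggregation and mapping problems arise when LREs and HREs interact repeatedly.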

2 Research by the Defense Community

Dr. Paul Davis and other researchers at RAND's National Defense Research Institute have been instrumental in advocating the use of MRM techniques in DoD-related simulation efforts. Over the years, Davis has led several research studies (in collaboration with other researchers) [9, 11, 12, 35] evaluating and documenting existing MRM methods, as well as developing new MRM techniques, in an effort to encourage model developers to use MRM. As most of these studies were funded by the DoD, the focus was restricted to military-specific models. Initial attempts at using MRM techniques resulted in the definition of the hierarchical variable-resolution modeling (IVHR) approach. Using object-oriented and software engineering techniques such as hierarchical data-flow diagrams, Davis et al. formulated an approach in which variable-resolution models are constructed in a hierarchical fashion. Davis and others found that despite its limited applicability [12], the IVHR approach and related methods can be intuitive and useful.

More recently, with the large-scale adoption of DIS and the HLA by simulation modelers and tool developers, a larger number of legacy simulations were incorporated into large joint simulations. With more interactions taking place between legacy simulations, the aggregation-disaggregation problem began appearing more often. This led to an increased interest within the defense modeling community in addressing these issues, and ultimately to the formation of a team of experts (representatives from RAND, DMSO, and the University of Virginia) to re-examine the MRM problem. Specifically, Reynolds et al. have proposed a new approach to multi-resolution modeling based on their notion of a multiple resolution entity (MRE) [27, 28, 35]. By requiring each entity to maintain separate state information (an attribute set) for each level of resolution at which it interacts, they enforce logical consistency among corresponding attributes at the various levels of resolution, ensuring that entities at different resolutions interact in a correct and consistent manner. The costs of maintaining such multi-resolution models and their effect on simulation performance have also been studied [28]. Summarizing the results from these various efforts, a recent research report [10] made the following major observations:

- There is a conflict between designing for "generality" and designing for multi-resolution. Designing for MRM involves resolving additional trade-offs, such as between level of fidelity and speed; that is, MRM entails modeling from several different viewpoints, which is not generally attempted by typical system designers.
- Normal modeling practice is to abstract away the details of lower levels and avoid structural details, as this is seen as the natural way of extending the flexibility of the software package. In contrast, an MRM design would probably want to exploit structural details so as to increase analytical agility.
- Normal modeling practice avoids introducing aggregate variables unless specifically requested (due to processing overheads). MRM entails identifying and exploiting good aggregate variables from the outset of the design.
- MRM design may involve support for asking "what if" questions of an aggregate nature. Typical simulators and simulation models do not support this kind of aggregate query.
- Finally, the biggest distinction between normal modeling and MRM is that the former follows a strict bottom-up design philosophy, while the latter involves modeling bottom-up, top-down, and sideways.

In summary, the following conclusions were drawn. While the theory and practice of MRM are still in their infancy, there are many objectives that can be accomplished today. Research ranging from fundamental work on analytical methods for MRM to applied development work on improving design tools, including computational tools to assist MRM, is needed. Design tools that support MRM must be made available to the modeling and simulation community in order to put MRM techniques in the hands of modelers.

3 Research by Universities

While Davis and others studied MRM techniques for military-specific simulation models, university-based researchers have tried to address the more general problem of model abstraction. In addition, they have focused on the applicability of MRM-based techniques in other domains [13, 16, 17, 22, 41, 42, 43]. Specifically, Zeigler et al. [22, 41, 42] at the University of Arizona have concentrated on developing object-oriented design techniques that deal with some aspects of variable-resolution modeling, primarily by enforcing hierarchical design of simulation objects. In particular, Zeigler's Systems Entity Structure/Model Base (SES/MB) framework has been shown to provide a workable foundation for model base management in advanced simulation environments and workbenches [42]. SES-based model base management has been implemented in the DEVS-Scheme simulation environment [21]. The environment enforces a hierarchical, modular method of building models in a systems-oriented manner that is not possible with conventional simulation languages [40]. In addition, the environment supports hierarchical structuring of a family of models, pruning the structure to a reduced version, and transforming the reduced version into a simulation model by synthesizing component models in the model database [43].

Fishwick et al. [14, 15, 16, 17] have been trying to address the generic problem of model abstraction. Given today's complex systems, modelers require efficient ways of abstracting their models. Although hierarchy can be used to simplify and organize a complex system (as originally suggested by Zeigler and Davis), some unresolved issues remain. Specifically, the problem with hierarchical modeling is that the system components at each level depend on the next lower level, so it is not possible to run each level independently (which is a prerequisite for MRM). Fishwick presents a solution to this problem by augmenting hierarchical modeling so that abstraction can take place on two levels: structural and behavioral [23, 24]. This is an important distinction, as it facilitates the use of semi-automated methods for viewing and analyzing complex systems at different levels of abstraction. Structural abstraction is the process of organizing the system hierarchically using refinement and homomorphisms. Refinement is the process of refining a model into more detailed models (higher resolution) of the same type (homogeneous refinement) or of different types (heterogeneous refinement), while a homomorphism is a mapping that preserves the behavior of the lower-level system under the set mappings [14]. Using structural abstraction, a modeler can construct an abstraction hierarchy with simple model types at first, refining them with more complex model types later. This type of iterative refinement (which results in multiple abstractions of the system) is well understood and widely used in hardware design, synthesis, and simulation [1, 2, 3]. Hardware description languages such as VHDL [19] and VHDL-AMS [20] permit the modeler to iteratively refine the abstraction hierarchy of digital/analog circuit models. After the hierarchy is created, each level in the hierarchy can be executed alone. This is where behavioral approaches are employed. Behavioral abstraction focuses only on behavioral equivalence, without structural preservation: below the structural abstraction, each component is a black box with no detailed internal structure, and behavioral abstraction represents the black box by approximating the behavior of lower-level system components. Once again, this type of behavioral abstraction is widely employed in hardware description languages. Specifically, VHDL [19] permits the modeler to define both a structural architecture and a behavioral architecture for the same entity; either architecture may be used to define the functionality of the top-level entity. This dual abstraction method has been implemented by Fishwick and others in the Multimodeling Object-Oriented Simulation Environment (MOOSE) [6]. Given this type of framework for organizing models according to abstraction levels, the question that remains is that of selecting an optimal abstraction level under given time and accuracy constraints. For example, for applications with significant time constraints, using simpler (lower resolution) models that are computationally less complex (at the expense of reduced accuracy) is desirable.
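A minimal sketch of such a selection policy follows. The cost and error figures, and the model names, are hypothetical (this is not MOOSE's mechanism); the sketch simply illustrates trading accuracy for run time under a deadline:

```python
# Each candidate model of the same entity carries an estimated run time
# (cost) and an expected error; higher resolution means higher cost and
# lower error. Values below are illustrative only.
MODELS = [
    {"name": "battalion", "cost": 1.0,  "error": 0.20},   # lowest resolution
    {"name": "platoon",   "cost": 5.0,  "error": 0.08},
    {"name": "tank",      "cost": 25.0, "error": 0.02},   # highest resolution
]

def select_model(time_budget, max_error):
    """Pick the cheapest model that meets the accuracy constraint within
    the time budget; otherwise the most accurate affordable model."""
    affordable = [m for m in MODELS if m["cost"] <= time_budget]
    if not affordable:
        raise ValueError("no model fits the time budget")
    accurate = [m for m in affordable if m["error"] <= max_error]
    # Prefer the cheapest sufficiently accurate model; fall back to the
    # most accurate one we can afford (best effort under the deadline).
    return min(accurate, key=lambda m: m["cost"]) if accurate \
        else min(affordable, key=lambda m: m["error"])

print(select_model(10.0, 0.10)["name"])   # platoon
print(select_model(2.0, 0.10)["name"])    # battalion
```

A real environment would estimate cost and error from profiling and validation data rather than fixed tables, but the selection logic, resolving the fidelity/speed trade-off explicitly, is the same.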

4 Some Lessons Learned

The issues of aggregation and disaggregation are not confined to the defense modeling and simulation community; they affect the modeling and simulation community in general. In this section, we describe a few case studies, carried out by researchers at the University of Cincinnati, that relate to aggregation and disaggregation.

As mentioned before, the notion of model abstraction (structural and behavioral) is widely employed in the design of hardware circuits and systems. Hardware modeling languages such as VHDL [19] allow the modeler to define models at different levels of abstraction. In addition, model components can be specified at different levels of abstraction, allowing components of different resolution to interact. One example of where this type of abstraction is widely used is in the simulation of logic circuits. As circuit technology has progressed rapidly, simulation technology has struggled to keep up with the complexity. In an effort to address this capacity and performance gap, parallel simulation techniques have been applied to circuit/logic simulation. Unfortunately, the application of parallel simulation techniques to speed up circuit simulation has met with limited success. This is because event granularities are very small and, in general, each event processed will generate one or more events that must be communicated to other parallel processes, resulting in a very high communication-to-computation ratio [4, 37]. Furthermore, several studies have shown that, contrary to popular belief, only limited parallelism actually exists in logic simulations [4, 36], even when the logic circuit is scaled up in size [5]. Thus, parallel logic simulators have yet to produce acceptable performance figures. To address this concern, researchers at the University of Cincinnati have developed an aggregation scheme called process combination [26], wherein several parallel processes from a logic circuit are compiled into a single combined Logical Process (in parallel simulation terminology, a Logical Process (LP) is one of the parallel objects in the simulation). Thus, instead of n parallel LPs executing events and exchanging event information, we statically compile the objects together to reduce the number of LPs executing in parallel. The motivations for this are (i) to reduce the number of communicating objects, (ii) to increase the event (computation) granularity, and (iii) to realize a parallel simulator that is effectively optimized for executing on small-scale SMP workstations. A general theory of manipulating and combining VHDL processes was developed [25] and then applied to combine processes and speed up the simulation. While the primary motivation is to speed up logic simulation, the theory can merge arbitrary processes, irrespective of whether they represent elements of logic circuits or elements from other abstraction levels. However, process combination is currently done statically (at compile time); methods to do it dynamically (at run time) are under investigation.

In another experiment, an unsynchronized simulation kernel was developed to study the effect of ignoring causal violations in parallel discrete-event simulations [33, 38]. Specifically, the aim of our experiment was to determine whether a synchronized simulation could interact with an unsynchronized simulation and still correctly simulate the desired goal. We found that there are advantages in ignoring causality in certain simulations: (i) faster simulation execution times (we observed a speedup of 8 over a Time Warp synchronized simulation); (ii) memory consumption is a fraction of what is needed for Time Warp simulation (as states are not saved); (iii) data obtained from an unsynchronized simulation closely follow the data obtained from a Time Warp synchronized simulation (the error rate is less than 2% on average for our experiments); and (iv) no change in the modeling paradigm is required. This type of time-based resolution change can be dynamically triggered, and we envision its applicability in both the DIS and HLA runtime infrastructure (RTI) [7, 8] framework [18].

In yet another experiment, we developed a software environment for modeling and simulating large-scale networks [32, 33, 34]. Modeling these large and complex networks has necessitated a corresponding improvement in the simulation tools used. The growing complexity of software simulation systems requires the reuse of software components, since reinvention of technology is not desirable. Although considerable research has been conducted in the areas of composable systems and reusable components [31], the software models developed for simulation-based verification and analysis are seldom reused. There are several reasons for this [29]. Some of the dominant hurdles are: (i) models developed for simulation by manufacturers and network designers are confidential (intellectual property issues); (ii) the models may not be portable or inter-operable [39] (lack of a standard modeling language); and (iii) the models may not be readily available or accessible [30]. Figure 2 presents a distributed design environment where these problems may be faced. For example, consider the development of a large-scale air/space communications system, as pictured in Figure 2: the components that make up the system are distributed across a number of organizations, and each organization may be reluctant to release its intellectual property for integration and synthesis/simulation at higher levels.

Figure 2. Distributed Architecture of a Communications System (a design hierarchy spanning a DoD system model, defense contractors, sub-contractors, semiconductor and IP vendors, with both local and remote synthesis/simulation).
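The process-combination idea described in this section can be sketched as follows. This is only a toy illustration of the static aggregation concept from [26], not the VHDL process theory of [25]: several fine-grained logical processes are folded into one combined LP, so events between them become local function calls instead of inter-process messages:

```python
# Each logical process (LP) is modeled here as a pure function from its
# inputs to its output. Combining LPs composes them into one LP, so the
# intermediate "events" never cross a process boundary.

def and_gate(a, b):       # fine-grained LP 1
    return a & b

def not_gate(x):          # fine-grained LP 2
    return 1 - x

def combine(*stages):
    """Statically fold a pipeline of LPs into a single combined LP."""
    def combined_lp(*inputs):
        value = stages[0](*inputs)
        for stage in stages[1:]:
            value = stage(value)   # local call, no event exchange
        return value
    return combined_lp

# A NAND built from two LPs, then combined into a single LP:
nand = combine(and_gate, not_gate)
print([nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [1, 1, 1, 0]
```

The combined LP processes the same events with the same results, but the message between the AND and NOT stages has been eliminated, which is precisely the granularity-increasing effect process combination aims for.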

Researchers have identified that the World Wide Web (WWW) provides an excellent backbone for sharing information and data [15]. However, the complex interactions between components in modeling and simulation [32] render the raw WWW services insufficient. To address these issues, a distributed, rapid prototyping environment for designing, analyzing, and deploying networked systems is required. In an effort to build a tool that eases the design, verification, analysis, documentation, and deployment of computer networks, a web-based framework for network engineering called FWNS [32] was developed at the University of Cincinnati. Although FWNS provides a practical solution to the model re-use problem and takes us closer to Fishwick's digital objects ideology [16], further research is needed to handle the diversity of simulation models. Different synchronization strategies are used by different researchers, and a universal mechanism enabling these strategies to co-exist in the same simulation is needed. More complex models at various levels of abstraction need to be developed to study the effectiveness of the framework. The framework for web-based network simulation provides an excellent infrastructure that will prove to be a stepping stone toward an ultra-scale heterogeneous simulation machine that is fully distributed over the Internet.

So how do the issues of multi-resolution modeling affect the DIS and HLA communities? As more and more legacy simulations are networked using the DIS or HLA protocols, it becomes necessary to rethink the existing modeling standards and practices. As different aggregate and disaggregate entities interact, there are bound to be innumerable occurrences of inconsistency in the simulation. Instead of the current approach of waiting until a problem surfaces, simulation tool developers should start investigating methods to support multi-resolution modeling. In addition, techniques are required to convert extant legacy simulations to conform to a multi-resolution paradigm. Already there is increased interest within the defense modeling and simulation community in addressing the multi-resolution modeling problem (as evinced by the recent broad agency announcements in this area). In addition, different research groups (defense and non-defense related) have started funding and addressing the model abstraction problem [10]. The DIS and HLA communities can contribute to the development of MRM techniques by studying and developing their models to be compatible with other multi-resolution models. As we have illustrated in this section, the aggregation-disaggregation problem is not restricted to the DIS/HLA world; it affects the general simulation and modeling community. The PDES community also needs to study how their parallel and distributed models interact at different resolutions. One reason the PDES community has not confronted this problem earlier is that there is not much interaction between model developers in the PDES community. One way to overcome this is to adhere to Fishwick's notion of "digital objects" and put models in an accessible location (e.g., a web site) where other researchers can use them.
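One concrete way tool developers might support multi-resolution modeling is a wrapper that holds several resolution-specific models of an entity behind one stable interface. The sketch below is hypothetical (no such DIS/HLA facility exists; class and method names are our own invention) and only illustrates the insulation idea:

```python
class ResolutionWrapper:
    """Holds resolution-specific models of one entity and exposes a single
    stable interface; other participants never see which resolution is
    currently active underneath."""
    def __init__(self, models):
        self._models = models          # e.g. {"low": ..., "high": ...}
        self._active = next(iter(models))   # default: first registered

    def set_resolution(self, level):
        if level not in self._models:
            raise KeyError(f"no model at resolution {level!r}")
        self._active = level           # swap happens beneath the interface

    def step(self, dt):
        # Participants always call step(); the resolution is hidden.
        return self._models[self._active].step(dt)

class CoarseBattalion:                 # aggregate (LRE) model
    def step(self, dt):
        return f"coarse update over {dt}s"

class TankLevelBattalion:              # disaggregated (HRE) model
    def step(self, dt):
        return f"per-tank update over {dt}s"

bp = ResolutionWrapper({"low": CoarseBattalion(), "high": TankLevelBattalion()})
print(bp.step(1.0))          # coarse update over 1.0s
bp.set_resolution("high")
print(bp.step(1.0))          # per-tank update over 1.0s
```

Wrapping a legacy model this way is one path toward the multi-resolution paradigm advocated above: existing code becomes one entry in the model set rather than the sole representation of the entity.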

5 Conclusions

The days of isolated software development are quickly fading away as the Internet, the World Wide Web (WWW), and corporate intranets spread across the computing landscape. Modern software developers must fit the systems they develop into a complex information grid consisting of servers and clients interconnected by a plethora of local and wide area networks. This allows each provider (server) or consumer (client) to use tightly focused expertise to analyze and combine elements in order to produce or consume ever more advanced products. However, the growing size and complexity of electronic systems requires designers to reuse existing components. Multiple design teams within the same organization may develop components for the same system, and other components may be intellectual property provided by a third party. In either case, the parties participating in the design must collaborate in order to fulfill the design goals. Furthermore, these parties may be geographically dispersed and may have sensitive design information to protect. Moreover, current virtual prototyping languages do not fully support methodologies for hardware/software co-design and multi-resolution modeling. Heterogeneous models also create problems during modeling and simulation, which surface when attempting to connect these models to form a virtual prototype of the system as a whole. Given the diversity of simulation models and resolution levels, the chief problems prohibiting fast and effective virtual design, prototyping, and reuse of multi-resolution simulation models are the absence of a uniform modeling paradigm for building multi-resolution models and the absence of a multi-resolution backplane for dynamically selecting, from a set of models of the same entity at different resolutions, a model that meets specific resolution requirements. This stems from the fact that current design tools rarely support multi-resolution (variable-resolution or cross-resolution) modeling; indeed, such tools are essentially non-existent. In conclusion, a simulation design environment for building large-scale heterogeneous, multi-resolution simulations requires two major capabilities: (a) a design philosophy for expressing the structural and behavioral properties of (extant and new) simulation models at different levels of resolution, such that the same model set can be used in multiple applications; and (b) a multi-resolution backplane for dynamically selecting a specific model resolution for an entity from an underlying model set. The purpose of this backplane is to insulate the other participants in a simulation from the details of the entity's abstractions: even though the resolution of the entity may change underneath the backplane, the other participants still see the same interface presented by the backplane.

Acknowledgments

The authors gratefully acknowledge the suggestions and contributions of Timothy McBrayer, Dhananjai Madhava Rao, Karthik Swaminathan, and Narayanan V. Thondugulam. We also acknowledge the support provided by the Advanced Research Projects Agency under contracts J-FBI-93-116 and DABT63-96-C-0055.

References

[1] P. Ashenden. The Designer's Guide to VHDL. Morgan Kaufmann Publishers, Inc., San Mateo, CA, 1996.
[2] P. J. Ashenden, P. A. Wilsey, and D. E. Martin. SUAVE: Extending VHDL to improve modeling support. IEEE Design and Test of Computers, 15(2):34–44, April–June 1998.
[3] P. J. Ashenden, P. A. Wilsey, and D. E. Martin. SUAVE: Object-oriented and genericity extensions to VHDL for high-level modeling. In Proceedings of the Forum on Design Languages (FDL'98), pages 109–118, September 1998.
[4] M. L. Bailey. How circuit size affects parallelism. IEEE Transactions on Computer-Aided Design, 11(2):208–215, February 1992.
[5] M. L. Bailey. A time-based model for investigating parallel logic-level simulation. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 11(7):816–824, July 1992.
[6] R. M. Cubert, T. Goktekin, and P. A. Fishwick. MOOSE: Architecture of an object-oriented multimodeling simulation system. In Proceedings of Enabling Technology for Simulation Science (SPIE AeroSense'97), April 1997.
[7] J. S. Dahmann. The High Level Architecture and beyond: Technological challenges. In Proceedings of the 13th Workshop on Parallel and Distributed Simulation (PADS'99), pages 64–70, May 1999.
[8] J. S. Dahmann, R. M. Fujimoto, and R. M. Weatherly. The Department of Defense High Level Architecture. In Proceedings of the 1997 Winter Simulation Conference, pages 142–149, December 1997.
[9] P. K. Davis. An Introduction to Variable-Resolution Modeling and Cross-Resolution Model Connection. Research report R-4252-DARPA, RAND, 1993.
[10] P. K. Davis and J. H. Bigelow. Experiments in Multiresolution Modeling (MRM). Research report MR-1004-DARPA, RAND, 1998.
[11] P. K. Davis and R. J. Hillestad, editors. Proceedings of the Conference on Variable Resolution Modeling, CF-103-DARPA. RAND's National Defense Research Institute, May 1992.
[12] P. K. Davis and R. J. Hillestad. Families of models that cross levels of resolution: Issues for design, calibration and management. In Proceedings of the 1993 Winter Simulation Conference. ACM, 1993.
[13] F. Eddy. Object-oriented modeling: Holistic and variable-resolution views. In P. K. Davis and R. J. Hillestad, editors, Proceedings of the Conference on Variable-Resolution Modeling, CF-103-DARPA, pages 44–51, 1992.
[14] P. A. Fishwick. Simulation Model Design and Execution: Building Digital Worlds. Prentice Hall, Englewood Cliffs, NJ, 1995.
[15] P. A. Fishwick. Web-based simulation: Some personal observations. In Proceedings of the 1996 Winter Simulation Conference, pages 772–779, December 1996.
[16] P. A. Fishwick. An architectural design for digital objects. In D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan, editors, Proceedings of the 1998 Winter Simulation Conference, volume 1, pages 359–365, December 1998.
[17] P. A. Fishwick. Issues with web-publishable digital objects. In Proceedings of the SPIE AeroSense Conference, April 1998.
[18] R. Fujimoto. Exploiting temporal uncertainty in parallel and distributed simulations. In Proceedings of the 13th Workshop on Parallel and Distributed Simulation (PADS'99), pages 149–156, May 1999.
[19] IEEE Standard VHDL Language Reference Manual. IEEE, New York, NY, 1993.
[20] IEEE Computer Society. IEEE Draft Standard VHDL-AMS Language Reference Manual, 1997.
[21] T. G. Kim, C. Lee, E. R. Christensen, and B. P. Zeigler. System entity structure and model base management. IEEE Transactions on Systems, Man, and Cybernetics, 1991.
[22] T. G. Kim and B. P. Zeigler. The DEVS formalism: Hierarchical, modular system specification in an object-oriented framework. In Proceedings of the 1987 Winter Simulation Conference, pages 559–566, 1987.
[23] K. S. Lee and P. A. Fishwick. Dynamic model abstraction. In Proceedings of the 1996 Winter Simulation Conference, pages 764–771, December 1996.
[24] K. S. Lee and P. A. Fishwick. A semi-automated method for dynamic model abstraction. In Proceedings of Enabling Technology for Simulation Science (SPIE AeroSense'97), April 1997.
[25] T. McBrayer. Combination of Parallel Tasks to Speed Up Optimistic Simulation. PhD thesis proposal, University of Cincinnati, May 1996.
[26] T. McBrayer and P. A. Wilsey. Process combination to increase event granularity in parallel logic simulation. In Proceedings of the 9th International Parallel Processing Symposium, pages 572–578, April 1995.
[27] A. Natrajan and A. Nguyen-Tuong. To disaggregate or not to disaggregate, that is not the question. Technical Report CS-95-18, University of Virginia, 1995.
[28] A. Natrajan, P. F. Reynolds, and S. Srinivasan. MRE: A flexible approach to multi-resolution modeling. In Proceedings of the 11th Workshop on Parallel and Distributed Simulation (PADS'97), pages 156–163, June 1997.
[29] E. H. Page, S. P. Griffin, and L. S. Rother. Providing conceptual framework support for distributed web-based simulation within the High Level Architecture. In Proceedings of SPIE: Enabling Technologies for Simulation Science II, April 1998.
[30] E. H. Page and R. E. Nance. Parallel discrete event simulation: A modeling methodological perspective. In Proceedings of the ACM/IEEE/SCS 8th Workshop on Parallel and Distributed Simulation, pages 88–93, December 1994.
[31] J. Penix, D. Martin, P. Frey, R. Radhakrishnan, P. Alexander, and P. A. Wilsey. Experiences in verifying parallel simulation algorithms. In Second Workshop on Formal Methods in Software Practice (co-located with ISSTA'98), Clearwater Beach, FL, March 1998.
[32] D. M. Rao, R. Radhakrishnan, and P. A. Wilsey. FWNS: A framework for web-based network simulation. In 1999 International Conference on Web-Based Modelling & Simulation (WebSim'99). Society for Computer Simulation, January 1999.
[33] D. M. Rao, N. V. Thondugulam, R. Radhakrishnan, and P. A. Wilsey. Unsynchronized parallel discrete event simulation. In Proceedings of the 1998 Winter Simulation Conference, December 1998.
[34] D. M. Rao and P. A. Wilsey. Simulation of ultra-large communication networks. In Proceedings of the Seventh International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS'99), October 1999. (forthcoming).
[35] P. F. Reynolds, Jr., A. Natrajan, and S. Srinivasan. Consistency maintenance in multi-resolution simulations. ACM Transactions on Modeling and Computer Simulation (TOMACS), 7(3):368–392, July 1997.
[36] L. P. Soulé and T. Blank. Statistics for parallelism and abstraction level in digital simulation. In Proceedings of the 24th Design Automation Conference, pages 588–591. ACM/IEEE, 1987.
[37] L. P. Soulé and A. Gupta. An evaluation of the Chandy-Misra-Bryant algorithm for digital logic simulation. ACM Transactions on Modeling and Computer Simulation (TOMACS), 1(4):308–347, October 1991.
[38] N. V. Thondugulam, D. M. Rao, R. Radhakrishnan, and P. A. Wilsey. Relaxing causal constraints in PDES. In Proceedings of the 13th International Parallel Processing Symposium (IPPS/SPDP'99), pages 696–700, April 1999.
[39] S. Vinoski. CORBA: Integrating diverse applications within distributed heterogeneous environments. IEEE Communications Magazine, 35(2):46–55, February 1997.
[40] B. P. Zeigler. System-theoretic representation of simulation models. IIE Transactions, pages 19–34, March 1984.
[41] B. P. Zeigler. Object oriented modeling and discrete-event simulation. Advances in Computers, 33, 1991.
[42] B. P. Zeigler. A systems methodology for structuring families of models at multiple levels of resolution. In P. K. Davis and R. J. Hillestad, editors, Proceedings of the Conference on Variable-Resolution Modeling, CF-103-DARPA, pages 377–408, 1992.
[43] B. P. Zeigler, C. J. Luh, and T. G. Kim. Model base management for multifacetted systems. ACM Transactions on Modeling and Computer Simulation (TOMACS), 1(3), 1992.
