2013 IEEE 24th International Symposium on Personal, Indoor and Mobile Radio Communications: Fundamentals and PHY Track
Reliability of Network Simulators and Simulation Based Research

Shahbaz Khan, Bilal Aziz, Sundas Najeeb, Aziz Ahmed, Muhammad Usman and Sadiq Ullah
Telecommunication Engineering Department, University of Engineering & Technology, Peshawar, Pakistan
[email protected]
Abstract— Network simulators have gained such importance in the research community that approximately 11% of the research papers appearing in the proceedings of leading international conferences used a network simulator to test their claims. Owing to their several advantages, network simulators are expected to become even more prevalent in future. However, with this growing popularity, consistent revelations about their inaccuracies have started to seriously question the use of such simulators and of simulation based research. This paper attempts to answer three questions: (1) How do assumptions and simplifications in modeling affect the realism of simulations? (2) How can the credibility of simulation based research be improved? (3) What steps must be taken to make network simulators credible? The paper provides deliberated recommendations for improving the authenticity of simulation based experiments. The authors believe that the discussion in this paper will greatly facilitate the repeatability of simulation based research and will help researchers select the most suitable (credible) simulator.

Keywords— Network Simulators, Inaccuracy of Simulation, OPNET, NS-2, OMNeT++, QualNet, Review
I. INTRODUCTION

Network simulators offer significant advantages in terms of cost, resources and time-of-testing efficiency, and these along with many other factors advocate their widespread use. To get an idea of the extent of use of network simulators in research, we conducted a thorough survey of research papers accepted in the conference proceedings of IEEE ICC, VTC, INFOCOM, GLOBECOM, PIMRC, WiCOM, WCNC, and MILCOM between January 2006 and September 2010, available via the IEEE Xplore Digital Library. According to our estimate¹, approximately 11% of the total 36103 papers used a network simulator during testing. Out of these 11%, NS-2/NS-3 [1] was used by about 70%, OPNET [2] by 20% and QualNet [3] by 8 to 12%, while all other network simulators were used by less than 3% of researchers. This is shown in figure-1. Our search included the wide range of topics covered in the previously mentioned conferences.
¹ The authors used the Advanced Search option of the IEEE Xplore Digital Library with the keywords OPNET, NS-2, NS-3, OMNeT++, QualNet, GloMoSim, ShoX, and JSim. The reported percentage may increase if more simulator keywords are included; it nevertheless indicates the general trend.
In another study [4], the authors specifically focused on the use of network simulators in research related to mobile ad-hoc networks (MANETs) covered in the proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). The authors covered the ACM MobiHoc proceedings from 2000 to 2005 and estimated that 75% of the papers used a network simulator during the testing phase of their research. It is therefore fair to assume that a significant fraction of published research is simulation based; although the percentages stated above may vary somewhat if all available simulators are included, the values show the general trend. Excluding the industry, testbeds (whether for MANETs, IEEE 802.11 based mesh networks, heterogeneous networks, or large scale networks) are available only to a few universities and research centers, mostly in developed countries. In the absence of such facilities, all other researchers have to resort to network simulators. Even where a testbed is available, the majority of tests conducted in practice take significantly longer and require extra effort compared with conducting the same tests in a simulator. It is fair to assume that the percentage will rise with the passage of time because simulators save cost, time and effort. However, the ride has remained bumpy for network simulators due to consistent revelations about inaccuracies and inconsistencies in their results [5]–[9]. Questions about the credibility of network simulators and of simulation based research have always engrossed the minds of researchers. Surveying the literature on this topic reveals three major research directions:

1. A great deal of the published research discusses the accuracy of various simulators by comparing them on a common test scenario(s). Many such studies [5], [9], [10] only compared simulators amongst themselves. We believe that without a standard benchmark, mere comparison of various simulators cannot identify which of them is accurate. Like many other studies [11] and [12], we believe that in such comparisons carefully conducted real testbed measurements should be used to accurately identify the degree of variation in the measurements of the various simulators. A review of such studies is given in section-II.

2. A small proportion of the literature [4], [6] has discussed the fallout of poor credibility of network simulations, caused either by non-credible simulators (giving overly idealistic results) or by poor formulation of simulation based experiments that leads to illogical and/or unrepeatable results. The credibility and repeatability issues are discussed in section-III of our paper.

3. Some studies [9]–[13] have compared different simulators using various parameters such as memory usage
and simulation execution time while varying the number of stations in the network. The conclusions drawn from such comparisons merely tell us which simulator runs faster and/or consumes less memory during a simulation; they cannot help identify whether the simulators are credible and whether their results should be believed. In this paper, we do not review the studies in this third category.

The rest of the paper is organized as follows: section-IV discusses the importance of independent conformance tests that are deemed to improve the credibility of the simulated models in the simulators. Building on the discussion of section-III, we then suggest a publicly accessible simulation repository in section-V. Finally, we summarize the key messages of the analysis: the lessons learnt and recommendations for coping with the existing challenges are given in section-VI.
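To put the survey figures quoted in this section in perspective, the small calculation below converts the rounded percentages into approximate paper counts. The 11% share and the per-simulator split are the estimates reported above (with QualNet taken at the midpoint of the 8 to 12% range), so the absolute numbers are indicative only.

```python
total_papers = 36103
sim_share = 0.11                                 # approx. fraction that used a simulator
sim_papers = round(total_papers * sim_share)     # about 3,971 papers

# Approximate split of those papers across simulators (see figure-1);
# QualNet is taken at 10%, so the rounded shares do not sum exactly to 100%.
split = {"NS-2/NS-3": 0.70, "OPNET": 0.20, "QualNet": 0.10, "others": 0.03}

print(f"~{sim_papers} of {total_papers} surveyed papers relied on a simulator")
for tool, share in split.items():
    print(f"  {tool:9s} ~{round(sim_papers * share):4d} papers")
```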
Figure-1: Usage percentage of various network simulators

II. CREDIBILITY OF NETWORK SIMULATORS

We agree with the authors of [11] and [14] and believe that the deviations in the results of most simulation studies are caused by wrong configuration of, or lack of accurate modeling at, the physical and medium access control (MAC) layers. Simple models that exclude several real-world effects lack realism, and their results are exaggeratedly optimistic. The authors in [8] surveyed papers that appeared in the proceedings of the MobiHoc and MobiCom conferences from 1995 to 2003 and showed that a large number of papers used Flat Earth (a free-space model that includes no environmental effects) and other simple, unrealistic models. The authors in [14] showed that simulated AODV [15] gives a packet delivery ratio in excess of 80%, while the same AODV experiment gives a 42% delivery ratio in a testbed. The authors claim that the free-space and two-ray ground reflection models tend to estimate longer transmission ranges, resulting in shorter routes (in AODV and other protocols where route selection is based on hop count) and therefore a better packet delivery ratio. This optimism, or disconnect, at the physical layer travels up the TCP/IP protocol stack and affects the performance of every protocol. For instance, using the free-space model means that the received signal strength is not attenuated as it would be in reality when a mobile station moves away. MAC layer retransmissions are therefore fewer, throughput is higher, and the coverage range, especially for broadcast frames, is larger. This directly affects the simulation of MANET routing protocols: the performance shown is over-optimistic. Similarly, higher throughput at the MAC level essentially means higher throughput at the transport layer in a network-wide context.

A. Previous Comparisons

The authors of [11] focused on comparing the physical layer models of OPNET, QualNet and NS-2 with testbed measurements. They asserted that most of the deviations in results are caused by wrong configuration of the parameters of the physical layer models. However, their comparisons do not reflect the actual reasons for the performance deviation; for example, the authors list the parameters in a simple table and measure TCP throughput at variable distances. It is important to mention that TCP itself has a number of parameters that need proper tuning to get a uniform output even within the same simulator [16]. Similarly, the discussion and the eventual conclusion are based on throughput as a function of distance. However, throughput is really a function of the received power, which in turn is a function of the distance and of the transmit power of the transmitters; yet [11] does not mention any value of the transmit power. Likewise, the authors of [11] conclude that OPNET's physical layer model does not make it possible to carry out simulations close to reality, especially in indoor, close-distance scenarios. This contradicts our results shown in figure-2, later in this paper. NS-2, QualNet and OMNeT++ are compared with a simple testbed in [17]. The authors draw the conclusion that NS-2 gives results closer to reality than the other simulators, while OMNeT++ is not recommended because of a lack of adequate models. In another study [9], it is concluded that OMNeT++ has increased its number of simulated models and is suggested to be a better option than NS-2. However, neither of these studies used OPNET, which has the second highest usage among all simulators (refer to figure-1) and the highest among commercial simulators. The authors in [18] compared OPNET, QualNet and NS-2 with an Ethernet testbed and showed that testbed-equivalent results can be generated through careful configuration of all three simulators. Although this conclusion may be true for simulations of wired networks, the same conclusion cannot be drawn when wireless networks are considered. The performance of NS-2, in terms of packet delivery ratio, connectivity graphs and packet latencies, was compared with a testbed and an emulator in [19]. The authors concluded that careful
configuration of NS-2 gives errors of 0.3% and 10% for packet delivery ratio and network topology respectively. However, that claim is only partially accurate, as the findings in [20] assert that NS-2 and QualNet are not able to give a true distribution of the received signal strength, MAC layer interference, or stability of routes at the network layer. The authors in [20] drew their conclusions through a comparison of NS-2 and QualNet with a testbed; they did agree, however, that path-loss parameters can be configured in the simulators to reflect the real behavior. In the literature, we also come across studies that can mislead researchers about network simulators. For instance, [21] concludes that OMNeT++ performs better than NS-2 and OPNET. Such bold statements need an extensive and comprehensive analysis of all aspects of the simulators; they should not merely be based on simulation execution time or any other simple performance metric that has nothing to do with the authenticity of the simulation results. This statement also contradicts the comparative study presented in [22]. Similarly, in many previous studies [23], the performance of OPNET and NS-2 was compared targeting various aspects of the simulators. The conclusions of most older studies become irrelevant with the passage of time because of the regular release of newer versions of the simulation software. For instance, the authors of [23] used OPNET version 9, while at the time of writing OPNET has released version 16 with support for several more protocols and with many bugs removed. As the latest releases of network simulators incorporate changes and fixes of performance bugs, it is important that such comparative studies of the various aspects of network simulators be conducted routinely to keep the research community updated with the latest benchmarking results.
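As a rough illustration of the over-optimism discussed in this section, the sketch below contrasts received power under the idealized free-space (Friis) model with a log-distance model using a larger path-loss exponent. It is not taken from any simulator; the transmit power, frequency, receiver sensitivity and exponent are illustrative assumptions, chosen only to show how an idealized channel model inflates received power and apparent coverage range, which in turn biases MAC retransmissions, hop counts and routing results.

```python
import math

C = 3e8  # speed of light, m/s

def friis_rx_dbm(tx_dbm, dist_m, freq_hz):
    """Received power (dBm) under the idealized free-space (Friis) model."""
    wavelength = C / freq_hz
    path_loss_db = 20.0 * math.log10(4.0 * math.pi * dist_m / wavelength)
    return tx_dbm - path_loss_db

def log_distance_rx_dbm(tx_dbm, dist_m, freq_hz, exponent=3.3, d0=1.0):
    """Received power (dBm) under a log-distance model; an exponent above 2
    is a crude stand-in for indoor/obstructed propagation."""
    pl_d0 = 20.0 * math.log10(4.0 * math.pi * d0 / (C / freq_hz))
    return tx_dbm - (pl_d0 + 10.0 * exponent * math.log10(dist_m / d0))

TX_DBM = 15.0        # assumed transmit power
FREQ_HZ = 2.412e9    # 802.11b/g channel 1
SENSITIVITY = -85.0  # assumed receiver sensitivity, dBm

for d in (10, 40, 80, 150):
    fs = friis_rx_dbm(TX_DBM, d, FREQ_HZ)
    ld = log_distance_rx_dbm(TX_DBM, d, FREQ_HZ)
    fs_link = "up" if fs >= SENSITIVITY else "down"
    ld_link = "up" if ld >= SENSITIVITY else "down"
    print(f"{d:>4} m  free-space {fs:6.1f} dBm ({fs_link})   "
          f"log-distance {ld:6.1f} dBm ({ld_link})")
```

Under the free-space assumption the link still looks healthy at 150 m, while the higher-exponent model already loses the receiver at around 80 m; this is the same mechanism behind the inflated delivery ratios discussed for [14] above.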
B. Simulations cum Emulation vs. Real Traces

To tackle the lack of credibility of network simulators, the idea of hybrid simulations has been proposed [24], [5], [25], whereby it is recommended that the lowest layers (MAC and physical) be simulated while the rest of the interaction is carried out for real. Another direction for improving simulation results is to replace the artificial (simulated) user traffic and user movement (mobility profiles) with real network traces [26]. Such mobility traces are particularly useful in simulations related to vehicular networks [27], [28]. Likewise, it is argued that the real traffic characteristics generated by network applications, ranging from web browsing and streaming video to real-time audio and video, may not be exactly reproduced in simulators [29]. Different traffic characteristics mean a different demand on the underlying transport network [30]. Based on this argument, the authors in [31] have proposed the Berlin Open Wireless Lab (BOWL) network, where real users use the network and researchers are given provisions to test the network amidst its normal use.

A combination of simulators, emulators and testbeds would indeed bring the results somewhat closer to reality; however, it would also make the overall experimentation process more complex to reproduce and verify by another research group, as discussed in the following section.

III. CREDIBILITY VS REPEATABILITY OF SIMULATIONS

Essentially, the aim of questioning the credibility of network simulators is to make sure that, irrespective of the simulation model, every simulation configured with a similar set of environment variables conforms to similar results. Variation in the credibility of a simulator directly affects the end results and also affects the repeatability of a simulation experiment when it is conducted using a different package. For a scientific experiment to be considered factual, it should be repeatable. However, even if we exclude the credibility factors of the network simulators, most simulation based research cannot be independently repeated. The authors in [4] surveyed research papers that appeared between 2000 and 2005 in the proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc) and highlighted that poorly reported simulation experiments, owing to the lack of specification of input parameters, led to 87% of the papers that used a simulation package being not independently repeatable [6]. Lack of description of the experiment design and input parameters, and lack of rigorous testing, make it difficult for another research group to repeat the same experiment. Such problems of repeatability also cast doubts on the viability of the outcomes of scientific experiments even if the simulation package is highly credible. Many popular simulators provide features that, when properly tuned, can close the gap between the simulators and actual measurements. But the trend observed by the authors of [8] and [5] is that most users are either not aware of those settings or do not care about proper tuning, and therefore fall back on the 'default' settings, which in many cases render the final simulation results over-optimistic. This is evident from figure-2, which serves as a good example of how careless configuration can foil the results of a network simulator that has provisions to produce accurate results. The remaining part of this section discusses the inaccuracies of network simulations that arise purely from using the 'default' parameters.

Modeling the physical transmitter, the channel, channel-bound effects and the receiver is the most challenging part of modeling in a network simulator. Any assumptions or bypassed steps mean that the results will diverge from reality. It then becomes important to determine the provisions that each network simulator has for modeling various physical phenomena. Figure-2 shows a comparison of received signal strength
measurements (in dB) for indoor and outdoor environments at distances of 2, 10, 15, 20, 25, 30 and 40 meters in an IEEE 802.11 WLAN testbed. The indoor measurements were recorded in a 90 meter long corridor on the ground floor of our academic block; the outdoor measurements were recorded on a grass ground. The testbed consists of two laptops connected in an ad-hoc network. Both laptops run Fedora 10 with kernel version 2.6 and have Cisco Aironet 802.11 a/b/g wireless adapters; the adapters use an Atheros chipset and are therefore compatible with the Multiband Atheros Driver for WiFi (MadWifi). The same environment was then simulated in OPNET by varying the physical layer modeling parameters, for instance the transmission power and the propagation models. Figure-2 compares the various propagation models available in OPNET for the indoor and outdoor environments at the same distances. Even with the default parameter settings, OPNET's TIREM propagation model closely simulates the received signal strength: on average there is a 3 dB difference between the OPNET simulation and the real testbed measurements in the indoor experiment. Similarly, OPNET's implementations of the Walfisch-Ikegami and Longley-Rice models simulate the outdoor scenario with an average over-estimation of about 5 dB. We did not exploit the provisions available for configuring these models and used their default settings; the gap can be narrowed by thoroughly tuning the configurable parameters of the propagation models. Essentially, the message that comes out very clearly from these comparisons is that simulators may well have the ability to generate near-actual results, yet lack of due attention in simulation design may still give wrong results.

Figure-2: Comparison of received signal strength at varying distances: testbed measurements (indoor/outdoor) and various propagation models of OPNET.
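The "tuning" step mentioned above can be made systematic. The sketch below shows one simple way to estimate the path-loss exponent of a log-distance model from testbed RSS samples by a least-squares fit, before feeding the fitted value into a simulator's propagation model. The numeric samples, reference distance and the 0 dBm transmit-power assumption are placeholders for illustration, not the measurements of figure-2.

```python
import math

# Hypothetical (distance_m, rssi_dbm) pairs standing in for testbed readings;
# the actual values of figure-2 are not reproduced here.
samples = [(2, -38.0), (10, -55.0), (15, -61.0), (20, -65.0),
           (25, -68.0), (30, -71.0), (40, -75.0)]

tx_dbm = 0.0                     # assumed transmit power
d0 = 2.0                         # reference distance (first measurement point)
pl_d0 = tx_dbm - samples[0][1]   # path loss at d0, dB

# Fit the exponent n in PL(d) = PL(d0) + 10*n*log10(d/d0) by least squares
# through the origin (PL(d0) is pinned to the measured value).
num = den = 0.0
for d, rssi in samples[1:]:
    x = 10.0 * math.log10(d / d0)
    y = (tx_dbm - rssi) - pl_d0
    num += x * y
    den += x * x

n = num / den
print(f"fitted path-loss exponent: n = {n:.2f}")
```

The fitted exponent (about 2.7 for these placeholder numbers) would then be supplied to the simulator in place of the free-space default.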
IV. CONFORMANCE TESTS
All simulation software packages, whether open source, commercial or developed specifically for an experiment, claim to follow standard specifications. However, if despite these claims the simulators give differing results, it clearly implies that something has gone wrong in some, if not all, of the modules of the simulation packages. It is therefore very important that all network simulation tools pass through an independent test of conformance and validation. The tests should check and identify the degree of adherence of each model to the standard specification, provided either through a Request for Comments (RFC), an IEEE standard, or any other proprietary specification [7], [31]. A verification and validation approach called Independent Verification and Validation (IV&V) [32] involves a third party, which is neither part of the development team nor the specification provider, verifying and validating all the possible inputs and their corresponding outputs. We suggest that an independent body be constituted whose responsibility would be to formulate standard tests for each module of a network simulator. For instance, a set of tests for the medium access control module of a simulator may consist of tests that validate the MAC's behavior for receiver group formation, queuing delay, backoff delay, the list of possible receivers, retransmissions, etc. The same set of tests should then be applied to the corresponding MAC modules of all network simulators. The independent body should only issue a conformance certificate to a simulator whose validation results fall within a sufficient region of confidence.
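As an illustration of what one machine-checkable conformance test might look like, the sketch below encodes the binary-exponential backoff rule of the IEEE 802.11 DCF as a reference model and checks it with unit tests. The ReferenceBackoff class and the idea of driving each simulator through an adapter are illustrative assumptions, not an existing test suite or simulator API; a real conformance suite would run such tests against each simulator's MAC module and compare the traces.

```python
import unittest

CW_MIN, CW_MAX = 15, 1023  # 802.11 (OFDM PHY) contention window bounds (illustrative)

class ReferenceBackoff:
    """Reference behaviour a simulator's DCF backoff module would have to match."""
    def __init__(self):
        self.cw = CW_MIN

    def on_collision(self):
        # Contention window doubles after each failed transmission, up to CW_MAX.
        self.cw = min(2 * (self.cw + 1) - 1, CW_MAX)

    def on_success(self):
        # Window resets after a successful transmission.
        self.cw = CW_MIN

class DcfBackoffConformance(unittest.TestCase):
    """One test of a hypothetical per-module conformance suite."""

    def test_window_doubles_up_to_cw_max(self):
        model = ReferenceBackoff()
        expected = [31, 63, 127, 255, 511, 1023, 1023]
        observed = []
        for _ in expected:
            model.on_collision()
            observed.append(model.cw)
        self.assertEqual(observed, expected)

    def test_reset_after_success(self):
        model = ReferenceBackoff()
        model.on_collision()
        model.on_success()
        self.assertEqual(model.cw, CW_MIN)

if __name__ == "__main__":
    unittest.main()
```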
V. MAKING SIMULATION CONFIGURATION PUBLIC
During the past decade, ontology alignment has been one of the challenging research issues in the field of the Semantic Web, and a number of matching techniques are presented every year [33]. Researchers used to test their techniques and algorithms independently with their own datasets, but only trivial improvement was noticed during that period. This dilemma guided the research community towards well organized and coordinated efforts. The idea materialized as an international initiative named the Ontology Alignment Evaluation Initiative (OAEI) [33]. The main goals of this initiative are to compare the performance of proposed techniques and to assess alignment systems in terms of their strengths and weaknesses. For this purpose, the OAEI organizes yearly evaluation campaigns and publishes their results. In addition, it provides a common dataset that is considered the benchmark for evaluating alignment systems. Furthermore, in recent years the OAEI, in association with the Semantic Evaluation At Large Scale (SEALS) project [34], has provided a new evaluation modality to deliver more
automated results and give feedback to the participants: using the SEALS platform, participants upload their tools via a web portal, select the required evaluations, and get the evaluation results back. The main idea of the OAEI and SEALS project can be extended to network research that entails simulations, with the following considerations:
• A web portal, where users can select the required simulator and configure the basic simulation model and input parameters for the selected simulator.
• The simulation results can be stored in a repository and sent back to the user, or can remain accessible to other users, depending on the preference of the submitting user. The results may also be used for publication in a conference or on another platform.
• The simulators should be scalable enough to expose the maximum number of input options to the web portal user.
The whole process could be supervised by a single conference, a group of conferences, or even the IEEE. The simulation results available in the repository can be reused by other researchers if they are made public. When information about simulation based experiments is available in the public domain, it will greatly facilitate the repeatability of simulation based experiments, will help researchers ensure strict compliance, and will, to a great extent, address the existing reservations about simulation based research.
Figure-3: Logical Diagram of simulation through a web-portal
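To make the proposed repository concrete, the sketch below shows one possible shape for a machine-readable record that such a web portal could store and link to a paper's DOI. The schema, field names and values are illustrative assumptions rather than an existing format.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SimulationRecord:
    """Hypothetical repository entry linking a paper to its full simulation setup."""
    paper_doi: str
    simulator: str
    simulator_version: str
    rng_seeds: list
    parameters: dict = field(default_factory=dict)

record = SimulationRecord(
    paper_doi="10.1109/EXAMPLE.2013.0000",   # placeholder DOI
    simulator="ns-3",
    simulator_version="3.17",
    rng_seeds=[1, 2, 3, 4, 5],                # one seed per replication
    parameters={
        "propagation_model": "LogDistance",
        "path_loss_exponent": 3.0,
        "tx_power_dbm": 15.0,
        "mac": "802.11g",
        "routing": "AODV",
        "simulation_time_s": 300,
    },
)

# The repository would store and serve this record alongside the paper's DOI.
print(json.dumps(asdict(record), indent=2))
```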
VI. LESSONS LEARNT

1. Results obtained through various network simulators remain over-optimistic because various physical phenomena are left out when the models are developed. This over-optimism in the final results may not always cast doubt on the performance of every algorithm being tested; however, reports of variation in the results of different simulators make research findings implausible to varying degrees. Efforts are therefore always needed to explore the capabilities of network simulators so that the available parameters can be appropriately tuned to bring the simulation results closer to reality.

2. There is no unanimous agreement about the credibility of any particular network simulator – every study focused on a specific aspect has drawn a different conclusion. Therefore, any bold declaration of one simulator as better than the others is premature at this stage and needs further detailed investigation.

3. Poor formulation of simulation based experiments renders most research findings not independently repeatable, thus violating the basic definition of a 'scientific experiment'.

4. To stop the prevalent casual simulation based experiments, reviewers should ask for detailed proof of the simulation environment and the various parameters whenever a study reports achievements in a range higher than normal.

5. A combination of simulators, emulators and real traffic and mobility traces may improve the authenticity of the conclusions.

6. Free-space propagation models will never reflect anywhere near actual results. This seriously undermines the results of experiments aimed at the routing, transport or application layers.

7. Simulators may not always be the culprits in simulation based studies that lack credibility. In many cases, the simulator has the provisions to generate near-actual results, but due to lack of attention in the simulation design, or lack of knowledge, the end results are unrealistic.

8. The 'default' settings and parameter values are not suitable for all types of experiments. Care should be taken in selecting suitable values of the various parameters.

9. Nothing should be left undefined – rather, an effort should be made in every scientific document to list all parameters that may have a critical role in affecting the credibility of the simulation results. This will help the repeatability of simulation based experiments.

10. Putting a detailed list of simulation parameters in a research paper may not be possible. However, a repository of the simulation configuration parameters of each accepted paper should be maintained and linked with the paper's digital object identifier (DOI). This will make the results repeatable, and even reusable by other researchers for further enhancement, which can improve the quality and pace of network related research.

11. An independent body should be constituted to formulate benchmarking and validation tests for each module of the various network simulators. The independent body should only issue a conformance certificate to a simulator whose validation results fall within a sufficient region of confidence.

12. As the latest releases of network simulators incorporate changes and fixes of performance bugs, it is important that comparative studies of the various aspects of network simulators be conducted routinely to keep the research community updated with the latest benchmarking results.
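Several of the lessons above (3, 9 and 10 in particular) come down to reporting enough detail that someone else can rerun the experiment. The sketch below illustrates the bare minimum of that discipline: fixed, reported RNG seeds, multiple independent replications, and a confidence interval instead of a single run. The run_simulation function is a stand-in for whatever simulator invocation is actually used; the numbers it returns here are fabricated for the example.

```python
import random
import statistics

def run_simulation(seed: int) -> float:
    """Stand-in for a single simulator run; in practice this would invoke
    ns-2/ns-3/OPNET/etc. with the given RNG seed and return a metric such
    as the packet delivery ratio. Here it is faked with a random draw."""
    rng = random.Random(seed)
    return 0.80 + rng.uniform(-0.05, 0.05)   # fake PDR around 0.80

seeds = [101, 102, 103, 104, 105, 106, 107, 108, 109, 110]   # report these!
results = [run_simulation(s) for s in seeds]

mean = statistics.mean(results)
sdev = statistics.stdev(results)
# Rough 95% confidence half-width, assuming a near-normal sample mean.
half_width = 1.96 * sdev / len(results) ** 0.5

print(f"PDR = {mean:.3f} +/- {half_width:.3f} over {len(results)} replications")
```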
REFERENCES
[1] NS-3, available online: http://www.nsnam.org, accessed on 10-08-2012.
[2] OPNET, available online: http://www.opnet.com, accessed on 14-04-2012.
[3] QualNet, available online: www.scalable-networks.com, accessed on 15-08-2012.
[4]
[5]
[6] [7] [8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
Stuart Kurkowski, Tracy Camp, and Michael Colagrosso. MANET simulation studies: the incredibles. SIGMOBILE Mob. Comput. Commun. Rev., 9(4):50–61, October 2005. D. Cavin, Y. Sasson, and A. Schiper. On the accuracy of MANET simulators. In Proceedings of the Workshop on Principles of Mobile Computing (POMC'02), pages 38–43, Toulouse, France, October 30– 31 2002. ACM. T. R. Andel and A. Yasinac. On the credibility of MANET simulations. IEEE Computer, 39(7):48–54, July 2006. J. Heidemann, K. Mills, and S. Kumar. Expanding confidence in network simulations. Network, IEEE, 15(5):58–63, 2001. Calvin Newport, David Kotz, Yougu Yuan, Robert S. Gray, Jason Liu, and Chip Elliott. Experimental evaluation of wireless simulation assumptions. SIMULATION, 83(9):643–661, September 2007. Elias Weingärtner , Hendrik Vom Lehn , Klaus Wehrle, A performance comparison of recent network simulators, Proceedings of the 2009 IEEE international conference on Communications, p.1287-1291, June 14-18, 2009, Dresden, Germany. P. Pablo Garrido, Manuel P. Malumbres, Carlos T. Calafate, “ns-2 vs. OPNET: a comparative study of the IEEE 802.11e technology on MANET environments”, In Proc. of the 1st international conference on Simulation tools and techniques for communications, networks and systems & workshops (SIMUTools), France, 2008 A. Rachedi, S. Lohier, S. Cherrier, and I. Salhi, “Wireless network simulators relevance compared to a real test-bed in outdoor and indoor environments,” in IWCMC ’10: Proceedings of the 6th International Wireless Communications and Mobile Computing Conference. New York, NY, USA: ACM, 2010, pp. 346–350. K. Tan, D. Wu, A. Jack Chan, P. Mohapatra, Comparing simulation tools and experimental testbeds for wireless mesh networks, in: IEEE International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), IEEE, 2010. J. Lessmann, P. Janacik, L. Lachev, and D. Orfanus. Comparative Study of Wireless Network Simulators. The Seventh International Conference on Networking, pages 517-523, 2008. IEEE. Jason Liu , Yougu Yuan , David M. Nicol , Robert S. Gray , Calvin C. Newport , David Kotz , Luiz Felipe Perrone, Simulation validation using direct execution of wireless Ad-Hoc routing protocols, Proceedings of the eighteenth workshop on Parallel and distributed simulation, May 16-19, 2004, Kufstein, Austria C. E. Perkins, E. Belding-Royer, S.R. Das. Ad hoc on-demand distance vector (AODV) routing. http://www.ietf.org/rfc/rfc3561.txt , July 2003. RFC 3561. KM Reineck, Evaluation and Comparison of Network Simulation Tools, Institute for Open Communication Systems. Master Thesis.2008 Puneet Rathod, Srinath Perur and Raghuraman Rangarajan, “Bridging the gap between the reality and simulations: An Ethernet case Study,” IEEE 9th International Conference on Information Technology (ICIT’06), 2006. S. Ivanov, A. Herms, G. Lukas, Experimental validation of the ns-2 wireless model using simulation, emulation, and real network, in: Proceedings of the 4th Workshop on Mobile Ad-Hoc Networks (WMAN’07), VDE Verlag, ISBN 978-3-8007-2980-7, 2007, pp. 433–444. K. Tan, D. Wu, A. Jack Chan, P. Mohapatra, Comparing simulation tools and experimental testbeds for wireless mesh networks, in: IEEE International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), IEEE, 2010. Xiaodong Xian, Weiren Shi, and He Huang. "Comparison of OMNET++ and other simulator for WSN simulation". In Industrial Electronics and Applications, 2008. ICIEA 2008. 3rd IEEE Conference, pages pp. 1439–1443, June 2008. 
Ugo Maria Colesanti , Carlo Crociani , Andrea Vitaletti, On the accuracy of omnet++ in the wireless sensornetworks domain: simulation vs. testbed, Proceedings of the 4th ACM workshop on
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32] [33]
Performance evaluation of wireless ad hoc, sensor,and ubiquitous networks, October 22-22, 2007, Chania, Crete Island, Greece. G. Flores-Lucio, M. Paredes-Ferrare, E. Jammeh, M. Fleury, and M. Reed, “Opnet-modeler and ns-2: Comparing the accuracy of network simulators for packet-level analysis using a network testbed,” in Proc. Int. Conf. Simul., Model., Optim., vol. 2, 2003, pp. 700–707. Lacage M, Ferrari M, Hansen M, Turletti T, Dabbous W. NEPI: using independent simulators, emulators, and testbeds for easy experimentation. ACM SIGOPS Operating Systems Review, January 2010; 43(4):60–65. S. Guruprasad, R. Ricci, and J. Lepreau, \Integrated network experimentation using simulation and emulation," in Proc. Tridentcom, Feb. 2005, pp. 204-212. Hongyu Huang, Yanmin Zhu, Xu Li, et al. 'META: a Mobility Model of Metropolitan Taxis Extracted from GPS Traces. WCNC 2010. S. Khalfallah and B. Ducourthial, “Bridging the Gap between Simulation and Experimentation in Vehicular Networks”, in Proceedings of the 72nd IEEE Vehicular Technology Conference (VTC2009-Fall), Ottawa, Canada, September 2010. A. Grzybekכ, M.Seredynskiכ, G.Danoy and P.Bouvry, "Aspects and Trends in Realistic VANET Simulations", IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2012, San Francisco, CA, USA. L. Janowski and P. Owezarski, Assessing the accuracy of using aggregated traffic traces in network engineering, Telecommunication Systems, vol. 43, no. 3-4, 2010, pp. 223-236. C. Phillips, S. Singh, D. Sicker, D. Grunwald, Techniques for simulation of realistic infrastructure wireless network traƥc, in: 7th International Symposium on Modeling and Optimization in Mobile, AdHoc, and Wireless Networks (WiOpt), 2009. Cairano-Gilfedder, C. and Clegg, R.G., “A decade of Internet research - advances in models and practices,” BT Technology Journal 23, vol.4, pp. 115-128, Oct. 2005. Robert G. Sargent. Verification and validation of simulation models. In Proceedings of the 37th conference on Winter simulation, WSC ’05, pages 130–143. Winter Simulation Conference, 2005. Ontology Alignment Evaluation Initiative, Available online: http://oaei.ontologymatching.org Last Accessed on 14-04-2012. Semantic Evaluation at Large Scale, Available online: http://www.seals-project.eu/ Last Accessed on 14-04-2012.