The NCTUns 1.0 Network Simulator Protocol Module Writer Manual

Last update date: 11/11/2003 Authors: Shie-Yuan Wang, Chih-Hua Huang, and Chih-Che Lin

(Note: Because the NCTUns 1.0 network simulator is still undergoing constant improvements, the information contained in this manual may be out-of-date. It is provided only for reference purposes.)

Produced and maintained by the Network and System Laboratory, Department of Computer Science and Information Engineering, National Chiao Tung University, Taiwan 1

Table of Contents

Chapter 1 Overview .......... 7
 1.1 Introduction .......... 7
 1.2 Overview of the NCTUns 1.0 Network Simulator’s Relevant Components .......... 7
  1.2.1 The Role of the Simulation Engine .......... 7
  1.2.2 The Role of the Protocol Modules .......... 8
  1.2.3 The Role of the GUI Program .......... 8
 1.3 The Concept and Format of the Simulation Network Description File (.tcl) .......... 9
Chapter 2 Adding a New Module .......... 18
 2.1 Register a New Module with the Simulation Engine .......... 18
  2.1.1 Module Name Registration .......... 19
  2.1.2 Start-Time Parameter Registration .......... 19
  2.1.3 Run-Time Get/Set Variable Registration .......... 20
 2.2 Register a New Module with the GUI Node Editor .......... 21
  2.2.1 The Role and Format of the Module Description File (mdf.cfg) .......... 22
  2.2.2 The Try-and-Error Designing Module Dialog Layout Process .......... 23
  2.2.3 The Syntax and Semantics of the Layout Description Statements .......... 25
 2.3 A Complete Example .......... 33
  2.3.1 Adding a myFIFO Module .......... 34
Chapter 3 Architecture Overview of the NCTUns 1.0 Network Simulator .......... 39
 3.1 Overview .......... 39
 3.2 NCTUns 1.0 Network Simulator Architecture .......... 41
 3.3 Pseudo-Network Interface .......... 45
Chapter 4 Kernel Modifications .......... 48
 4.1 Introduction .......... 48
 4.2 IP Scheme and Routing Scheme .......... 53
  4.2.1 An IP Scheme and Routing Scheme Example .......... 54
  4.2.2 Kernel Modifications for IP and Routing Scheme .......... 63
 4.3 Translation .......... 66
  4.3.1 IP Address Translation .......... 67
  4.3.2 Port Number Translation .......... 72
 4.4 System Calls Added for Simulation Engine .......... 80
Chapter 5 Simulation Engine – S.E. .......... 85
 5.1 Architecture of the Simulation Engine .......... 85
 5.2 Event .......... 86
  5.2.1 Timer .......... 89
  5.2.2 Packet .......... 91
   5.2.2.1 Packet Buffer (pbuf) .......... 93
   5.2.2.2 PT_DATA pbuf .......... 96
   5.2.2.3 PT_SDATA pbuf .......... 98
   5.2.2.4 PT_INFO pbuf .......... 100
  5.2.3 Event Manager .......... 101
 5.3 Scheduler .......... 102
 5.4 Dispatcher .......... 105
 5.5 Module Manager .......... 107
 5.6 Script Interpreter .......... 110
 5.7 The NCTUns APIs .......... 114
Chapter 6 Module-Based Platform .......... 115
 6.1 Introduction .......... 115
 6.2 Module Framework .......... 117
  6.2.1 Module Identifier .......... 119
  6.2.2 Module Binder .......... 120
   6.2.2.1 Upcall .......... 122
   6.2.2.2 Priority .......... 123
  6.2.3 Important Member Functions .......... 124
 6.3 Module Communication (M.C) .......... 129
  6.3.1 Inter-Module Communication (I.M.C) .......... 130
  6.3.2 Communication with Other Components .......... 131
Chapter 7 A Simulation Example .......... 134
 7.1 Simulation Setup .......... 134
 7.2 Experimental Setup .......... 137
 7.3 Result Comparison .......... 138
Reference .......... 142
Appendix A – Timer APIs .......... 143
Appendix B – Packet APIs .......... 147
Appendix C – NCTUns APIs .......... 162


List of Figures

FIGURE 3.1: THE ARCHITECTURE OF THE NCTUNS. .......... 44
FIGURE 3.2: PACKET FLOW IN MODULE STREAM. .......... 45
FIGURE 3.3: BY USING TUNNEL INTERFACES, ONLY THE LINK NEEDS TO BE SIMULATED. THE COMPLICATED TCP/IP PROTOCOL STACK NEED NOT BE SIMULATED. INSTEAD, THE REAL-LIFE WORKING TCP/IP PROTOCOL STACK IS DIRECTLY USED IN THE SIMULATION. .......... 47

FIGURE 4.1.1: THE GENERAL ORGANIZATION OF NETWORKING CODE IN FREEBSD. .......... 50
FIGURE 4.1.2: PROTOCOL LAYER PROCESSING FOR INCOMING AND OUTGOING PACKETS. .......... 52
FIGURE 4.1.3: SOCKET I/O PROCESSING. .......... 53
FIGURE 4.2.1: A SIMPLE NETWORK TOPOLOGY. .......... 55
FIGURE 4.2.2: A NON-FORWARDING CASE (1). .......... 60
FIGURE 4.2.3: A NON-FORWARDING CASE (2). .......... 60
FIGURE 4.2.4: A NON-FORWARDING CASE (3). .......... 61
FIGURE 4.2.5: A FORWARDING CASE (1). .......... 62
FIGURE 4.2.6: A FORWARDING CASE (2). .......... 62
FIGURE 4.2.7: A FORWARDING CASE (3). .......... 63
FIGURE 4.3.1: THE PORT USAGE ON A SIMULATED NETWORK. .......... 73
FIGURE 4.3.2: THE PCB CHAINED IN MTABLE. .......... 75
FIGURE 4.3.3: PCBS IN A SIMPLE EXAMPLE. .......... 76
FIGURE 5.1: THE ARCHITECTURE OF THE SIMULATION ENGINE. .......... 86
FIGURE 5.2.1: DATA IS ENCAPSULATED AS A PACKET-OBJECT. .......... 92
FIGURE 5.2.2: THE PBUF TYPES. .......... 95
FIGURE 5.2.3: THE PT_DATA PBUF. .......... 97
FIGURE 5.2.4: THE PT_INFO PBUF. .......... 101
FIGURE 5.2.5: THE SIMULATION ENGINE COMMUNICATES WITH EXTERNAL COMPONENTS. .......... 107
FIGURE 5.2.6: AN EXAMPLE OF THE MODULE-REGISTER TABLE. .......... 108
FIGURE 5.2.7: MODULE MANAGEMENT IN A NODE MODULE. .......... 109
FIGURE 6.1: THE MODULE-BASED PLATFORM. .......... 115
FIGURE 6.1.2: A STREAM MODEL IN THE NCTUNS NETWORK SIMULATOR. .......... 117
FIGURE 6.2.1: THE MODULE FRAMEWORK. .......... 118
FIGURE 6.2.2: MODULE DATA STRUCTURE. .......... 119
FIGURE 6.2.3: THE MB ARCHITECTURE. .......... 121
FIGURE 6.2.4: THE USAGE OF PRIORITY FIELD. .......... 124
FIGURE 6.2.5: PACKET RECEPTION. .......... 125
FIGURE 6.2.6: PACKET TRANSMISSION. .......... 126


FIGURE 6.2.7: THE RELATIONSHIP BETWEEN PUT(), GET() AND SEND(). .......... 128
FIGURE 6.2.8: THE RELATIONSHIP BETWEEN PUT(), GET() AND RECV(). .......... 129
FIGURE 7.1.1: A SIMPLE WIRELESS NETWORK TOPOLOGY. .......... 135
FIGURE 7.1.2: THE SCRIPT FILE FOR THE SIMULATION CASE. .......... 137
FIGURE 7.2.1: THE SIMULATION RESULT OF ONLY ONE GREEDY TCP CONNECTION FROM NODE 1 TO NODE 2. .......... 140
FIGURE 7.2.2: THE EXPERIMENTAL RESULT OF ONLY ONE GREEDY TCP CONNECTION FROM NODE 1 TO NODE 2. .......... 140
FIGURE 7.2.3: THE SIMULATION RESULT OF TWO COMPETITIVE GREEDY TCP CONNECTIONS – TCP1: NODE 1 TO NODE 2, TCP2: NODE 2 TO NODE 1. .......... 141
FIGURE 7.2.4: THE EXPERIMENTAL RESULT OF TWO COMPETITIVE GREEDY TCP CONNECTIONS – TCP1: NODE 1 TO NODE 2, TCP2: NODE 2 TO NODE 1. .......... 141

FIGURE B.1: A PACKET-OBJECT DUPLICATION. .......... 149
FIGURE B.2: AN EXAMPLE OF PKT_MALLOC(). .......... 151


List of Tables

TABLE 1.3.1: THE GLOBAL VARIABLES USED AT INITIALIZATION-TIME .......... 12
TABLE 1.3.2: THE NODE TYPES THAT ARE CURRENTLY SUPPORTED BY NCTUNS 1.0 .......... 14
TABLE 2.2.1: THE MEANINGS OF THE FIELDS OF THE HEADERSECTION .......... 23
TABLE 2.2.3.1: THE MEANINGS OF THE VARIABLES USED IN THE HEADERSECTION .......... 25
TABLE 2.2.3.2: THE POSSIBLE VALUES FOR THE VARIABLES USED IN THE HEADERSECTION .......... 27
TABLE 2.2.3.3: THE BASIC ATTRIBUTES USED TO DESCRIBE AN OBJECT .......... 29
TABLE 4.2.1: TUNNEL CONFIGURATION FOR FIGURE 4.2.1 .......... 55
TABLE 4.2.2: RT FOR NODE 1 .......... 56
TABLE 4.2.3: RT FOR NODE 2 .......... 57
TABLE 4.2.4: RT FOR NODE 3 .......... 57
TABLE 4.2.5: RT FOR NODE 4 .......... 57
TABLE 4.2.6: RT FOR NODE 5 .......... 57
TABLE 4.2.7: RT FOR NODE 6 .......... 57
TABLE 5.2.1: THE VALUE OF THE P_TYPE .......... 95
TABLE 5.2.2: THE VALUE OF P_FLAGS .......... 97


Chapter 1 Overview

1.1 Introduction

This manual aims to help a researcher develop his or her own protocol modules on top of the NCTUns 1.0 network simulator. Chapter 1 and Chapter 2 document the detailed procedure for developing a protocol module, registering it with the simulation engine, and registering it with the GUI program. The Appendix documents the module API functions in detail. To enable a researcher to easily develop his/her own protocol modules, the source code of all supported protocol modules is released in the package. A researcher can easily learn how to develop a new module by studying the design and implementation of existing modules. To let a researcher fully understand the effect of performing a module API function, the internal design and implementation of the simulation engine and the modifications made to the kernel are also documented in detail, starting from Chapter 3. However, reading these chapters is not absolutely necessary; the most important chapters are still Chapter 1 and Chapter 2.

1.2 Overview of the NCTUns 1.0 Network Simulator’s Relevant Components

1.2.1 The Role of the Simulation Engine

The NCTUns 1.0 network simulator is an open-system network simulator.

Through a set of API functions provided by its simulation engine, a researcher can develop a new protocol module and add the module into the simulation engine. The simulation engine can be thought of as a small operating system kernel. It performs basic tasks such as event processing, timer management, packet manipulation, etc. Its API plays the same role as the system call interface provided by a UNIX operating system kernel. By executing API functions, a protocol module can request services from the simulation engine without knowing the details of the simulation engine's implementation.

1.2.2 The Role of the Protocol Modules

The NCTUns 1.0 network simulator provides a module-based platform. A module corresponds to a layer in a protocol stack. For example, an ARP module implements the ARP protocol while a FIFO module implements the FIFO packet scheduling and buffer management scheme. Modules can be linked together to form a protocol stack to be used by a network device. A researcher can insert a new module into an existing protocol stack, delete an existing module from a protocol stack, or replace an existing module in a protocol stack with one of his or her own. Through these operations, a researcher can control and change the behavior of a network device.

1.2.3 The Role of the GUI Program

The NCTUns 1.0 network simulator provides a highly-integrated GUI program for users to conveniently and efficiently conduct simulation studies. The GUI program contains four main components: the Topology Editor, the Node Editor, the Performance Monitor, and the Packet Animation Player. Among these four components, the Node Editor is the one relevant to module writers.

The Node Editor is a graphical tool by which a researcher can easily construct a network device’s protocol stack. With this tool, he or she can easily insert, remove, or replace a protocol module using simple mouse operations. From the graphical representation of a node’s protocol stack, the Node Editor generates a text description of the protocol stack and exports it to a simulation network description file (the .tcl file). At the beginning of a simulation, this description file is read by the simulation engine to construct the specified protocol stack for that node.

1.3 The Concept and Format of the Simulation Network Description File (.tcl)

The output of the GUI program is a set of files that together describe the simulation job. Among these files, the file with the “.tcl” suffix describes the relationship among the modules used inside a node (i.e., the node’s internal protocol stack) and the connectivity among the nodes in a network. The .tcl file is generated automatically by the GUI program when a GUI user finishes drawing his/her topology. Normally it is unnecessary for a user to understand the details of a .tcl file. However, an advanced user may want to understand what a .tcl file defines and describes. The rest of this section explains the format and meaning of a .tcl file.

A .tcl file consists of three parts. They are (1) global variable initialization, (2) node protocol stack specification, and (3) node connectivity specification.

i. Global Variable Initialization

The “Set” keyword is used to set the initial value of a global variable. For example, “Set TickToNanoSec = 100” means that a variable named “TickToNanoSec” is set to the value “100”. Right now, there are five global variables, which are shown in the following table. They need to be initialized in the .tcl file.

Variable name: SimSpeed
Possible values: AS_FAST_AS_POSSIBLE, AS_FAST_AS_REAL_CLOCK
Meaning: AS_FAST_AS_POSSIBLE indicates that the simulation engine should run as fast as possible. Normally, this is the preferred mode and is the default mode. AS_FAST_AS_REAL_CLOCK indicates that the simulation engine should run as fast as the real clock. Normally, this mode is chosen when the user wants to use the simulator as an emulator. Note that this mode is effective only when the simulation engine is able to run the simulation faster than the real clock. In such a case, the simulation engine can purposely slow down its simulation speed so that its speed matches the real clock. When the simulation engine runs slower than the real clock, there is really no way to ask the simulation engine to run as fast as the real clock.

Variable name: TickToNanoSec
Possible values: 100 or 1000
Meaning: The variable indicates the conversion ratio between a virtual clock tick and a nanosecond in virtual time. The default value is 100, which means that 1 tick represents 100 nanoseconds in a simulation. Changing this value may change the precision of simulation results and the speed of simulation. Normally, a user should not change this setting.

Variable name: WireLogFlag
Possible values: on, off
Meaning: The variable indicates whether the simulation engine should turn on or off its logging mechanism to log wired packet transfers.

Variable name: WirelessLogFlag
Possible values: on, off
Meaning: The variable indicates whether the simulation engine should turn on or off its logging mechanism to log wireless packet transfers.

Variable name: RandomNumberSeed
Possible values: 0, or any integer
Meaning: If the chosen random number seed for a simulation case is greater than 0 and fixed, the NCTUns 1.0 network simulator’s results will be repeatable. This means that no matter how many times a simulation case is run, its results will always be the same. A user can choose a particular random number seed for a simulation case in the GUI program. If the chosen number is 0, which is also the default value, the simulation engine will internally choose a random number for the random number seed each time the simulation case is run. This is useful for studying a network’s behavior under different stochastic conditions.

TABLE 1.3.1 THE GLOBAL VARIABLES USED AT INITIALIZATION-TIME
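Collecting the five variables from Table 1.3.1, the global-variable section of a .tcl file might therefore begin as follows. This is an illustrative sketch, not a verbatim GUI-generated file; the flag values shown are only examples.

```tcl
Set SimSpeed = AS_FAST_AS_POSSIBLE
Set TickToNanoSec = 100
Set WireLogFlag = on
Set WirelessLogFlag = on
Set RandomNumberSeed = 0
```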

ii. Node Protocol Stack Specification

The creation block describes the protocol stack of a node. It starts with the “Create” keyword and ends with the “EndCreate” keyword. The first line of the creation block specifies the node number, the type of the node, and the name of the node. The following is an example:

Create Node 1 as HOST with name = HOST1

This statement specifies a node whose node ID is 1, whose type is HOST, and whose name is HOST1. A node’s name is constructed by concatenating the node’s type with its node ID, which is unique in a simulation. To ensure uniqueness of node IDs, different nodes use different node IDs regardless of their types. For example, in a simulation it is impossible to have two nodes whose names are HOST1 and ROUTER1. On the other hand, having HOST1 and ROUTER2, or ROUTER1 and HOST2, in a simulation is possible. The node types that are currently supported are shown in the following table:

HOST: An end-user computer or a workstation that is on a fixed network.

MOBILE NODE: An IEEE 802.11(b) mobile station that operates in the ad-hoc mode.

MOBILENODE_INFRA: An IEEE 802.11(b) mobile station that operates in the infrastructure mode.

AP: An IEEE 802.11(b) access point.

SWITCH: A layer-2 switch.

HUB: A layer-1 hub.

ROUTER: A layer-3 router.

WAN: A layer-2 device that simulates the various properties of a Wide-Area Network. This device can purposely delay, drop, and/or reorder passing packets according to a specified distribution. Currently, uniform, exponential, and normal distributions are supported.

EXTHOST: An external end-user computer that is connected to a simulated fixed network. This node type is provided for emulation purposes. In an emulation, an external real machine (not the machine that is simulating the specified network) can interact with any node in a simulated network. For example, the external real machine can set up a TCP connection to a host in the simulated network and exchange data with it -- just as if the external machine were another host in the simulated network. To graphically specify which nodes in a simulated network an external machine connects to, each external machine is represented by an EXTHOST node in the GUI program. Packets initiated and sent out by the external machine will be received by the simulation machine and from then on can be viewed as if they were initiated and sent by the EXTHOST node. Of course, an external machine must be connected to the simulation machine via some network such as 100 Mbps Fast Ethernet. Also, some routing entries and IP address settings must be set on the external and simulation machines. For these details, please refer to the NCTUns 1.0 network simulator’s GUI user manual.

TABLE 1.3.2 THE NODE TYPES THAT ARE CURRENTLY SUPPORTED BY NCTUNS 1.0

A node can have one or multiple “ports.” The term “port” mentioned here refers to a hardware network interface, not a software port (e.g., a TCP or UDP port) that means a specific type of network service. For example, a host with only one network interface has only one “port” while an 8-port switch has 8 “ports.” Therefore, a creation block for a node is composed of one or several “port” blocks.

A port block starts with the “Define port portid” statement, where portid refers to the ID of this port, and ends with the “EndDefine” keyword. A port block is composed of a number of module blocks, each of which corresponds to a protocol module that has been registered with the simulation engine.

Inside a module block, there may be one or several statements to initialize the module’s parameters. The following is a simple example that initializes the parameters of a module named “Interface”:

Module Interface : Node1_Interface_1
Set Node1_Interface_1.ip = 1.0.1.1
Set Node1_Interface_1.netmask = 255.255.255.0

A module block starts with the “Module” keyword and ends with an empty line. The first line of this block indicates that the type of this module is “Interface,” and the name of this module instance is “Node1_Interface_1.” Conceptually, this type/name relationship corresponds to the class/object relationship in C++. In a module block, a user can specify the local variables (parameters) of a module object. In this example, an object named “Node1_Interface_1” contains two variables that need to be initialized -- “ip” and “netmask”. The next statement, “Set Node1_Interface_1.ip = 1.0.1.1”, initializes “ip” to “1.0.1.1”. Similarly, the third statement assigns “255.255.255.0” to “netmask”. If there is no parameter to be initialized, a module block has only one statement, which indicates its type and name and is directly followed by an empty line.

After defining all of the module blocks used inside a “port”, the connectivity relationship among them is then specified by “Bind” statements. For example, the following statements specify the connectivity relationship among the protocol modules used inside “port 1” of “Node1.” In this example, the “Interface” module links with the “ARP” module. The “ARP” module connects with the “FIFO” module, which in turn links with the “MAC802.3” module. The remaining statements chain the “TCPDUMP” module, the “Phy” module, and the “LINK” module in sequence. With these “Bind” statements, the module instances defined in the module blocks are now chained together to form a protocol stack for this port.

Bind Node1_Interface_1 Node1_ARP_1
Bind Node1_ARP_1 Node1_FIFO_1
Bind Node1_FIFO_1 Node1_MAC8023_1
Bind Node1_MAC8023_1 Node1_TCPDUMP_1
Bind Node1_TCPDUMP_1 Node1_Phy_1
Bind Node1_Phy_1 Node1_LINK_1


With these module block definitions and “Bind” statements, now the definition of the port block is finished. If a node has multiple ports, these ports will be defined in the same way. After all ports of a node have been defined, the definition of that node’s protocol stack is finished.
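Putting the pieces of this section together, an abridged creation block for a one-port host could look like the following sketch. It is illustrative only: a real GUI-generated file contains the full module chain shown earlier, and the “max_qlen” parameter name here is hypothetical.

```tcl
Create Node 1 as HOST with name = HOST1
Define port 1
Module Interface : Node1_Interface_1
Set Node1_Interface_1.ip = 1.0.1.1
Set Node1_Interface_1.netmask = 255.255.255.0

Module ARP : Node1_ARP_1

Module FIFO : Node1_FIFO_1
Set Node1_FIFO_1.max_qlen = 50

Bind Node1_Interface_1 Node1_ARP_1
Bind Node1_ARP_1 Node1_FIFO_1
EndDefine
EndCreate
```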

iii. Node Connectivity Specification

After the internal structures (the protocol stacks) of all nodes are defined, the connectivity relationship among these nodes (i.e., the topology) should be specified. This is done through “Connect” statements. The following is an example:

Connect WIRE 1.Node1_LINK_1 4.Node4_LINK_1
Connect WIRE 2.Node2_LINK_1 4.Node4_LINK_2
Connect WIRE 3.Node3_LINK_1 4.Node4_LINK_3

A “Connect” statement specifies two nodes and the type of the link that connects these two nodes. The format is “Connect LinkType nodeid1.link_module_instance_name nodeid2.link_module_instance_name,” where LinkType can be WIRE or WIRELESS.

For the WIRE link type, the first statement indicates that node1 and node4 connect to each other through a wired link. On node1, the wired link is attached to the “LINK_1” module instance, which is defined in port 1. On node4, the wired link is attached to the “LINK_1” module instance, which is defined in port 1. Similarly, the second and the third statements specify that there are wired links between node2 and node4, and between node3 and node4, respectively.

For the WIRELESS link type, all mobile nodes (each mobile node uses a wireless network interface) that use the same frequency channel will be collected together and put after the “Connect Wireless” statement.
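As a hypothetical illustration of this form (the node IDs and module instance names here are made up, and the exact layout of a GUI-generated statement may differ), three mobile nodes sharing one frequency channel might appear as:

```tcl
Connect WIRELESS 1.Node1_LINK_1 2.Node2_LINK_1 3.Node3_LINK_1
```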

After these “Connect” statements finally comes the “Run” statement, which indicates the desired total simulation time in virtual time. For instance, “Run 100” means that we would like the simulation case to simulate 100 seconds of the real network. Note that depending on the simulation machine’s speed and the simulation case’s complexity, the time required to finish this 100-second simulation case in real life may be smaller or larger than 100 seconds.


Chapter 2 Adding a New Module

Because learning from working examples is the best way to understand a new scheme, the source code of the simulation engine and all supported protocol modules is released in the package of the NCTUns 1.0 network simulator. A module writer thus can create his or her own module by simply copying an existing module’s source code and then modifying it to meet his or her needs. Based on our experience, this is the most effective way to create a new protocol module and make it work correctly with the simulation engine.

In this chapter, we will present the required procedures to add a new module. In Section 2.1, we present how to register a new module with the simulation engine. In Section 2.2, we present how to register it with the GUI Node Editor. In Section 2.3, we present a simple example in which we add a new module named “myFIFO” to the NCTUns 1.0 network simulator (i.e., both the simulation engine and the GUI Node Editor).

2.1 Register a New Module with the Simulation Engine

Three actions are required to add a new module into the simulation engine: module name registration, start-time parameter registration, and run-time get/set variable registration. They are discussed in the following sections.


2.1.1 Module Name Registration

All modules must be registered with the simulation engine before they can be used by the NCTUns 1.0 network simulator to generate simulation results. The NCTUns 1.0 simulation engine provides a macro REG_MODULE (name, type) for a module developer to register a module. A module developer only needs to add a REG_MODULE statement for his/her new module into the main() function in nctuns.cc (this file is in the package’s “src/nctuns/” directory) and rebuild the simulation engine. The REG_MODULE () macro has two parameters. The first one is the name of this module while the second one is the C++ class name of this module. For example,

REG_MODULE ("SIMPLE-PHY", phy);

The above statement registers a module whose class name is "phy" and whose module name is "SIMPLE-PHY." From now on, the "SIMPLE-PHY" module name can be used in a .tcl file to refer to this type of module (not to a particular instance of this type of module).

2.1.2 Start-Time Parameter Registration

A module may have several parameters whose values need to be set at start-time, that is, at the beginning of a simulation. For example, a FIFO module normally has a parameter to specify the maximum queue length for its FIFO queue. Such parameters need to be explicitly registered with the simulation engine so that their values can be specified in the simulation network description file (the .tcl file). This kind of registration can be accomplished by using the vBind() macro. The usage of the vBind() macro is explained below:

vBind(exported_name, &parameter_variable);

The first parameter (exported_name) is the exported name of the parameter variable while the second one is the address of the parameter variable. Note that a parameter variable's exported name can be different from its real name used in the C++ program. After performing this operation, the value of this start-time parameter can be specified in a .tcl file. Later on, when the simulation begins, the value specified in the .tcl file will be assigned to the corresponding parameter variable declared in the C++ program (i.e., the simulation engine).

2.1.3 Run-Time Get/Set Variable Registration

Sometimes it is very useful to observe the status of a variable, a node, or a protocol while a simulation is running. For example, a user may be interested in seeing how the current queue length of a FIFO queue varies during a simulation. To support this functionality, a module developer should register these variables with the simulation engine so that they can be exported and accessed at run-time. The simulation engine provides the macro EXPORT() to support this functionality. Its usage is explained below:

EXPORT(variable_name, permission_mode);

In the above macro, the first parameter is the name of the exported variable while the second one is a flag to indicate the access permission mode for this variable. Two access permission modes, READ_ONLY and WRITE_ONLY, are supported, and they can be combined.

2.2 Register a New Module with the GUI Node Editor

This section explains how to register a new module with the GUI node editor. This step is necessary because an NCTUns 1.0 user normally uses the GUI node editor to specify a node's protocol stack (the protocol modules used) and the parameter values used by these modules. Registering a new module with the simulation engine alone does not automatically let the GUI node editor know that a new module has been added to the simulation engine. Therefore, a module developer must also register a new module with the GUI node editor.

To do so, three operations are required. First, a module developer should add a block of information describing the new module into the simulator’s module description file (the mdf.cfg). Second, the developer should design a GUI layout for this module’s parameter dialog box. Third, if the developer wants to make this module parameter dialog box look “beautiful,” he/she may need to spend some time adjusting the “look” of the dialog box. The following sections will present the details about registering a new module with the GUI node editor.


2.2.1 The Role and Format of the Module Description File (mdf.cfg)

The module description file (mdf.cfg) describes all of the modules that have been registered with the simulation engine. When the GUI program starts, it reads this file once to learn what modules are already registered with the simulation engine. In contrast, the GUI node editor re-reads this file each time it is invoked in the GUI program to learn the newest module parameter dialog box layouts.

The module description file is composed of several module description blocks. A module description block starts with the “ModuleSection” keyword and ends with the “EndModuleSection” keyword. A module description block is divided into three parts – the HeaderSection, InitVariableSection, and ExportSection, respectively. Similarly, these sections start with the “HeaderSection”, “InitVariableSection”, and the “ExportSection” keyword, respectively, and end with the “EndHeaderSection”, “EndInitVariableSection”, and the “EndExportSection” keyword, respectively.
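Put together, a module description block has the following skeleton (indentation added here for readability; the "..." placeholders stand for the statements described in the rest of this section):

```
ModuleSection
  HeaderSection
    ...
  EndHeaderSection
  InitVariableSection
    ...
  EndInitVariableSection
  ExportSection
    ...
  EndExportSection
EndModuleSection
```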

In HeaderSection, a module developer can describe the property and characteristics of the new module, including the module name, which network type this module can support (either wired, wireless, or both), the module group name (note that the modules that belong to the same module group will be collected and displayed in the same category in the GUI node editor), the version number, the author, etc. Table 2.2.1 explains the meanings of these fields shown in HeaderSection.

Field Name      Meaning
ModuleName      The name of this module
ClassName       The name of the class corresponding to this module
NetType         The network type that the module can support (wired, wireless, or both)
GroupName       The name of the group that this module belongs to
AllowGroup      An option for future use. It indicates which module groups are allowed to connect to this module group.
PortsNum        The number of ports that this module can support (future use)
Version         The version of this module
Author          The author of this module
CreateDate      The creation date of this module
Introduction    A short description or comment about this module
Parameter       A start-time parameter variable. The GUI program reads the mdf.cfg file to know what parameters will be used at start-time. With this information, it will export these start-time parameters in the generated .tcl file.

TABLE 2.2.1 THE MEANINGS OF THE FIELDS OF THE HEADERSECTION

The possible set of values for each parameter variable will be explained in detail in Section 2.2.3.

2.2.2 The Trial-and-Error Module Dialog Layout Design Process

The NCTUns 1.0 network simulator provides a convenient environment to enable a user to easily perform many tasks. However, right now, due to the lack of manpower and research funding, there is still one task that cannot be performed easily: generating the GUI layout of a module's parameter dialog box.

Ideally, a module's parameter dialog box GUI layout should look "beautiful." That is, its parameter input fields should be concisely and neatly arranged in the dialog box. However, it is very difficult for the GUI program to automatically design a "beautiful" GUI layout for a module's parameter dialog box, because whether a parameter dialog box looks beautiful is highly subjective. As such, this job must be done by (and is left to) the user.

The NCTUns 1.0 network simulator adopts a flexible way to specify the layout of a dialog box. A module developer can specify the layout for variables that need to be initialized in the "InitVariableSection" section. He/she can also specify the layout for the variables that allow run-time accesses in the "ExportSection" section. These are done through the use of some XML-like layout description statements. (The detailed syntax and semantics of these layout description statements are discussed in the following section.) A user can edit these statements to manually design and adjust the GUI layout of a dialog box. Since the node editor re-reads the mdf.cfg file each time it is invoked, a user can use a trial-and-error process to adjust the dialog box's GUI layout until it looks "beautiful" enough for him/her. To be more precise, after the user makes some changes to the dialog box's GUI layout, he/she can re-invoke the node editor to see how the new GUI layout looks.

Apparently, this approach is not as intuitive as some commercial GUI builder programs, which can easily build a dialog box by dragging GUI objects around on it. In the future, if manpower and research funding permit, we certainly would like to provide our own GUI builder. Right now, since a module normally has only a few parameters to set, we have no problem using the trial-and-error process to make "beautiful" dialog boxes.

2.2.3 The Syntax and Semantics of the Layout Description Statements

i. HeaderSection

The first table collects the relevant variables and their meanings. The second table lists the possible value set for each variable.

Field Name      Meaning
ModuleName      The name of this module
ClassName       The name of the class corresponding to this module
NetType         The network type that the module can support
GroupName       The name of the group this module belongs to
AllowGroup      An option for future use, indicating which module groups can connect to this module group
PortsNum        The number of ports that this module can support
Version         The version of this module
Author          The author of this module
CreateDate      The creation date of the module
Introduction    A short description or comment about this module
Parameter       A start-time parameter variable. The GUI program reads the mdf.cfg file to know what parameters are used at start-time. With this information, it will export these start-time parameters in the generated .tcl file.

TABLE 2.2.3.1 THE MEANINGS OF THE VARIABLES USED IN THE HEADERSECTION


Field Name      Possible Values
ModuleName      A user-specified string
ClassName       A user-specified string
NetType         Wire, Wireless, or Wire/Wireless
GroupName       AP, ARP, PSBM, MROUTED, HUB, MAC80211, MAC8023, MNODE, SW, PHY, WPHY, INTERFACE, nctunsdep, or a user-specified name
AllowGroup      XXXXX (not used now)
PortsNum        SinglePort, MultiPort
Version         A user-specified string
Author          A user-specified string
CreateDate      A user-specified string. The recommended format is dd/mm/yy_seq#
Introduction    A user-specified comment/description string
Parameter       A parameter statement (see below)

The format of a parameter statement is:

Parameter Name Value Attribute

The possible attributes are "local", "global", "autogen", and "autogendonotsave". "local" means that this parameter is used only in this module; if its value is updated, it will not be copied to other modules of the same kind. "global" means that the value of this parameter, if updated, will be copied to the same parameter of the same kind of modules in the network. "autogen" means that the value of this parameter will be automatically generated by the GUI program; however, a user can still replace the auto-generated value with his/her desired value. "autogendonotsave" is similar to "autogen", except that its value cannot be changed by a user: no matter how a user replaces the auto-generated value with his/her desired one, the final value is still determined by a pre-defined formula.

TABLE 2.2.3.2 THE POSSIBLE VALUES FOR THE VARIABLES USED IN THE HEADERSECTION

Normally, a possible value of an autogendonotsave parameter is a formula consisting of the three predefined variables: $CASE$, $NID$, and $PID$.

$CASE$ represents the main file name of a simulation case's topology file. It will be replaced by the main file name when this variable is accessed. For example, if a simulation case's topology file is saved with the filename "test.tpl", $CASE$ will be replaced by "test". $NID$ represents the ID of the node to which this module is attached. Analogously, $PID$ represents the ID of the port to which this module is attached.

ii. InitVariableSection

Normally, a user should specify the caption and the size of the dialog box. The keyword "Caption" indicates the caption of the dialog box, and "FrameSize width height" indicates the size of the dialog box. For example,

Caption         "Parameters Setting"
FrameSize       340 80

These statements will generate a dialog box of 340x80 pixels with the caption "Parameters Setting". After specifying the caption and the size of the dialog box, a user can arrange the layout inside the dialog box. A dialog box would contain a number of GUI objects, such as a button, a cancel button, a textline, etc. Each GUI object corresponds to a description block in "InitVariableSection", which always starts with "Begin" and ends with "End". The following is an example:

Begin BUTTON    b_ok
Caption         "OK"
Scale           270 12 60 30
ActiveOn        MODE_EDIT
Enabled         TRUE
Action          ok
Comment         "OK Button"
End

The description blocks for different objects share several common and basic attributes. For example, the Caption and Scale attributes are used commonly. A "BUTTON"-like object is an example of an object consisting of only basic attributes. Let's first take the simple "BUTTON" object as an example. More special attributes will be discussed later.

For a “BUTTON” object, the keyword “BUTTON” follows the keyword “Begin” and it is followed by the object name “b_ok”. The following table lists its attributes:

Attribute name  Possible values             Comment
Caption         User specified              The caption of this object
Scale           User specified              The four numbers represent (x, y, width, height).
ActiveOn        MODE_EDIT,                  An option to specify in which mode this object is
                MODE_SIMULATION             active. MODE_EDIT stands for the period of time
                                            before a simulation is run. MODE_SIMULATION stands
                                            for the period of time during which a simulation is
                                            running.
Enabled         TRUE, FALSE                 If an object is not enabled, it will be displayed
                                            dimmed; that is, a user cannot operate this object.
Action          ok, cancel                  An attribute used by a button-like object, such as
                                            the OK button and cancel button, to indicate which
                                            action it will perform after a user presses it.
Comment         User specified              A comment for this object

TABLE 2.2.3.3 THE BASIC ATTRIBUTES USED TO DESCRIBE AN OBJECT

a. LABEL

"LABEL" is used to display some comment on a dialog box. The attributes of a LABEL object are the same as those of a "BUTTON" object.

b. RADIOBOX/CHECKBOX

In RADIOBOX/CHECKBOX, there are some new attributes. Let's take the following example to explain:


Begin RADIOBOX  arpMode
Caption         "ARP Mode"
Scale           10 15 260 135
ActiveOn        MODE_EDIT
Enabled         TRUE
Option          "Run ARP Protocol"
                Enable  flushInterval
                Enable  l_ums
                Disable ArpTableFileName
OptValue        "RunARP"
EndOption
Option          "Build ARP Table In Advance"
                Disable flushInterval
                Disable l_ums
                Enable  ArpTableFileName
OptValue        "KnowInAdvance"
VSpace          40
EndOption
Type            STRING
Comment         "ARP Mode"
End

It is a RADIOBOX block whose name is "arpMode". The first four statements describe the caption, the size, the mode in which this radiobox should be active, and whether it is enabled. Then two option blocks follow, each of which starts with the "Option" keyword and ends with the "EndOption" keyword. "Option" specifies the string for this option that should be shown in the dialog box. "OptValue" specifies the real value that will be assigned to "arpMode" if this option is selected. The "Enable" and "Disable" statements inside the "Option" block specify which objects should be enabled or disabled when a user selects this option. "VSpace" specifies the vertical height of the area used for this option.

c. TEXTLINE

TEXTLINE provides a text field for inputting or outputting data. A module developer can indicate the type of the data read from a textline. The data will be interpreted as a value of the type indicated by the "TYPE" keyword.

d. GROUP

GROUP is used to organize related objects together. It can contain any number of other objects that are related to an area. Like other objects, it has the four basic attributes "Caption", "Scale", "ActiveOn", and "Enabled" to define the caption, the size of its area, the active mode, and the enabled/disabled condition.


iii. ExportSection

"ExportSection" provides an area in a dialog box in which a user can get/set the current value of a variable at run-time. "Caption" and "FrameSize" are the two basic attributes for this section. If a module doesn't have any run-time accessible variables, "Caption" should be set to "" (a null string) and "FrameSize" should be set to 0 0. Besides the objects discussed above, there are two useful objects that are new in this section: the "ACCESSBUTTON" and the "INTERACTIONVIEW." The formats of these two objects are shown in the following examples:

Begin ACCESSBUTTON  ab_g2
Caption             "Get"
Scale               215 55 70 20
ActiveOn            MODE_SIMULATION
Enabled             TRUE
Action              GET
ActionObj           "max-queue-length"
Reference           t_mq
Comment             "get"
End

Begin INTERACTIONVIEW  iv_arp
Caption             "Arp Table"
Scale               10 20 200 30
ActiveOn            MODE_SIMULATION
Enabled             TRUE
Action              GET
ActionObj           "arp-table"
Fields              "MAC Address" "IP address"
Comment             "Arp Table"
End

An "ACCESSBUTTON" object is used to get or set the value of a single-value run-time variable. There are three new attributes for "ACCESSBUTTON": "Action", "ActionObj", and "Reference". The value of "Action" can be "GET" or "SET" to indicate which operation should be performed when a user presses this button. "ActionObj" indicates the name of the object that the GET/SET operation should operate on. Finally, "Reference" points to the name of the GUI object (e.g., a TEXTLINE object) in which the retrieved value should be displayed. For example, the current queue length of a FIFO module may be GET and displayed in a TEXTLINE GUI object named "curqlen".

An "INTERACTIONVIEW" object is used to display the content of a multi-column table at run-time. Normally, it is used to GET a switch table, an ARP table, or an AP's association table. Besides "Action" and "ActionObj", there is a new attribute called "Fields" to specify the names of the fields (columns) of the table. The "Fields" attribute is followed by several quoted strings, each of which represents the name of a field.

2.3 A Complete Example

In this section, we use a step-by-step example to show how to add a new module named "myFIFO" to the NCTUns 1.0 network simulator. We hope that a module developer can easily develop and add his/her own module after reading this quick-tour example.

2.3.1 Adding a myFIFO Module

To save time, we clone the source code of the existing “FIFO” module and give it a new name called “myFIFO”. We will illustrate how to integrate the “myFIFO” module (a new module) into the NCTUns 1.0 network simulator in the following:

1. Determine a class name for your module. In this case, the class name of the module is set to “myFIFO”. This class name must be different from all class names that are already used in the simulation engine C++ program. Then consider the group to which the new module should belong and store the source code in the appropriate directory. If it should belong to a new module group, its module group name can be a new name. The GUI node editor will create a new group category for it and place it in that category. In this case, the source code of this module can be placed in a new directory corresponding to this new group. In this example, myFIFO belongs to the existing “PSBM” (which means packet scheduling and buffer management) group. As such, we store the module source code in the directory “src/nctuns/module/ps/myFIFO”.

2. After we determine the class name, we can register our module with the simulation engine. First, we open the file “src/nctuns/nctuns.cc”. In main(), we add the following statement:

REG_MODULE("myFIFO", myFIFO);

3. Then we determine which variables should be exported at start-time. In the constructor of the class, we use the vBind() macro to register these start-time variables. In this example, we add the following lines:

/* bind variables */
vBind("qmax", &if_snd.ifq_maxlen);
vBind("log_qlen", &log_qlen_flag);
vBind("log_option", &log_option);
vBind("samplerate", &log_SampleRate);

The local variable "if_snd.ifq_maxlen" is thus exported as a start-time variable named "qmax", which will be used by the simulation engine. Similarly, "log_qlen_flag" is exported as "log_qlen", "log_option" is exported under the same name "log_option", and "log_SampleRate" is exported as "samplerate."

4. Determine which variables should be exported as run-time accessible variables. In this example, we export "queue-length" in the myFIFO::init() function:

EXPORT("queue-length", E_RONLY|E_WONLY);

We export "queue-length" with its access mode set to "readable and writeable".

5. Next, we should write a command handler to deal with run-time access events. By default, the simulation engine knows that a module’s “command()” method is its run-time-access event handler. Here is the relevant piece of source code in myFIFO::command().


/* The "Get" implementation of the exported variable */
if (!strcmp(argv[0], "Get") && (argc >= 2)) {
    if (!strcmp(argv[1], "queue-length")) {
        sprintf(buf, "queue-length: %d\n", if_snd.ifq_maxlen);
        EXPORT_ADDLINE(buf);
        return(1);
    }
}

/* The "Set" implementation of the exported variable */
if (!strcmp(argv[0], "Set") && (argc == 4)) {
    if (!strcmp(argv[1], "queue-length")) {
        if_snd.ifq_maxlen = atoi(argv[3]);
        return(1);
    }
}

The above piece of source code first decides whether the input command is a "Get" or a "Set" command, and then performs the appropriate processing.

6. Register with the GUI node editor. First, we should register the start-time variables. We should add a module description block for "myFIFO" into "mdf.cfg", which is stored in the directory "/usr/local/nctuns/etc/". Because the "myFIFO" module is cloned from the "FIFO" module, we simply copy and paste the description block of "FIFO" and alter the values of some fields in its "HeaderSection". The header section modified for "myFIFO" is shown below. The modified fields include the module name, the class name, and the version-control information.


HeaderSection
ModuleName      myFIFO
ClassName       MyFifo
NetType         Wire/Wireless
GroupName       PSBM
AllowGroup      XXXXX
PortsNum        MultiPort
Version         myFIFO_001
Author          NCTU_NSL
CreateDate      10/12/2002
Introduction    "This is a cloned FIFO module."
Parameter       max_qlen    50       local
Parameter       log_qlen    off      local
Parameter       log_option  FullLog  local
Parameter       samplerate  1        local
Parameter       logFileName $CASE$.fifo_N$NID$_P$PID$_qlen.log  autogendonotsave
EndHeaderSection

7. Rebuild (recompile and relink) the "nctuns" program (the simulation engine) and the "myFIFO" module. Then the "myFIFO" module will be registered with the simulation engine. To rebuild the simulation engine, you can re-run the install.sh installation script provided in the NCTUns 1.0 network simulator package. Because you just want to rebuild the simulation engine, you can skip the time-consuming kernel building and tunnel interface creation steps.

8. Execute the "nctunsclient" program (the GUI program) and then invoke the node editor. We will find that the new module "myFIFO" is now listed in the node editor's "PSBM" category. This means that the "myFIFO" module has been registered with the GUI node editor successfully.

As shown above, the procedure for adding a new module to the NCTUns 1.0 network simulator is quite simple and intuitive.


Chapter 3 Architecture Overview of the NCTUns 1.0 Network Simulator

3.1 Overview

The NCTUns 1.0 network simulator uses a simulation methodology proposed by S.Y. Wang and H.T. Kung at INFOCOM '99 [1]. Although this methodology is simple, it can be used to construct extensible and high-fidelity TCP/IP network simulators easily. The key facility used by this methodology is the tunnel network interface. By using tunnel network interfaces, a simulator constructed with this methodology can use the existing real-world UNIX kernel code to generate high-fidelity TCP/IP network simulation results.

A tunnel network interface is a pseudo device that functions like a real-world network interface. From the UNIX kernel's point of view, a tunnel network interface is no different from a real-world network interface. Therefore, when the kernel processes a packet received from or sent out to a tunnel network interface, it processes the packet in the same way as it processes a packet received from or sent out to a real-world network interface.

The first network simulator constructed using this methodology was the Harvard TCP/IP Network Simulator [2]. This simulator uses an existing real-world FreeBSD protocol stack to provide high-fidelity TCP/IP network simulation results. However, the only type of network that the Harvard network simulator can simulate is the point-to-point network; it cannot simulate broadcast networks (e.g., IEEE 802.3 and IEEE 802.11). Based on both the methodology proposed in [1] and the Harvard network simulator [2], we designed and implemented the NCTUns 1.0 network simulator to simulate many types of networks.

The NCTUns 1.0 network simulator has more functionalities and advantages than the Harvard network simulator. For example:

1. The NCTUns 1.0 network simulator has all of the advantages that the Harvard network simulator has.
2. The NCTUns 1.0 network simulator can support emulations.
3. The NCTUns 1.0 network simulator has a highly-integrated and user-friendly GUI environment.
4. The NCTUns 1.0 network simulator uses a distributed architecture to support remote and concurrent simulations.
5. The NCTUns 1.0 network simulator can simulate many types of networks and protocols (e.g., IEEE 802.3 Ethernet and IEEE 802.11(b) wireless LAN ad hoc and infrastructure modes are supported).
6. Existing application programs on a system can immediately run with the simulator without any modification. Although the Harvard network simulator claims this advantage as well, some application programs may still need to be modified to run with it.
7. The unnatural IP and routing schemes are hidden from application programs. Application programs can directly use the standard IP and port schemes on a simulated network.
8. The NCTUns 1.0 network simulator provides an open-system architecture, which we call the "module-based platform". The module-based platform allows network simulator users to easily develop their own network protocols and integrate them into the NCTUns 1.0 network simulator.

3.2 NCTUns 1.0 Network Simulator Architecture

Figure 3.1 shows the overall architecture of the NCTUns 1.0 network simulator. In this figure, the whole NCTUns 1.0 network simulator architecture is divided into four small components, which are discussed below:

1. Kernel-supporting component
The kernel-supporting component provides kernel services to the NCTUns 1.0 network simulator. This component includes the modifications to the TCP/IP protocol stack in the kernel and the system calls added or modified for the NCTUns 1.0 network simulator. Thanks to the kernel-supporting component, application programs are not aware that their packets are sent or received through a simulated network. This enables existing application programs, such as telnet and ftp, to immediately run with the simulator without any modification.

2. Simulation Engine (S.E.)
The Simulation Engine (S.E.) provides a platform for users to implement and integrate their own network protocols into the NCTUns 1.0 network simulator. By implementing protocols as modules and combining them in a controlled way, users can easily create a device node. As Figure 3.1 shows, node 1 and node 2 are composed of an IF module, ARP module, FIFO module, IEEE 802.3 module, PHY module, and Link module. From this figure, we see that the S.E. is further divided into several smaller components:
a. Module Manager (M.M.): The M.M. manages all modules that a user registered and used in the simulation. It keeps several data structures to maintain this information.
b. Dispatcher: The Dispatcher component is responsible for communicating with external components such as the Coordinator (C.O.), tcsh, and the GUI through the IPC or network communication mechanism.
c. Script Interpreter: The Script Interpreter component reads a .tcl file, which describes a simulated network, and parses it to construct the network. After parsing it, the Script Interpreter notifies the M.M. to dynamically create the corresponding modules and then combine them together to form nodes.
d. NCTUns APIs: The APIs are provided by the NCTUns 1.0 network simulator for modules to ask for the S.E.'s services. Hence any module should request the S.E.'s services through these APIs.
e. Event: The Event component is a data structure used to encapsulate messages that are exchanged between modules and the Scheduler component.
f. Scheduler: The Scheduler component is the core of the S.E. It uses a polling mechanism to check its event pool to see whether there is any expired event to process. In addition, it is also responsible for polling tunnel interfaces to see whether there is any packet queued in one of these interface queues waiting for transmission.

3. Modules
The NCTUns 1.0 network simulator provides a module-based platform for users to develop their own network protocols as modules. A module is a set of functions that follow the syntax and rules defined by the NCTUns 1.0 network simulator. By implementing network protocols as modules and combining them in a controlled way, users can register their modules with the simulator, simulate their protocols, and easily construct the network devices that they want. As Figure 3.2 shows, when a packet is read from the interface queue by the S.E., the packet is processed by all of the modules on a node to simulate packet transmission or reception.

4. IPC
The IPC component is responsible for message communication between the simulator, the C.O., and the GUI or tcsh. The C.O. directly communicates with the S.E. through the IPC mechanism.

Before a simulation starts, only the S.E., the kernel-supporting component, and the IPC component are present in the simulator; no module has been created yet. After the simulation starts, the M.M. dynamically creates nodes and a network topology according to the .tcl file that the Script Interpreter reads. The upper side of Figure 3.1 shows that the simulator dynamically creates a network topology with three network devices: node 1, node 2, and a switch. The switch here is a two-port layer-2 device that connects node 1 and node 2 together.

Figure 3.2 shows the flow path that a packet takes when it is exchanged between the sender and the receiver. Once a packet is generated by an application program, it is copied into the kernel for processing. After the protocol processing in the kernel, the packet is queued in the tunnel interface queue (each tunnel interface has its own tunnel interface queue in the kernel). The S.E. checks every tunnel interface queue to see whether there is any packet in one of the interface queues. If there is an outgoing packet, the S.E. reads it from the kernel and passes it to the corresponding node's module stream.

FIGURE 3.1: THE ARCHITECTURE OF THE NCTUNS.


FIGURE 3.2 PACKET FLOW IN MODULE STREAM.

3.3 Pseudo-Network Interface

A pseudo network interface is a pseudo device that doesn’t have a real physical network attached to it. From the kernel’s point of view, the functionalities of a pseudo network interface are no different from those of a real network interface. Packets can be sent to or received from a network through a pseudo network interface, just as if they were sent to or received from a real network interface such as an Ethernet interface.

If we want, we can create any type of pseudo network interface in the kernel. But on most UNIX machines, one kind of pseudo network interface has already been provided: the tunnel network interface. The tunnel network interface has functionalities that are like those of a real-world network interface. Hence it is convenient for us to use tunnel network interfaces to construct a simulated network in the NCTUns 1.0 network simulator. Only some modifications to the tunnel network interface device driver are needed.

Each tunnel network interface has a corresponding special device file in the /dev directory. This means that we can open, read, write, and close this special file, just as we manipulate a normal data file on a UNIX system. The kernel functions that implement these operations are in the /sys/net/if_tun.c file. In the following, we discuss the three main functions among them:

1. tun_output(): The tun_output() function is called from ip_output() in the kernel. After an outgoing packet is encapsulated as an IP packet, tun_output() is called. Every tunnel network interface has a queue associated with it, and an outgoing IP packet is always queued there by the tun_output() function.

2. tun_read(): The tun_read() kernel function copies packets queued in the interface queues in the kernel to a user-level application program. An application program can issue a read() system call to read packets from the kernel into user space; this read() system call finally calls the tun_read() kernel function. In the NCTUns network simulator, we use tun_read() to simulate packet transmission.

3. tun_write(): When an application program issues a write() system call to write a packet to a tunnel network interface, the tun_write() kernel function is called. This kernel function copies the packet from user level into the kernel and then calls ip_input(). In the NCTUns 1.0 network simulator, we use this kernel function to simulate packet reception.

Figure 3.3 shows an example demonstrating how a packet is sent and received through a tunnel network interface. In the figure, tun_output() is called from the IP protocol layer to send packets out. If an application program issues a read() system call to read a packet from a tunnel network interface, the read() system call indirectly calls the tun_read() kernel function to read a packet from the kernel into the application program. Conversely, if an application program issues a write() system call to write a packet to a tunnel network interface, the write() system call indirectly calls tun_write() to copy the packet from user level into the kernel. Through these simple procedures, the NCTUns 1.0 network simulator can simulate packet transmissions and receptions accurately.

FIGURE 3.3: BY USING TUNNEL INTERFACES, ONLY THE LINK NEEDS TO BE SIMULATED. THE COMPLICATED TCP/IP PROTOCOL STACK NEED NOT BE SIMULATED. INSTEAD, THE REAL-LIFE WORKING TCP/IP PROTOCOL STACK IS DIRECTLY USED IN THE SIMULATION.


Chapter 4 Kernel Modifications

4.1 Introduction

The NCTUns 1.0 network simulator is a system-dependent network simulator. It needs some special services from the operating system to work correctly. Normally, these services are not provided by a general operating system; hence some operating system modifications are needed to provide them. The NCTUns 1.0 network simulator is constructed and runs on an open source operating system platform – FreeBSD. Hence in this section we review the overall networking implementation of FreeBSD, and in the next section we discuss the kernel modifications made for the NCTUns network simulator.

In FreeBSD, the networking code is organized into three layers, as shown in Figure 4.1.1 – the socket layer, the protocol layer, and the interface layer. On the left side of this figure we note where the seven layers of the OSI reference model fit into the FreeBSD network organization. From this figure, we can also see that three queues are placed between these layers – the socket queue, the interface queue, and the IP input queue. The three layers are described below:

1. The socket layer is a protocol-independent interface that application processes use to access the lower, protocol-dependent layers. An application process uses the socket layer through system calls. For example, an application process may use the sendto() system call to send an outgoing packet, or the recvfrom() system call to receive a packet from a network.

2. The protocol layer contains all the protocol family implementations, such as TCP/IP, OSI, and the Unix domain protocols. In Figure 4.1.1 we only show the TCP/IP protocols, because the main kernel modification that we made to the protocol layer in FreeBSD is in the TCP/IP protocol stack.

3. The interface layer contains the device drivers that communicate with network devices. In a UNIX system, a device can be either a real or a pseudo device, but no matter which it is, this layer is always present for each of them. In the NCTUns network simulator, the tunnel network interface is used. Thanks to the use of pseudo devices, the NCTUns network simulator can deceive the UNIX kernel into thinking that a simulation packet is a real packet, so the kernel processes it in the same way as it processes a real packet. This characteristic increases the accuracy of simulation results.


FIGURE 4.1.1: THE GENERAL ORGANIZATION OF NETWORKING CODE IN FREEBSD.

We mentioned that three queues are present between the three layers in Figure 4.1.1. These queues have their own functions at their respective layers, which are described below:

1. The socket queue holds outgoing or incoming packets; every socket has its own queue.

2. The interface queue is a queue in the interface layer that holds outgoing packets; there is one queue per interface (Ethernet, loopback, SLIP, PPP, etc.). When the protocol layer passes a packet to the interface layer, the interface layer always queues the outgoing packet in its interface queue first. Then it checks whether the interface is in the active state; if it is not, the protocol layer tries to start it. Once the interface is active, it continuously passes the packet at the head of its interface queue to the network device for transmission until the queue is empty.

3. The IP input queue is one queue per protocol; hence there is only one IP input queue in the FreeBSD networking organization.

Figure 4.1.2 shows how the protocol layer processes incoming and outgoing packets, and the interaction between the socket layer and the protocol layer. On the upper side of this figure we can see that application processes manipulate packets through the socket system calls. Once a socket system call is made, the UNIX kernel switches to kernel mode (or privileged mode) and the internal socket layer function is called to service the application process's request. On the left side of the same figure we can see how an outgoing packet is processed by the outgoing-packet functions in the protocol layer; similarly, on the right side we can see how an incoming packet is handled by the incoming-packet functions. In the following, we use Figure 4.1.2 to discuss:

1. Application processes may receive and send packets through the socket system calls. For example, an application process may use the sendto() system call to send a packet to the network. When this system call is made by the application process, FreeBSD changes its mode to kernel mode and the appropriate internal socket layer function in the kernel is called to service the request. Corresponding to the sendto() system call, the internal socket layer function sosend() is called in the kernel.

2. Each outgoing packet in the TCP/IP protocol stack is processed in turn by tcp_output() (or udp_output()), ip_output(), and ether_output(). Continuing from the above example, when sosend() is called, it copies data from the application process into a socket buffer in the kernel. Then tcp_output() (or udp_output()) is called to encapsulate the outgoing packet as a TCP/UDP segment. After the transport layer processing completes, ip_output() is called in the network layer to encapsulate the packet as an IP datagram. Finally, the IP datagram is encapsulated as an ether-frame in ether_output() and queued in the interface queue. The NIC then sends the packet at the head of the interface queue to the network.

3. When a NIC receives a packet from the network, ether_input() is called to de-encapsulate the incoming ether-frame and then dispatch it to the appropriate upper function. If ip_input() of the network layer is called, ip_input() may continue by calling a transport layer function (tcp_input() or udp_input()) to pass the packet upward, or just reply with an ICMP message to the sender. When the processing in the transport layer completes, the incoming packet is queued in a socket buffer, waiting to be read by an application process through a system call (recvfrom(), recv(), read(), etc.).

FIGURE 4.1.2: PROTOCOL LAYER PROCESSING FOR INCOMING AND OUTGOING PACKETS.

Figure 4.1.3 depicts the socket I/O functions in the socket layer. Figure 4.1.3(a) shows the system calls write(), writev(), sendto(), and sendmsg(), which we refer to collectively as the write system calls. We can see from this figure that all the write system calls, directly or indirectly, call sosend(), which does the work of copying data from an application process into a socket buffer in the kernel and passing the data to the protocol layer. Similarly, Figure 4.1.3(b) shows the system calls read(), readv(), recvfrom(), and recvmsg(), which we refer to collectively as the read system calls. As with the write system calls, we can clearly see from Figure 4.1.3(b) that all the read system calls utilize a common function, in this case soreceive(), to receive packets. The soreceive() function transfers data from the socket's receive buffer to the buffer specified by an application process.

(a) All socket output is handled by sosend().

(b) All socket input is processed by soreceive().

FIGURE 4.1.3: SOCKET I/O PROCESSING.

4.2 IP Scheme and Routing Scheme

Definition: Routing Scheme

For a simulator constructed based on the methodology proposed in [1] and running on an operating system, the Routing Scheme is used to do the following things:

1. To deceive the kernel into routing a simulation packet to its corresponding pseudo-device based on the packet's virtual IP address. The routing entry formats are shown as follows:

   Host route: SrcNetID.SrcHostID.DstNetID.DstHostID
   Network route: SrcNetID.SrcHostID.0.0 or SrcNetID.SrcHostID.DstNetID.0

2. To determine if a node should drop, forward, or receive an incoming simulation packet.

Definition: IP Scheme

For the Routing Scheme to work correctly on a simulated network based on the methodology proposed in [1], a new IP format is introduced. This new IP format must be capable of encoding the IPv4 source and destination IP addresses in a 32-bit space. The format of this new IP address is shown below:

   1.0.NetID.HostID: for a network interface
   SrcNetID.SrcHostID.DstNetID.DstHostID: (S.S.D.D format) for a sending packet

For example, as Figure 4.2.1 shows, if node 1 wants to send a packet to node 3, the destination IP address should be specified as 1.3.1.1.

Because of these special IP and Routing schemes, a standard IPv4 address is divided into two parts in our simulator: one encodes the source IP address and the other the destination IP address. This division results in a limitation for our network simulator – the maximum number of usable simulated IP addresses. From these two schemes, we can see that this maximum is 2^16 = 65536, which is still an acceptable scale for simulating a complex network.

4.2.1 An IP Scheme and Routing Scheme Example

In this subsection, we illustrate how the IP scheme and Routing Scheme work in our network simulator with a simple network topology, shown in Figure 4.2.1. On the left side of this figure is a subnet whose network address is 1.0.1.0; on the other side is another subnet whose network address is 1.0.2.0. A router, here node 4, connects these two sub-networks together. The TUNX marked near a node is the pseudo network interface – the tunnel network interface – used for that node.

FIGURE 4.2.1: A SIMPLE NETWORK TOPOLOGY.

To simulate a network topology in our network simulator, we must set some configurations, which are listed below:

1. Configure each tunnel network interface the simulator uses with an IP address and an Ethernet MAC address. For any network interface, whether it is a real-life or a pseudo network interface, this information must be configured for it to work on a system. Table 4.2.1 lists the configuration of each tunnel network interface for Figure 4.2.1. In this table, we can see that each tunnel interface is configured with an IP address in the 1.0.X.X format and the netmask 255.255.255.0, i.e., a 24-bit network prefix.

TABLE 4.2.1: TUNNEL CONFIGURATION FOR FIGURE 4.2.1.


2. Configure the kernel routing table. To configure the kernel routing table, we use the Routing Scheme to aggregate each node's routing table into the single kernel routing table. Without the Routing Scheme, each node's routing entries might conflict with those of other nodes; with it, the routing entry conflict problem does not happen. Table 4.2.2 to Table 4.2.7 show each node's routing table for the network topology depicted in Figure 4.2.1. On the left-hand side of these tables is the routing table as seen from the node's point of view. In the middle is the same routing table as seen from the kernel's point of view. On the right side are the commands that should be typed in the FreeBSD command shell to add the routing entries to the kernel. For example, there are two routing entries in node 1's routing table – one for the default route, in this case (default 1.0.1.4 fxp0), and the other for the direct route (1.0.1/24 Link#1 fxp0), which means that the destination is directly reachable via an interface and requires no intermediary system to act as a gateway. In the kernel routing table, the default route is represented as (1.3/16 tun1 tun1) and the direct route as (1.3.1/24 tun1 tun1), as shown in the middle of Table 4.2.2. Note that the routing table of a router, in this case node 4's routing table shown in Table 4.2.5, deserves special attention: its last two routing entries are added specifically for the Routing Scheme to route a packet to its correct tunnel network interface.

TABLE 4.2.2: RT FOR NODE 1


TABLE 4.2.3: RT FOR NODE 2

TABLE 4.2.4: RT FOR NODE 3

TABLE 4.2.5: RT FOR NODE 4

TABLE 4.2.6: RT FOR NODE 5

TABLE 4.2.7: RT FOR NODE 6


After configuring the tunnel network interfaces and kernel routing entries, traffic generated by existing application programs can be routed by the kernel to the correct tunnel network interfaces according to the destination IP addresses in the packets. In the following, we illustrate how the kernel correctly routes a packet to its destination according to the Routing Scheme and IP scheme, as presented in Figure 4.2.2 to Figure 4.2.7. (Note: Figures 4.2.2, 4.2.3, 4.2.5, and 4.2.6 are alternative presentations of Figure 4.2.1.)

Figure 4.2.2 and Figure 4.2.3 demonstrate a non-forwarding case. In Figure 4.2.2, node 1 sends a request packet to node 3. We can clearly see that only node 3 picks up the incoming packet, because the packet is destined to it; the other nodes discard the incoming packet in their tunnel network interfaces. Figure 4.2.3 shows that only node 3 receives the incoming packet and sends a reply packet to the sender, which is node 1 in this case.

Figure 4.2.4 illustrates in detail how the Routing Scheme works for the communication between node 1 and node 3. At the top of this figure, an application program generates a packet destined to node 3. The kernel network protocol stack then encapsulates the outgoing packet as an ether-frame. When this ether-frame reaches the tunnel interface (here tun1, according to the 1.3.1.1 routing entry), the destination and source IP addresses of the ether-frame are 1.3.1.1 and 1.0.1.3, respectively, as shown in Figure 4.2.4. After the tunnel network interface's processing, the destination and source IP addresses of the outgoing packet are modified to 1.0.1.1 and 1.0.1.3, respectively. This has the advantage that if we attach a packet filter such as tcpdump to the tunnel network interface, the IP addresses of captured packets are in the standard IP format rather than the special IP scheme format.

When the other nodes on the same network as node 1 receive the packet sent by node 1, the first two bytes of both the destination and source IP addresses are modified to the last two bytes of the receiving tunnel network interface's IP address. In this case, the IP addresses are modified to 1.1.1.1 and 1.1.1.3 in node 3, respectively, because the IP address of the receiving tunnel network interface is 1.0.1.1. After this modification, the receiving tunnel network interface uses the 4-byte destination IP address of the received packet to generate two 4-byte IP addresses in the IP scheme format, as shown in the following:

   (1.1.1.1) => (1.0.1.1, 1.0.1.1): prepend 1.0 to both the first and the last two bytes of the destination IP address to generate two 4-byte IP addresses.

Finally, each receiving tunnel network interface checks whether these two IP addresses belong to the same node. If yes, the incoming packet has reached its destination node, and the first two bytes of the destination IP address are modified to 1.0 – in this case the destination becomes 1.0.1.1 in node 3. Thus, when the packet is passed to the IP layer, the IP protocol knows that the incoming packet has reached its destination. After the application program on node 3 receives and processes the incoming packet destined for it, it may send a reply packet to the sender, in this case node 1. The reply's destination IP address is the source IP address of the received packet. Through the reception procedures in the tunnel network interface mentioned above, the source IP address of the incoming packet is in the IP scheme format – SrcNetID.SrcHostID.DstNetID.DstHostID, in this case 1.1.1.3. Following the same procedures, the reply packet is received by the sender, here node 1.


FIGURE 4.2.2: A NON-FORWARDING CASE (1).

FIGURE 4.2.3: A NON-FORWARDING CASE (2).


FIGURE 4.2.4: A NON-FORWARDING CASE (3).

Figure 4.2.5 to Figure 4.2.7 demonstrate a forwarding example. As Figure 4.2.5 and Figure 4.2.6 show, node 1 sends a packet to node 5 and node 5 replies with a packet to node 1. The destination IP address used by the application program running on node 1 is 1.3.2.2. The sending and receiving procedures in the sending and receiving interfaces are the same as before; the only difference is the sending and receiving procedures on the forwarding node, node 4 in this case. The middle of Figure 4.2.7 depicts how the Routing Scheme is used to forward a packet. When a tunnel network interface, here tun4, receives an incoming packet, the source and destination IP addresses in the packet are modified to 1.4.1.3 and 1.4.2.2, respectively. The destination IP address 1.4.2.2 is used to generate two IP addresses – 1.0.1.4 and 1.0.2.2. In Figure 4.2.5 or Figure 4.2.6, we can see that these two IP addresses are not owned by the same node, which means that the packet received on node 4 has not reached its destination. Hence the destination IP address is not modified to 1.0.2.2. Next, the packet with the unmodified source and destination IP addresses, here 1.4.1.3 and 1.4.2.2, is passed to the IP protocol layer. Because the destination 1.4.2.2 does not belong to any node on the simulated network, the incoming packet on tun4 is forwarded to the network 1.0.2.0. From Table 4.2.5, the forwarded packet with destination IP address 1.4.2.2 is routed to the 1.0.2.0 network by matching the 1.4.2/24 routing entry, and the output interface selected is tun5. At this point, the packet has been forwarded from the 1.0.1.0 network to the 1.0.2.0 network. Following the same procedures, the receiving node can also reply with a packet back to the sender, here node 1.

FIGURE 4.2.5: A FORWARDING CASE (1).

FIGURE 4.2.6: A FORWARDING CASE (2).


FIGURE 4.2.7: A FORWARDING CASE (3).

4.2.2 Kernel Modifications for IP and Routing Scheme

For the IP and Routing schemes to work correctly, we must modify the kernel to provide their mechanisms. In the NCTUns network simulator, these two mechanisms have already been added to the FreeBSD 4.2 kernel. In these kernel modifications, two main functions, tunoutput() and tunwrite(), are modified in if_tun.c in the FreeBSD 4.2 kernel. The file if_tun.c is the device driver for the tunnel network interface, which is used to simulate packet receptions and transmissions:

1. The tunoutput() kernel function is used to simulate a packet transmission. Whenever a packet is sent by an application program and routed to its corresponding tunnel network interface by the kernel, tunoutput() is called to continue the outgoing processing in the interface layer. After this processing, the packet is queued in the interface queue, waiting for transmission. The Routing Scheme for outgoing packets is implemented in this kernel function. The pseudo code of the mechanism added for packet transmission is shown as follows:

   /* tunoutput - queue packets from higher level ready to put out. */
   int
   tunoutput(ifp, m0, dst, rt)
       struct ifnet *ifp;
       struct mbuf *m0;
       struct sockaddr *dst;
       struct rtentry *rt;
   {
       TUNDEBUG ("%s%d: tunoutput\n", ifp->if_name, ifp->if_unit);
       ::::::::::::
   #ifdef NCTUNS
       if (rt != 0) {
           /* Get the source and destination IP addresses of the outgoing
            * packet and re-form them in the 1.0.X.X IP format. */
           ::::::::::::
           /* If the destination address is a loopback address, set the
            * M_NCTUNS flag to avoid checksum inspection in the protocol
            * layer, and pass this packet to the protocol layer. */
           if ( Loopback ) {
               m0->m_flags |= M_NCTUNS;
               family_enqueue(AF_INET, m0);
               return(0);
           }
           ::::::::::::
           /* Encapsulate the outgoing IP packet as an ether-frame. */
           ::::::::::::
       }
   #endif
       ::::::::::::
       return(0);
   }

2. The tunwrite() kernel function is used to simulate packet receptions. Whenever a tunnel network interface receives a packet, this function is called to continue the reception processing. Normally, when this function processes an incoming packet, the only thing it does is pass the incoming packet to its upper layer – the protocol layer. To provide the Routing Scheme mechanism, the Routing Scheme for packet reception in a tunnel network interface is implemented in this function. The pseudo code of the mechanism added for packet reception is shown below:

   /* the cdevsw write interface - an atomic write is a packet - or else! */
   static int
   tunwrite(dev, uio, flag)
       dev_t dev;
       struct uio *uio;
       int flag;
   {
       TUNDEBUG("%s%d: tunwrite\n", ifp->if_name, ifp->if_unit);
       ::::::::::::
   #ifdef NCTUNS
       {
           top->m_flags |= M_NCTUNS;
           ::::::::::::
           /* Check to see if the destination address is equal to this
            * tunnel's address.  If yes, do nothing; otherwise, modify the
            * first two bytes of the destination IP address to the last two
            * bytes of this tunnel's IP address. */
           ip = mt_tidtoip((u_long)ifp->if_unit);
           p = (u_char *)&ip;
           ip_hdr = mtod(top, struct ip *);
           p1 = (u_char *)&(ip_hdr->ip_src.s_addr);
           p1[0] = p[2];
           p1[1] = p[3];
           p1 = (u_char *)&(ip_hdr->ip_dst.s_addr);
           p1[0] = 1;
           p1[1] = 0;
           if ((n1 = mt_iptonid(ip)) < 1)
               goto recv;
           if ((n2 = mt_iptonid(ip_hdr->ip_dst.s_addr)) < 1)
               goto recv;
           if (n1 != n2) {
               p1[0] = p[2];
               p1[1] = p[3];
           }
       }
   recv:
   #endif
       ::::::::::::
       return family_enqueue(family, top);
   }

In the above pieces of code that we added, the purpose of the top->m_flags |= M_NCTUNS statement is to inform the kernel that any packet with the M_NCTUNS flag set is a packet in a simulated network. Thus the upper protocols – IP and TCP/UDP – ignore the result of the checksum check. To skip the checksum inspection in the protocol layer, we added the following pieces of code to ip_input(), tcp_input(), and udp_input() in the FreeBSD source code, respectively:

   /* for ip_input() */
   if (sum) {
       if (!(m->m_flags & M_NCTUNS)) {
           ipstat.ips_badsum++;
           goto bad;
       }
   }

   /* for tcp_input() */
   if (th->th_sum) {
       if (!(m->m_flags & M_NCTUNS)) {
           tcpstat.tcps_rcvbadsum++;
           goto drop;
       }
   }

   /* for udp_input() */
   if (uh->uh_sum) {
       if (!(m->m_flags & M_NCTUNS)) {
           udpstat.udps_badsum++;
           m_freem(m);
           return;
       }
   }


4.3 Translation

Translation is a mechanism used to hide the unnatural IP format of the IP and Routing schemes, and the unnatural port numbers, from application programs. Our simulator uses a single kernel to simulate many nodes on a simulated network. Due to this design, the kernel's resources are shared by these nodes. Although our simulator works well without the translation mechanism, the schemes result in unnatural IP address and port number representations for application programs. Therefore another mechanism, translation, is introduced to hide these unnatural representations from application programs.

Before introducing this mechanism, a modification to a command shell, tcsh, should be illustrated first. This modification cooperates with the kernel modifications to achieve port and IP translation. We modified tcsh for the NCTUns network simulator so that it has the following properties:

1. The modified tcsh accepts an argument that indicates a node ID, as shown below:

   # tcsh n 3    ; start the modified tcsh
   node_3#       ; the modified tcsh is running, waiting for command input

2. For any process forked from such a modified tcsh, the tcsh stores a node ID k in the process's process table entry in the kernel to indicate that this process runs on virtual node k of a simulated network (in the above case, k is 3). Besides storing the node ID in the process table, the tcsh also marks a PCB (protocol control block) and a socket structure in the kernel with the node ID. These structures are discussed in later sections.

3. For any process forked from our modified tcsh, the tcsh registers the process with the simulator in the kernel. This enables the timer mechanism used by the process to be triggered based on the virtual time of the simulated network.

4.3.1 IP Address Translation

Because of the IP scheme and the Routing Scheme, IP addresses and routing entries are not used in a natural way. For example, in the example of Section 4.2.1, if we want to send a packet from node 1 to node 3, we must use the IP address 1.3.1.1, which implicitly specifies that the packet is generated by node 1 and destined to node 3. But in a real network, an application program would use the IP address 1.0.1.1 to send a packet to node 3. As another example, an application program may use system calls (e.g., recvfrom()) to get its peer host's IP address; the IP address that the application program gets would be in the SrcNetID.SrcHostID.DstNetID.DstHostID format. This is very strange for users and application programs. Hence we introduced the IP translation mechanism to translate our unnatural IP addresses into natural ones.

The IP translation mechanism translates IP addresses from the 1.0.X.X format to the S.S.D.D format and vice versa. For transmitted packets, the destination IP address must be in the S.S.D.D format to fit in with our Routing Scheme. But in order to hide the unnatural S.S.D.D format from application programs, the 1.0.X.X IP format should be what application programs use to specify their destinations. For this purpose, the kernel functions related to outgoing packet processing must be modified to support the IP translation from the 1.0.X.X to the S.S.D.D format. Conversely, for packet receptions, an application program can use system calls such as recvfrom() to get the source IP address of an incoming packet. As mentioned before, without special processing this IP address is in the unnatural format. Because the kernel only knows the S.S.D.D IP format on a simulated network, if we want to hide the S.S.D.D format from application programs, the IP translation from the S.S.D.D format to the 1.0.X.X format must be introduced into the kernel functions related to incoming packet processing.

For the IP translation, we modified some important kernel data structures, which are listed below:

   /* process table in sys/sys/proc.h */
   struct proc {
       TAILQ_ENTRY(proc) p_procq;      /* run/sleep queue. */
       LIST_ENTRY(proc) p_list;        /* List of all processes. */
       ::::::::::::
       char p_pad3[2];                 /* padding for alignment */
   #ifdef NCTUNS
   /* We use the p_pad3 field to store the node ID for NCTUns. */
   #define p_node p_pad3
   #endif /* NCTUNS */
       ::::::::::::
   };

   /* Internet protocol control block in sys/netinet/in_pcb.h */
   struct inpcb {
   #ifdef NCTUNS
       LIST_ENTRY(inpcb) inp_NCTUNSlist;
       u_int32_t nodeID;
       u_short inp_vport;
   #endif /* NCTUNS */
       LIST_ENTRY(inpcb) inp_hash;     /* hash list */
       ::::::::::::
   };

   /* socket structure in sys/sys/socketvar.h */
   struct socket {
   #ifdef NCTUNS
       u_int32_t nodeID;
   #endif /* NCTUNS */
       ::::::::::::
   };

In the above, we added a nodeID field to the kernel data structures. The p_pad3 field already exists in the process table; we only renamed it to hold our node ID information. We can do this because this field is not otherwise used by the kernel. Another reason for reusing it is code compatibility: if we added a new field to the process table, all existing application programs on the FreeBSD system would need to be recompiled to fit in with the new kernel; otherwise, they could not run on the new kernel.

The purpose of adding the nodeID to these data structures is to let the kernel know which process belongs to which virtual node on a simulated network. The kernel thus has enough information to modify a packet's IP address. In the following, we list the kernel code that we modified for the IP translation:

1. The connect() system call is used by an application program to establish a connection to a destination host. For connection-oriented protocols such as TCP, connect() establishes a connection to the specified foreign address; for connectionless protocols such as UDP or ICMP, connect() records the foreign address for use in sending future datagrams. One of the arguments of this kernel function is the destination socket address, which contains the destination IP address and port number. Once connect() is called by an application program, this information is stored in the inp_faddr and inp_fport fields of the PCB (protocol control block), respectively. Here, we have a chance to translate the destination IP address specified in the argument from the 1.0.X.X format to the S.S.D.D format. As the following piece of code shows, the 'name' parameter points to the destination socket address; we use this information to get the destination IP address and translate it to the S.S.D.D format.


int
connect(p, uap)
	struct proc *p;
	register struct connect_args /* {
		int	s;
		caddr_t	name;
		int	namelen;
	} */ *uap;
{
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
	error = getsockaddr(&sa, uap->name, uap->namelen);
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#ifdef NCTUNS
	nid = (u_short *)p->p_node;
	if (*nid > 0) {
		struct sockaddr_in *sin;
		u_long srca;
		u_char *ptr, *ptr1;

		sin = (struct sockaddr_in *)sa;
		ptr = (u_char *)&(sin->sin_addr.s_addr);
		/* If the dst IP is in 1.0.X.X format, this is a simulated
		 * packet, and the IP translation from the 1.0.X.X to the
		 * S.S.D.D format should be applied.
		 */
		if (ptr[0] == 1 && ptr[1] == 0) {
			/* For a node with more than one attached interface,
			 * we just randomly pick one of its interfaces as the
			 * outgoing interface.
			 */
			srca = mt_randnidtoip(*nid);
			if (srca != 0) {
				ptr1 = (u_char *)&srca;
				ptr[0] = ptr1[2];
				ptr[1] = ptr1[3];
			} else
				printf("connect(): mt_randnidtoip error!\n");
		}
	}
#endif
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
}
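In isolation, the byte-level rewrite performed above can be sketched as a small user-level model. This is only an illustration of the addressing idea, not the kernel code; the hypothetical `srcaddr` parameter stands in for the source node's 1.0.S.S address that the kernel obtains from mt_randnidtoip():

```cpp
#include <cstdint>

// Outgoing direction (connect()/sosend()): rewrite a simulated
// destination 1.0.D.D into S.S.D.D, where S.S are the last two bytes
// of the source node's own 1.0.S.S address.
inline bool translate_out(uint8_t addr[4], const uint8_t srcaddr[4])
{
    if (addr[0] != 1 || addr[1] != 0)   // not a simulated (1.0.X.X) address
        return false;
    addr[0] = srcaddr[2];               // overwrite the leading "1.0"
    addr[1] = srcaddr[3];               // with the source bytes "S.S"
    return true;
}

// Incoming direction (soreceive()): restore S.S.D.D back to 1.0.D.D
// before the address is handed to the application.
inline void translate_in(uint8_t addr[4])
{
    addr[0] = 1;
    addr[1] = 0;
}
```

The rewrite touches only the first two address bytes, so the D.D part identifying the destination interface is preserved in both directions.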

2. Figure 4.1.3(a) shows that all of the socket write system calls are finally handled by the sosend() kernel function. Therefore, sosend() is a good place to do the IP translation. Note that the 'addr' parameter of sosend() is a pointer to the destination IP address specified by an application program. We derive a node ID from this information and then translate the destination IP from the 1.0.X.X to the S.S.D.D format. The modified code of sosend() is listed below:


int
sosend(so, addr, uio, top, control, flags, p)
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
{
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#ifdef NCTUNS
	nid = (u_short *)p->p_node;
	if (addr && (*nid > 0)) {
		dsa = (struct sockaddr_in *)addr;
		dsta = (u_char *)&(dsa->sin_addr.s_addr);
		/* If a packet's IP address is in 1.0.X.X format, this is a
		 * simulated packet. We should translate the destination IP
		 * from the 1.0.X.X to the S.S.D.D format.
		 */
		if ((dsta[0] == 1) && (dsta[1] == 0)) {
			/* For a node with more than one attached interface,
			 * we just randomly pick one of its interfaces as the
			 * outgoing interface.
			 */
			srcip = mt_randnidtoip(*nid);
			if (srcip != 0) {
				srca = (u_char *)&srcip;
				dsta[0] = srca[2];
				dsta[1] = srca[3];
			}
		}
	}
#endif
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
}

3. Figure 4.1.3(b) shows that all of the socket read system calls are handled by the soreceive() kernel function. Application programs always receive packets through the socket read system calls. Hence, soreceive() is the best place to modify to support the IP translation from the S.S.D.D back to the 1.0.X.X format. Note that here the translation direction is reversed: from S.S.D.D to 1.0.X.X. The following piece of code lists the modification of the soreceive() kernel function:

int
soreceive(so, psa, uio, mp0, controlp, flagsp)
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
{
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#ifdef NCTUNS
	if (psa && (so->nodeID > 0)) {
		sin = (struct sockaddr_in *)(*psa);
		p = (u_char *)&(sin->sin_addr.s_addr);
		p[0] = 1;
		p[1] = 0;
	}
#endif
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
}


4.3.2 Port Number Translation

The port translation is a mechanism to translate a real port number into a virtual port number. A virtual port number is a port number from a node's point of view, while a real port number is the port number actually used in the kernel. The difference between them is that each node has its own virtual port range from 1 to 65535, whereas for real port numbers all nodes share a single port range from 1 to 65535 in the kernel. In our simulator, every virtual node uses the kernel port resource (the port range from 1 to 65535, of which only 5000 ~ 65535 is legally usable by user processes on FreeBSD) to establish its own connections. This leads to a problem: the same application programs running on different nodes cannot bind to the same port number in a simulated network. For example, suppose that there are two web servers running on node 1 and node 2, respectively. As we know, the well-known port number of a web server is 80. On a simulated network, both web servers on node 1 and node 2 will try to bind to port 80, as shown in Figure 4.3.1(a). But only one web server can bind to port 80 successfully; the other has to bind to another port number. This is a problem because ports can no longer be used in the natural way.

In order to overcome this problem, a port-mapping mechanism is introduced. Port-mapping is a mechanism used to map a port number to an unused and legal port number. As we can see in Figure 4.3.1(b), we can use this port-mapping mechanism to map port number 80 on node 1 and node 2 to port number A and port number B, respectively, where A > 5000 and B > 5000 but A is not equal to B. Although the port-mapping mechanism solves the problem stated above, it also results in unnatural port number usage for the same application programs running on different nodes. In the above example, the port numbers of the two web servers running on node 1 and node 2 are mapped to port A and port B, respectively. This means the web server no longer has a well-known port number, in this case 80. Also, users must now always keep in mind that web server 1 on node 1 uses port number A while web server 2 on node 2 uses port number B.

FIGURE 4.3.1: THE PORT USAGE ON A SIMULATED NETWORK. (Panels: (a) normal, (b) port-mapping, (c) port-mapping + port translation.)

So far, the simulator can still work correctly even though only the port-mapping mechanism is introduced. But it is unnatural for users to have to keep in mind that the port number of the web server on node 1 is A and that of the web server on node 2 is B. Hence, a port translation mechanism is introduced to solve this problem. As Figure 4.3.1(c) shows, port number 80 on node 1 and node 2 is mapped to A and B, respectively. But the port-mapping results, here ports A and B, are not seen by the application programs. Instead of seeing ports A and B, the web servers on node 1 and node 2 both think that the port number they use is 80. Users can thus still use a well-known port (e.g., 80) for a specific application program (e.g., a web server).
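The combination of port-mapping and port translation can be modelled as two lookup tables, one per direction. The sketch below is only an illustration of the bookkeeping idea; the actual kernel uses the mtable and the mt_VtoRport()/mt_RtoVport() helpers described later, and the class and field names here are hypothetical:

```cpp
#include <cstdint>
#include <map>
#include <utility>

// Sketch: per-(node, virtual-port) -> real-port map, plus a global
// real-port -> (node, virtual-port) reverse map.
struct PortMapper {
    std::map<std::pair<uint32_t, uint16_t>, uint16_t> v2r; // (nid, vport) -> rport
    std::map<uint16_t, std::pair<uint32_t, uint16_t>> r2v; // rport -> (nid, vport)
    uint16_t lastport = 5000;                              // next real port to try

    // Bind virtual port 'vport' of node 'nid' to a fresh real port.
    uint16_t bind(uint32_t nid, uint16_t vport) {
        while (r2v.count(lastport)) ++lastport;            // skip ports in use
        uint16_t rport = lastport++;
        v2r[{nid, vport}] = rport;
        r2v[rport] = {nid, vport};
        return rport;
    }
    uint16_t vtoR(uint32_t nid, uint16_t vport) const {
        auto it = v2r.find({nid, vport});
        return it == v2r.end() ? 0 : it->second;
    }
    uint16_t rtoV(uint16_t rport) const {
        auto it = r2v.find(rport);
        return it == r2v.end() ? 0 : it->second.second;
    }
};
```

With this model, two nodes can each "bind" virtual port 80 and receive distinct real ports, which is exactly the situation depicted in Figure 4.3.1(c).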

Normally, the port-mapping and port-translation mechanisms are not supported by a general kernel. Hence, in the NCTUns network simulator, the kernel modifications for these two mechanisms had to be made by us. In the following, we list the kernel data structure modifications for port-mapping and port-translation:

/* Internet protocol control block in sys/netinet/in_pcb.h */
struct inpcb {
#ifdef NCTUNS
	LIST_ENTRY(inpcb)	inp_NCTUNSlist;
	u_int32_t		nodeID;
	u_short			inp_vport;
#endif /* NCTUNS */
	LIST_ENTRY(inpcb)	inp_hash;	/* hash list */
	u_short			inp_fport;	/* foreign port */
	u_short			inp_lport;	/* local port */
	:::::::::::::::::::::::::::::::::::::::::::::::
#define inp_faddr inp_dependfaddr.inp46_foreign.ia46_addr4
#define inp_laddr inp_dependladdr.inp46_local.ia46_addr4
	:::::::::::::::::::::::::::::::::::::::::::::::
};

The above data structure is the Internet Protocol Control Block (PCB). The PCB is used at the protocol layer to hold the various pieces of information required for each TCP or UDP socket. The main information it contains is about a connection: foreign and local IP addresses, foreign and local port numbers, etc. As the data structure shows, three fields are added to this structure. Among them, inp_NCTUNSlist is used to chain the PCB onto the mtable. The mtable is a data structure added to the kernel for the NCTUns network simulator to record all the information about our simulator; it will be discussed in Section 5.4. Figure 4.3.2 shows that once a PCB is generated by the kernel for a new TCP/UDP connection, the PCB is chained onto the mtable according to the node ID in the PCB. Once the connection is closed, the PCB is released by the kernel; this event should also be reflected in the mtable.


FIGURE 4.3.2: THE PCB CHAINED IN MTABLE.

As we said before, the PCB contains information about a layer-4 connection such as TCP or UDP. Whenever the kernel receives a packet at a layer-4 protocol, it uses the source and destination IP addresses, source and destination port numbers, and the protocol information in the packet to find the PCB corresponding to that connection. Besides the five tuples in a PCB mentioned before, we also add one field, inp_vport, to the PCB for the NCTUns network simulator. In the PCB, inp_lport records the real local port number, while inp_vport, which we added, records the virtual local port number corresponding to inp_lport. As for inp_fport, it normally records the foreign host's port number. But in our simulator, we use it to record the foreign host's virtual local port number; this field no longer keeps the foreign host's real port number. Instead, the virtual port number associated with the foreign host's real port number is recorded in it. Figure 4.3.3 shows an example to illustrate this. Node 1 is a server with the IP address 1.0.1.2 and node 2 is a client with the IP address 1.0.1.1. The server internally binds to real port 5001, which corresponds to virtual port 6000. The client uses an ephemeral real port 5002 (whose virtual port is 5001) to connect to the server. The commands that the client and server may use to establish a TCP connection are shown as follows:

Node_1# rtcp -p 6000		; bind to virtual port 6000
Node_2# stcp -p 6000 1.0.1.2	; connect to virtual port 6000

After the three-way handshake, a new PCB is created in the kernel for node 1 to accept the request from node 2. The PCBs and their contents in node 1 and node 2 are shown in Figure 4.3.3. In the PCB of node 2, the virtual local port of node 2 is the value held in the foreign port field of node 1. Also, in node 1, the inp_fport is the value of the inp_vport of node 2.
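Writing the example's PCB contents out explicitly makes the cross-referencing visible. The sketch below uses the values from Figure 4.3.3 with a simplified view of the port-related PCB fields (field layout illustrative, not the real inpcb):

```cpp
#include <cstdint>

// Simplified view of the port-related PCB fields for the example.
struct PcbView {
    uint16_t inp_lport;  // real local port
    uint16_t inp_vport;  // virtual local port
    uint16_t inp_fport;  // peer's *virtual* port (NCTUns semantics)
};

// Node 1 (server): real port 5001, virtual port 6000.
// Node 2 (client): real port 5002, virtual port 5001.
const PcbView server_pcb = {5001, 6000, 5001};
const PcbView client_pcb = {5002, 5001, 6000};
```

The invariant stated in the text is that each side's inp_fport holds the peer's inp_vport, never the peer's real port.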

FIGURE 4.3.3: PCBS IN A SIMPLE EXAMPLE.

To implement the port-mapping mechanism, the NCTUns network simulator maintains a global variable in the kernel, NCTUNS_lastport, to record the next usable real port number within the real port range of 5000 ~ 65535. If an ephemeral port is needed on a simulated network, the kernel will try to bind to the port number specified in NCTUNS_lastport. If binding to that port number fails, NCTUNS_lastport is increased and the binding process is tried again. This real-port selection is implemented as a function named mt_getunuserport() in the kernel. Similarly, for each node's information kept in the kernel (the node_info structure, which will be discussed in the next section), we maintain a virtual port index, lastport, to record the node's next usable virtual port. This usable virtual port range is 1 ~ 65535 for each virtual node. When a real port is bound successfully, a virtual port on the node needs to be selected by the kernel to associate with that real port number. This virtual port selection for each node is implemented in the mt_getunusevport() function in the kernel. In the following, we discuss the main kernel modifications for the port mapping and translation:

1. The in_pcbbind() is a kernel function used to bind a local address and a port number to a socket. It is called from three system calls:

   i.   from binding a TCP or UDP socket – the bind() system call
   ii.  from the connect() system call
   iii. from the listen() system call

   The in_pcbbind() is an important kernel function used to do the port-mapping. In the following pseudo-code, we associate a real port number with a virtual port selected by mt_getunusevport(). This real port number may be selected by mt_getunuserport() or may already be set in the parameter 'nam'. Note that mt_bind() is a function we added for port-mapping to chain the PCB onto the mtable.

/* in_pcbbind() in sys/netinet/in_pcb.c */
int
in_pcbbind(inp, nam, p)
	::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
{
	::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
	if (TAILQ_EMPTY(&in_ifaddrhead)) /* XXX broken! */
		return (EADDRNOTAVAIL);
	::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#ifdef NCTUNS
	if (inp belongs to a virtual connection) {
		vport = mt_getunusevport();
		if (no local port is specified)
			rport = mt_getunuserport();
		else
			rport = local port specified in nam;
		mt_bind(inp, rport, vport);
	}
#endif
	::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
}
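The real-port selection loop behind mt_getunuserport() (try NCTUNS_lastport, advance it on failure, retry) can be sketched as follows. This is a user-level model under stated assumptions: the `in_use` predicate is a hypothetical stand-in for the kernel's bind-failure check, and the exhaustion handling is illustrative:

```cpp
#include <cstdint>
#include <functional>

// Sketch of the mt_getunuserport() idea: keep trying NCTUNS_lastport,
// advancing it on each failed attempt, until a port in the legal real
// range 5000..65535 would bind successfully.
static uint32_t NCTUNS_lastport = 5000;

uint16_t get_unused_rport(const std::function<bool(uint16_t)> &in_use)
{
    while (NCTUNS_lastport <= 65535) {
        uint16_t p = (uint16_t)NCTUNS_lastport++;
        if (!in_use(p))
            return p;   // this bind attempt would succeed
    }
    return 0;           // real port space exhausted
}
```

Because NCTUNS_lastport only moves forward, consecutive ephemeral allocations never re-test ports already handed out.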

2. When a connection is closed, in_pcbremlists() is called to remove the PCB from various lists in the kernel. This should also be reflected in the mtable. Hence, when this kernel function is called, mt_unbind() should be called to remove the PCB from our mtable.

/* in_pcbremlists() in sys/netinet/in_pcb.c */
void
in_pcbremlists(inp)
	struct inpcb *inp;
{
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#ifdef NCTUNS
	if (inp->nodeID > 0)
		mt_unbind(inp);
#endif
}

Next, we will introduce the kernel modifications in the transport layer protocols. Before proceeding, we have to explain the port-mapping recovery rule first:

Definition: Port-Mapping Recovery

If a packet transmitted on a simulated network is received at a transport layer protocol such as TCP or UDP, then the source and destination port numbers in the transport layer header of the received packet should be modified according to the following rules:

	for src port: R -> V  (rport -> vport)
	for dst port: V -> R  (vport -> rport)

where R is a real port number and V is a virtual port number.
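The recovery rule above can be stated as a small function over a packet's transport-header ports. This is a sketch only: mt_RtoVport() and mt_VtoRport() are modelled as plain lookup tables, and the header is kept in host byte order for brevity:

```cpp
#include <cstdint>
#include <map>

struct PortHdr { uint16_t sport, dport; };  // host byte order for brevity

// Apply the port-mapping recovery rule to a received packet's header:
//   src port: R -> V,   dst port: V -> R.
// The maps are hypothetical stand-ins for mt_RtoVport()/mt_VtoRport().
void recover_ports(PortHdr &h,
                   const std::map<uint16_t, uint16_t> &r2v,
                   const std::map<uint16_t, uint16_t> &v2r)
{
    auto s = r2v.find(h.sport);
    if (s != r2v.end()) h.sport = s->second;  // real -> virtual
    auto d = v2r.find(h.dport);
    if (d != v2r.end()) h.dport = d->second;  // virtual -> real
}
```

Using the values from the rtcp/stcp example (client real 5002 ↔ virtual 5001, server virtual 6000 ↔ real 5001), a SYN arriving at the server has its source port recovered to the client's virtual port and its destination port recovered to the server's real port, so the kernel PCB lookup succeeds.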

3. When an incoming packet reaches the TCP protocol, tcp_input() is called. We implemented the port-mapping recovery in this kernel function. This is because tcp_input() uses the source and destination port numbers and the source and destination IP addresses contained in an incoming packet to look up the corresponding PCB in the kernel PCB list. If no port-mapping recovery were introduced, tcp_input() would not find a corresponding PCB, which would mean that no such TCP connection exists, and the incoming packet would be dropped in the kernel. The port-mapping recovery is shown in the following piece of code.

void
tcp_input(m, off0, proto)
	register struct mbuf *m;
	int off0, proto;
{
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#ifdef NCTUNS
	odport = ntohs(th->th_dport);
	nodeIDd = mt_iptonid(ip->ip_dst.s_addr);
	if (nodeIDd > 0) {
		/* Do port-mapping recovery here */
		rport = mt_VtoRport(nodeIDd, ntohs(th->th_dport));
		if (rport > 0)
			th->th_dport = rport;
		vport = mt_RtoVport(th->th_sport);
		if (vport > 0)
			th->th_sport = htons(vport);
	}
#endif
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
	if (so->so_options & SO_ACCEPTCONN) {
		....................................
#ifdef NCTUNS
		/* If the socket is in the listen state and a new connection
		 * request is received, mt_bind() should be called to reflect
		 * that a new PCB has been created.
		 */
		if (inp->nodeID > 0) {
			inp->inp_vport = odport;
			mt_bind(inp);
		} else
			inp->inp_vport = 0;
#endif
	}
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
}

4. Like the TCP protocol, when an incoming packet is received by the UDP protocol – in udp_input() – the port-mapping recovery is also needed. In the following, we show the pseudo-code of the port-mapping recovery in the udp_input() kernel function:


void
udp_input(m, off, proto)
	register struct mbuf *m;
	int off, proto;
{
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#ifdef NCTUNS
	odport = ntohs(uh->uh_dport);
	nodeIDd = mt_iptonid1(ip->ip_dst.s_addr, m->m_pkthdr.rcvif);
	if (nodeIDd > 0) {
		/* Do port-mapping recovery here */
		rport = mt_VtoRport(nodeIDd, ntohs(uh->uh_dport));
		if (rport > 0)
			uh->uh_dport = rport;
		vport = mt_RtoVport(uh->uh_sport);
		if (vport > 0)
			uh->uh_sport = htons(vport);
	}
#endif
	:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
}

4.4 System Calls added for Simulation Engine

To implement the NCTUns network simulator, some special services from the kernel are necessary. Some of these special services are implemented as system calls, while others are not. The kernel supports that are not implemented as system calls have been discussed in the above section. In this section, we introduce the system calls that we added for the NCTUns network simulator and discuss their functions. The system calls we added are shown below:

int sys_NCTUNS_getSystemVirtualClock(p, uap)
parameter: none

In order to have a global and unique system clock in the simulation, we maintain a global system virtual clock in the kernel. This clock may be referenced by external application programs such as ping to report performance (e.g., RTT, throughput).

int sys_NCTUNS_seeWhichTunHasPacketToSend(p, uap)
parameter:
	struct sys_NCTUNS_seeWhichTunHasPacketToSend_args /* {
		char	*tunWhichHasPacketToSend;
		int	numberOfTunToTest;
	} */ *uap;

The NCTUns network simulator uses a tunnel network interface to simulate a NIC's behaviors. An outgoing packet, after it is processed, is queued in the tunnel network interface queue. The user-level simulation engine then tries to transmit the packets in the queue. The mechanism that the S.E uses to know which tunnel network interface has a packet to send is polling: by polling each tunnel network interface periodically through this system call, the S.E has enough information to read packets from the kernel and push them into the module streams. Although this system call is provided by the NCTUns network simulator, the simulator does not actually use it to poll each tunnel network interface. Instead, the NCTUns network simulator uses a memory mapping technique: all of the information about the tunnel network interface queues is mapped to a user-level array, so the S.E can directly poll each tunnel interface queue without calling this system call periodically. This improves the performance of the NCTUns network simulator.
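The memory-mapped polling described above can be sketched as a scan over a shared flag array. The layout below is hypothetical (one byte per tunnel interface); the real mapping exposes the tunnel interface queue state, but the access pattern is the same: the S.E reads shared memory directly instead of issuing one system call per poll:

```cpp
#include <cstdint>
#include <vector>

// Sketch: the kernel exports a "has packet" flag per tunnel interface
// into a memory region mapped into the S.E's address space. The S.E
// scans the array directly, avoiding a system call per polling round.
std::vector<int> poll_tunnels(const volatile uint8_t *flags, int ntun)
{
    std::vector<int> ready;
    for (int i = 0; i < ntun; ++i)
        if (flags[i])            // tunnel i has a queued outgoing packet
            ready.push_back(i);
    return ready;
}
```

Each returned index identifies a tunnel interface whose queued packet the S.E should read from the kernel and inject into that node's module stream.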

int sys_NCTUNS_regproc(p, uap)
parameter:
	struct sys_NCTUNS_regproc_args /* {
		pid_t	regp_pid;
	} */ *uap;

For an application program running on a simulated network, the timers it uses should be triggered based on the global system virtual clock mentioned before. Therefore, any application program that runs on a simulated network should first be registered with the kernel through this system call. As the parameter shows, regp_pid is the process ID.

int sys_NCTUNS_mapTable(p, uap)
parameter:
	struct sys_NCTUNS_mapTable_args /* {
		char	action;
		u_long	nid;
		u_long	ifid;
		char	*mac;
		char	*s_port;
	} */ *uap;

For the NCTUns network simulator, we maintain a data structure in the kernel to record all of the information about a simulated network and the usage of each tunnel network interface. This data structure, which we named the mtable, is shown below:

struct if_info {
	SLIST_ENTRY(if_info)	nextif;
	u_long			tunid;
	u_char			mac[6];
	struct ifaddr		*ifa;
};

struct node_info {
	SLIST_HEAD(, if_info)	ifinfo;
	SLIST_ENTRY(node_info)	nextnode;
	u_long			nodeID;
	u_long			numif;		/* number of interfaces */
	struct inpcbhead	inp_NCTUNShead;
	u_short			lastport;
};

The if_info is a data structure used to record information about a tunnel network interface, e.g., the IP and MAC addresses of an interface. The node_info, which constitutes the mtable, is a data structure used to record information about a node on a simulated network. This system call provides four types of services for application programs. The 'action' parameter in sys_NCTUNS_mapTable_args is used to determine which kind of service is requested. The values of this parameter are shown below:

MT_FLUSH:

When the simulation starts, the simulator issues this type of request to the kernel to flush the data structures maintained in the kernel for the simulator, here the mtable.

MT_ADD:

When the simulation starts, the simulator parses a topology file to gather information about the usage of all tunnel network interfaces (e.g., node 1 uses tun2) and the configuration of each tunnel (e.g., the IP of tun2 is 1.0.1.3, the MAC address of tun2 is 1:1:1:3:3:3). Because this information should be maintained in the kernel, the simulator copies it to the kernel through this system call with the MT_ADD flag on.

MT_DISPLAY:

The MT_DISPLAY flag is used for debugging. It does not provide any service to the simulator.
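The mtable lookups used throughout this chapter, such as mt_iptonid(), which maps a simulated IP address to a node ID, can be modelled against the if_info/node_info structures above. The sketch below replaces the kernel's SLIST chains with plain containers and is illustrative only:

```cpp
#include <cstdint>
#include <vector>

// Sketch of if_info/node_info as plain containers (SLISTs in the kernel).
struct IfInfo   { uint32_t tunid; uint32_t ip; };   // ip in host byte order
struct NodeInfo { uint32_t nodeID; std::vector<IfInfo> ifs; };

// Model of mt_iptonid(): find which node owns an interface with this IP.
uint32_t ip_to_nid(const std::vector<NodeInfo> &mtable, uint32_t ip)
{
    for (const NodeInfo &n : mtable)
        for (const IfInfo &i : n.ifs)
            if (i.ip == ip)
                return n.nodeID;
    return 0;   // not a simulated address
}
```

A return value of 0 plays the same role as in the tcp_input()/udp_input() code above: the packet does not belong to a simulated node, so no port-mapping recovery is applied.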

int sys_NCTUNS_checkCallout(p, uap)
parameter: none

This system call triggers each user-level process that uses a system timer. Because the timers that application programs use on a simulated network should be triggered based on the simulator's system virtual time rather than the system's real time, application programs should be registered with the kernel before they start. For example, an application process may use the sleep() system call to sleep. Because this process is executed on a simulated network, the time that sleep() uses should be based on the virtual time rather than the real time.

int sys_NCTUNS_misc(p, uap)
parameter:
	struct sys_NCTUNS_misc_args /* {
		char	action;
		int	value1;
		int	value2;
		u_long	*value3;
	} */ *uap;

This system call sets or gets miscellaneous information about the simulator maintained in the kernel. The 'action' argument in the sys_NCTUNS_misc_args data structure shown above determines the service type, as shown below:

MSC_GETNIDINFO:

If the system call is invoked with this flag on, it returns a node's tunnel configuration to the simulator. For example, a router may have more than one interface attached to it; once this system call is issued with this flag on, all tunnel configurations belonging to that router are returned to the simulator.

MSC_REGPID:

Generally, this type of service is used in a command shell to assign a node ID to an application process's process table entry in the kernel. With this node ID information, the IP and port translations can be performed in the kernel.


Chapter 5 Simulation Engine – S.E

5.1 Architecture of the Simulation Engine

The Simulation Engine (S.E) is the core of the NCTUns network simulator. It provides a module-based platform for users to develop their protocols and integrate them into our simulator. By linking modules in a controlled way, users can easily create an arbitrary network device in our network simulator. Similarly, by connecting network devices together, users can create an arbitrary network topology and simulate it. Figure 5.1 depicts the architecture of the Simulation Engine in the NCTUns network simulator. As this figure shows, the S.E is composed of six components: the Scheduler, Event, Module Manager, Script Interpreter, Dispatcher, and NCTUns APIs, which we already discussed roughly in Chapter 2.

When the simulator starts, the Script Interpreter component parses a topology file. The parsing results from the Script Interpreter are then passed to the M.M component, which creates a simulation network according to this information. After that, the S.E initiates its Scheduler component to start its timer. The Scheduler then polls each tunnel network interface periodically to try to read a packet from it. Upon reading a packet from a tunnel interface, the Scheduler calls the first module in a node and passes the packet to it. After processing the packet, the first module passes the packet on to the second module, the third module, and so on in the same node. Inside the modules, a module can request a service from the S.E through the NCTUns APIs. For example, a module can set up a timer through the timer APIs. When the timer expires, the Scheduler will call a function specified by a function pointer in the timer structure.

FIGURE 5.1: THE ARCHITECTURE OF THE SIMULATION ENGINE.

5.2 Event

The event is the most basic interface in the S.E used to communicate with the Scheduler component. A module can ask for an event from the S.E to tell the Scheduler when to call a handler function. For example, a module may get an event for simulating packet transmissions (at the expiration time, a handler function is called to transmit a packet). As another example, a module may ask for an event from the S.E to implement its polling mechanism. Hence, if a module wants to request a service from the S.E, it should encapsulate its request as an event. The Scheduler will then accept the event and provide the service for it.

In addition, an event also provides a platform for easily building high-level services for modules. As Figure 5.1 shows, the Event component contains three smaller components – Packet, Timer, and Event Manager. These components provide high-level services to modules based on the event interface:


1. For the Packet component, it encapsulates a packet as an event. We name this kind of event an "ePacket". By encapsulating a packet as an ePacket, the packet can be scheduled directly in the Scheduler component.

2. For the Timer component, it provides a timer mechanism for modules. Modules have many occasions to use the timer mechanism. For example, in a MAC (Media Access Control) module, we may ask for a timer to know whether the MAC should retransmit a packet that was sent before (while the MAC sends a packet, it also sets a timer; when the timer expires but no ACK has been received yet, the MAC knows that the outgoing packet was lost and should be retransmitted).

3. For the Event Manager, users can register a periodic event with it. A periodic event is an event that will be processed periodically by the event Scheduler. Under the control of the Event Manager, users can dynamically enable or disable a scheduled event through either a tcsh command interpreter or a GUI environment.

struct event {
	u_int64_t	timeStamp_;
	u_int64_t	perio_;
	NslObject	*calloutObj_;
	int		(NslObject::*memfun_)(struct event *);
	int		(*func_)(struct event *);
	void		*DataInfo_;
	u_char		priority_;
	struct event	*next_ep;
};

The above data structure lists the declaration of the event structure. In this data structure, 'timeStamp_' is the expiration time of an event. Note that there is a global system virtual time maintained in the Scheduler component. Once this virtual time equals 'timeStamp_', the Scheduler component will call the handler function specified in the event structure. The middle fields of the above declaration are used to specify a handler function. From these declarations we can see that the handler function can be either a normal function or a member function of an object. If the handler function is a normal function, 'func_' is used, and the handler function that 'func_' points to should be of the following form:

int <function name>(Event *ep)
{
	..........................
}

Similarly, if the handler function is a member function of an object, both 'calloutObj_' and 'memfun_' are used, and the handler function that they specify should be of the following form:

int <object name>::<function name>(Event *ep)
{
	..........................
}

Above, <function name> is a handler function name and <object name> is an object name. Note that the only parameter of a handler function, no matter whether it is a normal function or a member function, is a pointer to an event – the event that caused the handler function to be called by the Scheduler. For example, if the Scheduler component processes an event B and then calls the handler function specified in that event, the parameter passed to the handler function will be a pointer to B.

The 'perio_' member in the event data structure is used to indicate whether an event is a periodic event. A periodic event is an event that the Scheduler needs to process periodically. If 'perio_' is set to zero in an event, we call it a normal event; otherwise, we call it a periodic event. The 'DataInfo_' is a pointer to an unknown data type; hence an event can have any information attached to it. By setting this value, we can implement any kind of high-level service for modules, such as the three components mentioned above – Packet, Timer, and Event Manager. The 'priority_' is the priority of an event. Presently, it is only used in the Module Binder, which will be discussed in Section 5.3. Finally, the 'next_ep' is a pointer used to chain events in a singly linked list.
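The two handler forms can be dispatched from a single place, which is essentially what the Scheduler does at expiration time. The sketch below mirrors the field names of the event structure but is a simplified stand-in, not the S.E's actual dispatch code; the preference order between the two forms is an assumption:

```cpp
#include <cstdint>

struct Event;

// Minimal stand-in for the S.E's base object class.
struct NslObject { virtual ~NslObject() {} };

// Minimal event carrying both handler forms, as in the struct above.
struct Event {
    uint64_t timeStamp_ = 0;
    NslObject *calloutObj_ = nullptr;
    int (NslObject::*memfun_)(Event *) = nullptr;
    int (*func_)(Event *) = nullptr;
};

// Sketch of the call-out: use the member-function handler if an object
// is attached, otherwise fall back to the plain function handler.
int dispatch(Event *ep)
{
    if (ep->calloutObj_ && ep->memfun_)
        return (ep->calloutObj_->*ep->memfun_)(ep);
    if (ep->func_)
        return ep->func_(ep);
    return -1;   // no handler set
}

// Example plain handler of the func_ form (illustrative).
inline int on_expire(Event *ep) { return (int)(ep->timeStamp_ % 100); }
```

Note how the handler receives a pointer to the very event being processed, matching the manual's statement that the handler's only parameter is the event that caused it to be called.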

5.2.1 Timer

The S.E in the NCTUns network simulator provides a timer mechanism for modules. Its implementation in the S.E is based on the event interface. Briefly speaking, a timer is an event in the S.E. However, the functionality of a timer is more powerful than that of an event. In the following, we list the declaration of the timer structure in the S.E:

class timerObj {
public:
	Event_		*callout_event;
	u_char		busy_;		/* timer state */
	u_char		paused_;
	u_int64_t	rtime_;		/* remaining time */

	timerObj();
	~timerObj();
	virtual void	cancel();
	virtual void	start(u_int64_t time, u_int64_t perio);
	virtual void	pause();
	virtual int	resume(u_int64_t time);
	virtual int	resume();
	..................................................
};

In the above structure, 'callout_event' is a pointer to an event. If any component wants to communicate with the Scheduler, it should use an event interface; through this interface, the component can request a service from the Scheduler. Here the timerObj uses such an event interface, 'callout_event', to communicate with the Scheduler. The 'busy_' is a status variable used to indicate the idle or busy state of a timer. The busy state means that the timer is being scheduled in the Scheduler; otherwise the idle state is set in 'busy_' to indicate an idle timer. The 'paused_' is a flag used to indicate whether a timer in the busy state is paused or not. If a timer is paused, the timer leaves the Scheduler and 'rtime_' is set to the remaining time of the timer. If we want, we can later resume a timer that was paused.

The timer mechanism provides four operations to manipulate a timer:

1. start()
   The start() operation is used to start a timer. Whenever a module asks for a timer from the S.E, it should use this operation to start the timer.

2. cancel()
   The cancel() operation is used to cancel a timer that is in the busy state. There are many occasions to use this operation in a module. For example, when a packet is sent by a sender, a timer is set. If an ACK is then received by the sender before the timer expires, the timer should be canceled.

3. pause()
   Whenever a timer is in the Scheduler, pause() can be used to pause the active timer for a moment. Later on, another operation, resume(), can be used to resume the timer.

4. resume()
   The resume() operation is used to resume a paused timer. In the S.E, the timer mechanism provides two types of resume operations for a module. If a timer is paused, resume(void), which has no parameter, can be used to simply resume the timer. Otherwise, resume(extra-time), which has a parameter, can be used. In addition to resuming a paused timer, resume(extra-time) extends the timer's expiration time: the parameter is the extra time, which extends the original expiration time to extra-time + original expiration time.
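The interplay of these four operations can be illustrated with a minimal stand-in for timerObj that tracks only the virtual-time bookkeeping. This is a sketch of the state machine described above, not the real class (which schedules a callout_event in the Scheduler); all names and the tick unit are illustrative:

```cpp
#include <cstdint>

// Minimal model of the timer state machine, in virtual-time "ticks".
class MiniTimer {
public:
    bool busy_ = false, paused_ = false;
    uint64_t expire_ = 0;   // absolute expiration time
    uint64_t rtime_ = 0;    // remaining time recorded at pause()

    void start(uint64_t now, uint64_t dur) {
        busy_ = true; paused_ = false; expire_ = now + dur;
    }
    void cancel() { busy_ = false; paused_ = false; }
    void pause(uint64_t now) {
        if (busy_ && !paused_) { paused_ = true; rtime_ = expire_ - now; }
    }
    // resume(): re-arm with the remaining time; the extra-time variant
    // additionally extends the expiration, as described above.
    void resume(uint64_t now, uint64_t extra = 0) {
        if (paused_) { paused_ = false; expire_ = now + rtime_ + extra; }
    }
};
```

For instance, a timer started at virtual time 100 for 50 ticks, paused at 120, then resumed at 140 with 10 extra ticks, ends up expiring at 140 + 30 + 10 = 180.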

The timer mechanism is extensible. If a timer with more functionality than the basic timer is needed, users can extend the current timer mechanism accordingly.
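The four operations above amount to a small state machine over the ‘busy_’, ‘paused_’, and ‘rtime_’ variables. The following self-contained sketch mimics that behavior with hypothetical names (SimTimer, expire_, the time parameters); it illustrates the semantics described here and is not the NCTUns ‘timerObj’ source, which schedules a callout_event in the Scheduler instead:

```cpp
#include <cstdint>

/* A minimal model of the timer state machine described in the text.
 * All names here are hypothetical illustrations.                     */
struct SimTimer {
    bool     busy_   = false;  /* timer is scheduled in the Scheduler   */
    bool     paused_ = false;  /* timer temporarily left the Scheduler  */
    uint64_t expire_ = 0;      /* absolute expiration time (ticks)      */
    uint64_t rtime_  = 0;      /* remaining time saved by pause()       */

    void start(uint64_t now, uint64_t duration) {
        busy_ = true; paused_ = false;
        expire_ = now + duration;
    }
    void cancel() { busy_ = false; paused_ = false; }

    void pause(uint64_t now) {
        if (!busy_ || paused_) return;
        paused_ = true;
        rtime_  = expire_ - now;       /* remember the remaining time   */
    }
    /* resume(void)-style: re-schedule with just the remaining time     */
    void resume(uint64_t now) { resume(now, 0); }
    /* resume(extra-time): also extend the expiration by an extra amount */
    void resume(uint64_t now, uint64_t extra) {
        if (!paused_) return;
        paused_ = false;
        expire_ = now + rtime_ + extra;
    }
};
```

For example, a timer started at time 0 with duration 100 and paused at time 40 has 60 ticks remaining; resuming it at time 50 with 10 extra ticks moves its expiration to 120.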

5.2.2 Packet

A Packet-Object is a data structure used to encapsulate user data in a well-known format in the NCTUns network simulator. All user data in a simulated network should be encapsulated as a Packet-Object. Through this Packet-Object interface, all of the modules in a simulated network can process the data. Figure 5.2.1 depicts how an IEEE 802.11 frame is encapsulated in the Packet-Object format.


FIGURE 5.2.1: DATA IS ENCAPSULATED AS A PACKET-OBJECT.

From this figure, we can see that the user data is encapsulated as an IEEE 802.11 frame. In a real-world network, this IEEE 802.11 frame would be stored in a contiguous space. But in our network simulator, the whole IEEE 802.11 frame is stored in two different, non-contiguous memory spaces – the IEEE 802.11 header and Ethernet header are stored in a PT_DATA memory buffer and the whole IP datagram is stored in a PT_SDATA memory buffer. The PT_DATA or PT_SDATA shown in the figure is a memory buffer, which we call a “Packet Buffer” (pbuf). The pbuf is the most basic unit in a Packet-Object used to store user data. The buffer size of each pbuf is 128 bytes by default. In the NCTUns network simulator, there are in total three types of pbuf in a Packet-Object – the PT_DATA, PT_SDATA, and PT_INFO pbufs. These three types of pbuf all have their own special headers. But, besides their special headers, they also maintain a common header.


As mentioned before, the event interface can be used to construct high-level services for modules. When a Packet-Object is attached to an event, we call the resulting data structure an “Event-Packet” (ePacket). Briefly speaking, an Event-Packet is an event, but with more functionality than a plain event. If a Packet-Object is to be passed to a module, it should be attached to an event to form an ePacket, because the ePacket is the only data type used to communicate between modules, or between a module and the Scheduler.

A Packet-Object may contain all types of pbuf, as Figure 5.2.2 shows. Each Packet-Object is allowed to contain only one PT_DATA pbuf and one PT_SDATA pbuf. However, a Packet-Object is allowed to contain more than one PT_INFO pbuf, because the PT_INFO pbuf is used to store not user data but information about the user data.

A module is not permitted to access a pbuf in a Packet-Object directly. If a module wants to access user data stored in a Packet-Object, the Packet-Object provides packet-related APIs for the module to do so. These APIs are listed in Appendix B.

5.2.2.1 Packet Buffer (pbuf)

The packet buffer (pbuf) is the most basic unit in a Packet-Object. The size of each pbuf is 128 bytes by default. Figure 5.2.2 depicts the three types of pbuf: the PT_DATA, PT_SDATA, and PT_INFO pbufs. From this figure,


we can see that all of the types of pbuf have their special headers. However, they also have a common header, which is shown below:

    struct p_hdr {
        struct pbuf  *ph_next;
        short         ph_type;
        int           ph_len;
        char         *ph_data;
    };

    #define p_next  p_hdr.ph_next
    #define p_type  p_hdr.ph_type
    #define p_len   p_hdr.ph_len
    #define p_dat   p_hdr.ph_data

The ‘p_next’ is a pointer to the next pbuf in a chain. Note that only PT_DATA and PT_INFO pbufs can be chained together; a PT_SDATA pbuf cannot be chained with PT_DATA or PT_INFO pbufs. As Figure 5.2.1 shows, the PT_SDATA pbuf is instead pointed to by a field in the PT_DATA pbuf header.

The ‘p_len’ specifies how much data is stored in a pbuf. Its value cannot exceed the whole length of the pbuf. From Figure 5.2.2, we can see that the whole length of a pbuf is 128 bytes, and the usable space depends on the type of pbuf. For a PT_DATA pbuf, the maximal usable space is 128 bytes – sizeof(struct p_hdr) – sizeof(struct s_exthdr) = 98 bytes. For a PT_INFO pbuf, the maximal usable space is 128 bytes – sizeof(struct p_hdr) = 114 bytes, and for a PT_SDATA pbuf, the maximal usable space is 128 bytes – sizeof(struct p_hdr) – sizeof(struct s_exthdr) = 98 bytes.

The ‘p_dat’ pointer points into the pbuf. As Figure 5.2.2 shows, it points to the starting address of the usable space in the pbuf. Note that the ‘p_dat’ in a PT_SDATA pbuf does not point into its own pbuf; instead, it points to a cluster buffer, which will be discussed later.

The last member of the common header, ‘p_type’, specifies the type of a pbuf. Table 5.2.1 shows all of the pbuf types. Note that the last type, “PT_SDATA | PT_SINFO”, is in fact a PT_SDATA pbuf. The only difference is that, besides storing user data, a PT_SDATA|PT_SINFO pbuf is also used to store packet information, whereas a plain PT_SDATA pbuf is used only to store user data. In the latter case, if packet information is needed, it should be stored in a PT_SINFO pbuf.
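The usable-space figures quoted above follow from the 32-bit sizes of the two headers: sizeof(struct p_hdr) is 4 + 2 + 4 + 4 = 14 bytes and sizeof(struct s_exthdr) is 8 + 4 + 4 = 16 bytes. The small sketch below redoes that arithmetic with the sizes written out as constants; on a platform with padding or 64-bit pointers the real sizeof values would differ from the manual's numbers:

```cpp
/* The pbuf size arithmetic from the text, with the 32-bit header
 * sizes written out as constants (actual sizeof values may differ
 * with structure padding or on 64-bit platforms).                  */
const int PBUF_SIZE   = 128;  /* total pbuf length                    */
const int P_HDR_SIZE  = 14;   /* ptr(4) + short(2) + int(4) + ptr(4)  */
const int EXTHDR_SIZE = 16;   /* u_int64_t(8) + u_int32_t(4) + ptr(4) */

/* usable space of a PT_INFO pbuf: only the common header is removed  */
int pt_info_space()  { return PBUF_SIZE - P_HDR_SIZE; }               /* 114 */
/* usable space of a PT_DATA / PT_SDATA pbuf: common + special header */
int pt_data_space()  { return PBUF_SIZE - P_HDR_SIZE - EXTHDR_SIZE; } /* 98  */
```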

p_type                 Description

PT_DATA                This type of pbuf is used to store user data and will be
                       duplicated if the pkt_copy() API is used to duplicate a
                       packet.

PT_SDATA               This pbuf is used to store user data and won't be
                       duplicated if pkt_copy() is used to duplicate a packet.
                       Instead, the member ‘p_refcnt’ is increased to indicate
                       how many Packet-Objects share this pbuf. The purpose of
                       this type of pbuf is to save memory space and avoid data
                       copying.

PT_INFO                This type of pbuf is not used to store user data.
                       Instead, it stores information about the user data.

PT_SINFO | PT_SDATA    User data is not stored in the usable space of the
                       PT_SDATA pbuf; it is stored in a cluster buffer. Hence,
                       if this usable space is used to store packet
                       information, this flag is set.

TABLE 5.2.1: THE VALUES OF P_TYPE.

FIGURE 5.2.2: THE PBUF TYPES.


5.2.2.2 PT_DATA pbuf

The PT_DATA pbuf is used to store user data. As mentioned before, its maximal usable size is 128 bytes – sizeof(struct p_hdr) – sizeof(struct s_exthdr) = 98 bytes. However, this usable size applies only to pbufs with the PF_EXTEND flag unset. If a pbuf of this type has the PF_EXTEND flag set, a cluster buffer, whose size is 1024 bytes by default, is attached to it. PF_EXTEND is for data whose size exceeds 98 bytes but is destined to be stored in a PT_DATA pbuf. The special header of a PT_DATA pbuf is shown in Figure 5.2.3 and its data structure is shown below:

    struct s_exthdr {
        u_int64_t   com1;
        u_int32_t   com2;
        char       *com3;
    };

    #define p_pid      p_data.EHDR.exthdr.com1
    #define p_tlen     p_data.EHDR.exthdr.com2
    #define p_sptr     p_data.EHDR.exthdr.com3
    #define p_flags    p_data.EHDR.e_data.DHDR.flags
    #define p_extclstr p_data.EHDR.e_data.DHDR.d_data.DHDR0.ext

The ‘p_pid’ is a packet ID. Every Packet-Object has its own unique packet ID. However, there is one kind of Packet-Object that does not follow this rule: the duplicate Packet-Object, i.e., a Packet-Object duplicated from an existing one. A Packet-Object duplicated from another Packet-Object has the same packet ID as the original Packet-Object. The ‘p_tlen’ is the total length of the user data encapsulated in a Packet-Object. This length includes the data stored in the PT_DATA and PT_SDATA pbufs. Note that data stored in a PT_INFO pbuf is not added to ‘p_tlen’. The ‘p_sptr’ is a pointer to a PT_SDATA pbuf; if a Packet-Object contains a PT_SDATA pbuf, ‘p_sptr’ points to it. As for ‘p_flags’, there are five independent values for it, which are shown in Table 5.2.2. The last member, ‘p_extclstr’, is available only if PF_EXTEND is set in ‘p_flags’. If PF_EXTEND is set in a Packet-Object, the Packet-Object generates a cluster buffer and ‘p_extclstr’ points to it, just as Figure 5.2.3 shows. The size of a cluster buffer in a PT_DATA pbuf is 1024 bytes by default. The reason for introducing a cluster buffer mechanism in a PT_DATA pbuf is to handle the case where the size of the stored data exceeds the usable space of a PT_DATA pbuf.

p_flags          Description

PF_SEND          Indicates that a Packet-Object is an outgoing packet.

PF_RECV          Indicates that a Packet-Object is an incoming packet.

PF_WITHSHARED    Set if a Packet-Object contains a PT_SDATA pbuf.

PF_WITHINFO      Set if a Packet-Object contains one or more PT_INFO pbufs.

PF_EXTEND        Set if the size of the user data to be stored in a PT_DATA
                 pbuf is larger than the usable space of the PT_DATA pbuf;
                 the data is then stored in an extended cluster. (See Figure
                 5.2.3.)

TABLE 5.2.2: THE VALUES OF P_FLAGS.

FIGURE 5.2.3: THE PT_DATA PBUF.


5.2.2.3 PT_SDATA pbuf

The PT_SDATA pbuf is also used to store user data. However, as its name indicates, the PT_SDATA pbuf is dedicated to storing data that is shared between Packet-Objects. The PT_SDATA pbuf is introduced in the packet mechanism to save memory space and avoid data copying. The right-hand side of Figure 5.2.1 shows a PT_SDATA pbuf used to store data. From this figure, we can clearly see that when a PT_SDATA pbuf is created, a cluster buffer is always attached to it. The length of the cluster buffer depends on the size of the user data to be stored in it. In the following, we list the special header of a PT_SDATA pbuf:

    struct s_exthdr {
        u_int64_t   com1;
        u_int32_t   com2;
        char       *com3;
    };

    #define p_pid     p_data.EHDR.exthdr.com1
    #define p_refcnt  p_data.EHDR.exthdr.com2
    #define p_cluster p_data.EHDR.exthdr.com3

The ‘p_pid’ is a packet ID. Each Packet-Object maintains a packet ID. If a Packet-Object is duplicated from another Packet-Object, the duplicate has the same packet ID as the original Packet-Object. The ‘p_refcnt’ is a reference count used to indicate how many Packet-Objects share this pbuf. The ‘p_cluster’ is a pointer to a cluster buffer. When a PT_SDATA pbuf is created, a cluster buffer is generated and attached to it.

The maximal usable size of a PT_SDATA pbuf is unlimited, which differs from that of a PT_DATA pbuf (whose maximal usable size is 1024 bytes by default). This is because user data to be stored in a PT_SDATA pbuf is stored in a cluster buffer, whose size depends on the size of the user data.

From Figure 5.2.1 we can see that the cluster buffer of the PT_SDATA pbuf has a reserved space. This reserved space is 98 bytes long, which is the size of the total usable space of a PT_DATA pbuf. The purpose of this reserved space is to support the pkt_aggregate() packet API. User data encapsulated as a Packet-Object may be divided into two parts stored in different types of pbuf. Just as Figure 5.2.1 shows, a whole IEEE 802.11 frame is divided into two parts stored in a PT_DATA and a PT_SDATA pbuf, respectively. Hence a contiguous frame is divided into more than one fragment, and these fragments are stored separately in different pbufs. Doing this results in a discontiguous packet, which in some situations can cause problems. For example, if users want to access the whole data encapsulated in a Packet-Object but stored in different pbufs, the simulator will crash when they access across a discontinuity. We can clearly see this situation in Figure 5.2.1: the IEEE 802.11 frame encapsulated as a Packet-Object is divided into two fragments (one for the IEEE 802.11 header plus Ethernet header, and one for the IP datagram). Users can access this frame via the pkt_get() packet API supported by the NCTUns network simulator, which returns the head of the frame. In the situation of Figure 5.2.1, users can directly access only the data stored in the PT_DATA pbuf through this API (in this case the IEEE 802.11 header and Ethernet header). If users try to access data not stored in the PT_DATA pbuf (in this case the IP datagram), the simulator will crash, because the whole IEEE 802.11 frame is not stored in a contiguous space. The NCTUns network simulator therefore provides another API for users to directly access the whole user data – pkt_aggregate().

If users want to directly access the whole data encapsulated in a Packet-Object, they can first use the pkt_aggregate() API to obtain the whole data in a contiguous space. What the pkt_aggregate() API does is copy the user data stored in the PT_DATA pbuf into the reserved space of the cluster buffer of the PT_SDATA pbuf. This is why the length of the reserved space in the cluster buffer of a PT_SDATA pbuf equals that of the usable space of a PT_DATA pbuf. However, this API incurs a performance overhead: whenever it is used, it copies data from a PT_DATA pbuf to a PT_SDATA pbuf, which lowers simulation performance.
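The role of the reserved space can be pictured with a small self-contained model. All names below (ModelPacket, data_frag, aggregate) are hypothetical illustrations, not the real Packet class: the cluster is allocated with a 98-byte reserved prefix, and aggregation copies the PT_DATA fragment into the end of that prefix so the whole frame becomes contiguous.

```cpp
#include <cstring>
#include <string>
#include <vector>

/* Hypothetical model of a two-fragment packet (header in a PT_DATA
 * pbuf, payload in a PT_SDATA cluster with a reserved prefix).      */
struct ModelPacket {
    static const int RESERVED = 98;  /* = usable space of a PT_DATA pbuf */
    std::string       data_frag;     /* fragment held in the PT_DATA pbuf */
    std::vector<char> cluster;       /* PT_SDATA cluster: reserved + payload */
    int               payload_off;   /* where the shared payload starts */

    ModelPacket(const std::string &hdr, const std::string &payload)
        : data_frag(hdr),
          cluster(RESERVED + payload.size()),
          payload_off(RESERVED) {
        memcpy(&cluster[RESERVED], payload.data(), payload.size());
    }

    /* Mimics pkt_aggregate(): copy the PT_DATA fragment into the end
     * of the reserved prefix so the whole packet is contiguous.       */
    char *aggregate() {
        int off = payload_off - (int)data_frag.size();
        memcpy(&cluster[off], data_frag.data(), data_frag.size());
        return &cluster[off];
    }
};
```

Because the reserved prefix is exactly as large as a PT_DATA pbuf's usable space, any fragment that fits in a PT_DATA pbuf is guaranteed to fit in front of the payload.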

5.2.2.4 PT_INFO pbuf

The PT_INFO pbuf is not used to store user data. Instead, it stores information about the stored data, i.e., a description of the data. For example, a receiving host in a simulated network may need to know from which host an incoming packet originates; this information may be stored in a PT_INFO pbuf by the originating host. The PT_INFO pbuf has no special header; it has only the common header. Hence the usable size of this type of pbuf is 114 bytes (128 bytes – sizeof(struct p_hdr) = 114 bytes).

The PT_INFO pbuf uses a special way to store packet information: it divides its usable space into many small blocks. By default, each block is 56 bytes long (6 bytes for the information name and 50 bytes for the packet information itself). Hence each PT_INFO pbuf can store only two pieces of packet information (114/56 = 2). Figure 5.2.4 shows the architecture of a PT_INFO pbuf; we can clearly see that the size of each block is 50 bytes plus 6 bytes. Any packet information stored in a PT_INFO pbuf should be given a unique name. The length of this unique name is at most 5 bytes, and the last byte is ‘\0’. This name is used by a Packet-Object to uniquely identify a piece of packet information in a PT_INFO pbuf. From this figure, we can also see that all the PT_INFO pbufs are chained together. As mentioned above, each PT_INFO pbuf can store only two pieces of packet information. Hence, if more than two pieces of packet information need to be stored in a Packet-Object, the Packet-Object generates additional PT_INFO pbufs and chains them together. As a result, there is no limit on the number of PT_INFO pbufs a Packet-Object can contain.
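The fixed block layout described above can be sketched in a few lines. The names here (InfoBlock, InfoStore, add, get) are hypothetical stand-ins for the real pkt_addinfo()/pkt_getinfo() machinery; the sketch only models the 6-byte name (up to 5 characters plus ‘\0’) and 50-byte information fields, with the chain of PT_INFO pbufs flattened into one list:

```cpp
#include <cstring>
#include <list>

/* Hypothetical model of one 56-byte PT_INFO block. */
struct InfoBlock {
    char name[6];   /* up to 5 characters + '\0'    */
    char info[50];  /* the packet information itself */
};

struct InfoStore {
    std::list<InfoBlock> blocks;  /* chained PT_INFO pbufs, flattened */

    /* Mimics pkt_addinfo(): reject over-long names or information. */
    bool add(const char *name, const void *info, int len) {
        if (strlen(name) > 5 || len > 50) return false;
        InfoBlock b{};
        strcpy(b.name, name);
        memcpy(b.info, info, len);
        blocks.push_back(b);
        return true;
    }
    /* Mimics pkt_getinfo(): search all blocks by unique name. */
    const char *get(const char *name) {
        for (auto &b : blocks)
            if (strcmp(b.name, name) == 0) return b.info;
        return nullptr;           /* like pkt_getinfo() returning NULL */
    }
};
```

A lookup with an unknown name returns a null pointer, matching the NULL return of pkt_getinfo() on failure.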

FIGURE 5.2.4: THE PT_INFO PBUF.

5.2.3 Event Manager

The Event Manager is a component in the S.E used to manage all static events. A static event is an event that does not originate from any module; instead, it originates from an event table at the start of a simulation. The event table records all events registered with it by users and keeps all of the information about every static event. In the initial state of a simulation, the S.E reads the event table and dynamically generates a static event for each entry, handing it to the Scheduler. After the initial state, users can dynamically enable or disable a static event being scheduled in the Scheduler component through the dispatcher component in the S.E. The data structure of the event table is listed below:

    struct event-table {
        char       *name;   /* event name */
        char        flag;   /* the state of the current event: Active/Inactive */
        u_int64_t   period; /* periodic time to call the handler */
        void       *data;   /* the parameter passed to the handler */

        /* call-out handler: either a normal function
         * or a member function of an object */
        NslObject  *calloutObj;
        int (NslObject::*memfun)(Event *);
        int (*func)(Event *);
    };

5.3 Scheduler

The Scheduler is the most important component in the S.E. It maintains a system-wide global virtual time. Every event, no matter whether generated by a module or by the event table, is triggered based on this global virtual time. According to Figure 5.1, the Scheduler component implements two small mechanisms, which we illustrate with the following piece of code: for(currentTime_=0; currentTime_

    (void)memcpy(eh->ether_dhost, dst_mac_addr, 6);
    (void)memcpy(eh->ether_shost, src_mac_addr, 6);
    eh->ether_type = ETHERTYPE_IP;

    /* Store 500 bytes of data in the PT_DATA pbuf */
    ptr = mypkt->pkt_malloc(500);
    (void)memcpy(ptr, data2, 500);
    }


FIGURE B.2: AN EXAMPLE OF PKT_MALLOC().


API Name: pkt_prepend(), pkt_sprepend()
Class Name: Packet
Synopsis:
    virtual int pkt_prepend(char *data, int length)
    virtual int pkt_sprepend(char *data, int length)
Return Value: The return value of both pkt_prepend() and pkt_sprepend() is 1 for success and -1 for failure.
Description: The pkt_prepend() and pkt_sprepend() functions copy the data specified by the ‘data’ parameter into a Packet-Object. The difference between them is that pkt_prepend() stores the data into the PT_DATA pbuf of a Packet-Object, whereas pkt_sprepend() stores the data into the PT_SDATA pbuf. If the ‘length’ parameter of pkt_prepend() is larger than the “PCluster” (the maximum amount of data that can be stored in a PT_DATA pbuf), the call fails and returns -1. This limitation does not apply to pkt_sprepend(), which has no maximal storage size limit. However, pkt_sprepend() has a limitation of its own: if the Packet-Object has no PT_SDATA pbuf attached to it, the call fails and returns -1. Although pkt_prepend() is somewhat like pkt_malloc(), there is a difference between them: pkt_malloc() only allocates usable space in a PT_DATA pbuf and does not copy data into the allocated space, whereas pkt_prepend(), besides allocating usable space in a PT_DATA pbuf, copies data into it.


API Name: pkt_seek()
Class Name: Packet
Synopsis:
    virtual int pkt_seek(int offset)
Return Value: The pkt_seek() function always returns 1 to the caller.
Description: The pkt_seek() function strips data from a Packet-Object. When this function is used, the data in the PT_DATA pbuf of the Packet-Object is stripped off first. When there is no data left in the PT_DATA pbuf, pkt_seek() tries to strip data from the PT_SDATA pbuf, if one exists in the Packet-Object. The ‘offset’ parameter indicates the length of data to strip off. Its value can be either positive or negative: for a positive value, data in the Packet-Object is stripped off; for a negative value, the length of the Packet-Object is increased instead.


API Name: pkt_get(), pkt_sget()
Class Name: Packet
Synopsis:
    virtual char *pkt_get(void)
    virtual char *pkt_get(int offset)
    virtual char *pkt_sget(void)
Return Value: The return value of pkt_get() is a pointer to the beginning of the data stored in the PT_DATA pbuf. Similarly, pkt_sget() returns a pointer to the caller, but the pointer points to the beginning address of the data stored in the PT_SDATA pbuf if the Packet-Object contains one. Otherwise, a NULL value is returned.
Description: The pkt_get() and pkt_sget() APIs access data stored in a packet. The pkt_get() API can access data stored either in the PT_DATA or the PT_SDATA pbuf. If there is no data in the PT_DATA pbuf, pkt_get() tries to access the data in the PT_SDATA pbuf; it does so only if the Packet-Object contains a PT_SDATA pbuf. The ‘offset’ parameter of pkt_get() indicates the offset at which to access the Packet-Object. For example, suppose that a frame consists of a 14-byte Ethernet header and a 100-byte IP datagram. In this case, pkt_get() returns a pointer to the Ethernet header, but pkt_get(14) returns a pointer to the IP datagram instead. The function of pkt_sget() is the same as that of pkt_get(); the only difference is that pkt_sget() accesses only the data in a PT_SDATA pbuf, if the packet has one attached, whereas pkt_get() can access both the PT_DATA and PT_SDATA pbufs.
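The offset behavior in the Ethernet-header example above is plain pointer arithmetic over the stored data. The self-contained sketch below mimics it with hypothetical names (ToyPacket, a single in-pbuf buffer); it is not the real Packet class, which may also cross into the PT_SDATA pbuf:

```cpp
#include <cstring>

/* A toy frame: a 14-byte Ethernet header followed by the IP datagram,
 * stored contiguously as in a single PT_DATA pbuf.                    */
struct ToyPacket {
    char buf[114];
    /* Mimics pkt_get(void): pointer to the head of the stored data.   */
    char *get() { return buf; }
    /* Mimics pkt_get(offset): pointer 'offset' bytes into the data.   */
    char *get(int offset) { return buf + offset; }
};
```

With a 14-byte header at the front, get(14) lands exactly on the start of the IP datagram.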


API Name: pkt_sattach(), pkt_sdeattach()
Class Name: Packet
Synopsis:
    virtual char *pkt_sattach(int length)
    virtual int pkt_sdeattach(void)
Return Value: On success, the pkt_sattach() function returns a pointer to the beginning address of a usable memory space; otherwise, a NULL value is returned. The pkt_sdeattach() returns 1 for success and -1 for failure.
Description: The pkt_sattach() function attaches a PT_SDATA pbuf to a Packet-Object. When a Packet-Object is created, only a PT_DATA pbuf is involved. If a shared memory space is needed, pkt_sattach() can be used to attach a shared PT_SDATA pbuf to the Packet-Object. The ‘length’ parameter of pkt_sattach() indicates the cluster size of the PT_SDATA pbuf. A PT_SDATA pbuf does not store data in its own pbuf; instead, it allocates a separate cluster to store a large amount of data. The size of this cluster is not fixed; it is specified by the ‘length’ parameter of pkt_sattach(). The pkt_sdeattach() function detaches a PT_SDATA pbuf from a Packet-Object. The memory space used by the detached PT_SDATA pbuf is released only when the reference count in the PT_SDATA pbuf reaches zero.


API Name: pkt_addinfo(), pkt_saddinfo()
Class Name: Packet
Synopsis:
    int pkt_addinfo(char *iname, char *info, int length)
    int pkt_saddinfo(char *iname, char *info, int length)
Return Value: The return value is 1 for success and -1 for failure.
Description: The pkt_addinfo() copies the packet information specified in ‘info’ into a PT_INFO pbuf of a Packet-Object. Packet information is a description of the data stored in the PT_DATA and PT_SDATA pbufs. For example, in a wireless network a PT_INFO pbuf might be used to store the frequency used to transmit the data. If the Packet-Object contains no PT_INFO pbuf, or the existing PT_INFO pbuf does not have enough space for the request, a new PT_INFO pbuf is created by the pkt_addinfo() function. The pkt_saddinfo() function also copies the packet information specified by the ‘info’ parameter into a Packet-Object, but the destination pbuf that pkt_saddinfo() copies to is a PT_SDATA pbuf, not a PT_INFO pbuf. In a PT_SDATA pbuf, the transmitted data is stored in a cluster, not in the pbuf itself; the space of the PT_SDATA pbuf is instead used to store packet information, which differs from a PT_INFO pbuf. The packet information stored in a PT_SDATA pbuf is shared information: because a PT_SDATA pbuf may be shared by more than one Packet-Object, all Packet-Objects that share the same PT_SDATA pbuf have the same property described by the packet information. Every piece of packet information stored in a PT_INFO pbuf should have a name, specified in the ‘iname’ parameter. The length of the packet information is specified in the ‘length’ parameter. Note that the length of a piece of packet information should not exceed 50 bytes, because each PT_INFO pbuf is divided into small blocks to store packet information, each of which holds 50 bytes by default.


API Name: pkt_getinfo(), pkt_sgetinfo()
Class Name: Packet
Synopsis:
    char *pkt_getinfo(char *iname)
    char *pkt_sgetinfo(char *iname)
Return Value: On success, both functions return a pointer to a piece of packet information; otherwise, a NULL value is returned.
Description: The pkt_getinfo() function searches all the PT_INFO pbufs in a Packet-Object and finds the packet information specified by the ‘iname’ parameter. The pkt_sgetinfo() also finds the packet information specified by ‘iname’, but it searches for the desired packet information in the PT_SDATA pbuf, not in a PT_INFO pbuf. Each piece of packet information should not exceed 50 bytes, because each PT_INFO pbuf is divided into small blocks to store packet information, each of which holds 50 bytes by default.


API Name: pkt_getlen(), pkt_getpid(), pkt_getpbuf(), pkt_getflags(), rt_gateway()
Class Name: Packet
Synopsis:
    virtual int         pkt_getlen(void)
    u_int64_t           pkt_getpid(void)
    inline struct pbuf *pkt_getpbuf(void)
    inline short        pkt_getflags(void)
    inline u_long       rt_gateway(void)
Description: The pkt_getlen() function gets the total packet length of a Packet-Object. This total length covers only the data stored in the PT_DATA and PT_SDATA pbufs; the data stored in a PT_INFO pbuf is not treated as normal data, so pkt_getlen() does not include it. The pkt_getpid() returns a packet ID. Every Packet-Object, excluding duplicate Packet-Objects, has its own packet ID. If a Packet-Object is duplicated, the duplicate has the same packet ID as the original. For example, suppose Packet-Object A has ID 100. If Packet-Object B is duplicated from Packet-Object A, then the packet IDs of A and B are the same. The pkt_getpbuf() returns a pointer to the PT_DATA pbuf of a Packet-Object. With the address of the PT_DATA pbuf, we can reach all of the pbufs in the Packet-Object. The pkt_getflags() returns the value of the ‘p_flags’ field in the PT_DATA pbuf; for the possible values of this field, please see the packet introduction. The rt_gateway() gets the gateway of a Packet-Object. Each Packet-Object carries gateway information, which may be used by a routing module to specify the next hop of the Packet-Object.


API Name: pkt_setflow(), pkt_setgw()
Class Name: Packet
Synopsis:
    inline void pkt_setflow(short flow)
    inline void pkt_setgw(u_long gw)
Description: The pkt_setflow() marks a Packet-Object as an outgoing or incoming packet. For an outgoing packet, PF_SEND should be set in the ‘flow’ parameter; conversely, PF_RECV should be set for an incoming packet. The pkt_setgw() sets the gateway of a Packet-Object. Each Packet-Object carries gateway information, which may be used by a routing module to specify the next hop of the Packet-Object.


API Name: pkt_aggregate()
Class Name: Packet
Synopsis:
    inline char *pkt_aggregate(void)
Return Value: The return value of the pkt_aggregate() function is a pointer to the data stored in a Packet-Object.
Description: The pkt_aggregate() gets the data stored in a Packet-Object. It is somewhat like the pkt_get() or pkt_sget() function, but pkt_get() can access only the data stored in either the PT_DATA or the PT_SDATA pbuf, and pkt_sget() can access only the PT_SDATA pbuf. If the data is divided into two parts stored in the PT_DATA and PT_SDATA pbufs, respectively, neither pkt_get() nor pkt_sget() can access the whole data in the Packet-Object. The pkt_aggregate() function has no such limitation: if data is stored separately in the PT_DATA and PT_SDATA pbufs, pkt_aggregate() copies the data of both pbufs into a contiguous space to form contiguous data. A PT_SDATA pbuf has a cluster buffer to store a large amount of data. Besides the space used to store data, the cluster also contains a usable space whose length equals the usable space of a PT_DATA pbuf, i.e., 98 bytes. When pkt_aggregate() is called, it tries to copy the data stored in the PT_DATA pbuf into this usable space of the PT_SDATA pbuf, so that the data ends up in a contiguous space. Note that if a Packet-Object's PF_EXTEND flag is set, pkt_aggregate() does not copy the data in the PT_DATA pbuf to the PT_SDATA pbuf; instead, only the data stored in the PT_SDATA pbuf is accessed. This is because with the PF_EXTEND flag set, the PT_DATA pbuf of a Packet-Object uses a cluster buffer to store its data instead of its 98-byte usable space.


API Name: pkt_setHandler(), pkt_callout()
Class Name: Packet
Synopsis:
    inline int pkt_setHandler(NslObject *obj, int (NslObject::*meth_)(Event_ *))
    inline int pkt_setHandler(int (*func_)(Event_ *))
    inline int pkt_callout(ePacket_ *pkt)
Return Value: For the pkt_setHandler() function, the return value is always 1. For the pkt_callout() function, 1 is returned on success; otherwise, -1 is returned.
Description: The pkt_setHandler() function sets a handler function in a Packet-Object. Once this handler function is set, a module can cause it to be called at any time. For example, if a module finds that some error occurred in a Packet-Object, it can call the handler function set in the Packet-Object to deal properly with the packet. A handler function should have one of the following forms, either a member function of a module class or a normal function:

    int SomeModule::handler(ePacket_ *pkt) { ... }
    int handler(ePacket_ *pkt) { ... }

The ‘pkt’ parameter above is a pointer to an ePacket (Event-Packet). When a handler function is called, the ePacket that specifies the handler is passed as the parameter of the handler function. The pkt_callout() function causes the handler function specified in a Packet-Object to be called. If a handler function is set in the Packet-Object, pkt_callout() calls it out; otherwise, nothing is done and -1 is returned.


Appendix C – NCTUns APIs

API Name: str_to_macaddr(), macaddr_to_str()
Synopsis:
    #include
    void str_to_macaddr(char *str, u_char *mac)
    void macaddr_to_str(u_char *mac, char *str)
Description: The str_to_macaddr() function forms a 48-bit IEEE 802 address in numerical representation by parsing the input string, which contains the IEEE 802 address in textual representation. The ‘str’ parameter is an IEEE 802 address in textual representation; the ‘mac’ parameter is a pointer to the 6-byte space that is filled with the numerical representation of the address. The macaddr_to_str() function forms an IEEE 802 MAC address of the form xx:xx:xx:xx:xx:xx, the textual representation obtained by converting the numerical representation of the 48-bit IEEE 802 MAC address. The ‘mac’ parameter is a pointer to a 48-bit IEEE 802 MAC address in numerical representation; ‘str’ is a pointer to a buffer in which the textual representation of the MAC address is stored.


API Name: ipv4addr_to_str(), str_to_ipv4addr()
Synopsis:
    void ipv4addr_to_str(u_long ipv4addr, char *str)
    void str_to_ipv4addr(char *str, u_long ipv4addr)
Description: The ipv4addr_to_str() function forms an IPv4 address of the form xx.xx.xx.xx, the textual representation obtained by converting the numerical representation of the IPv4 address. The ‘ipv4addr’ parameter is an unsigned long integer holding the numerical representation of the 32-bit IPv4 address; the ‘str’ parameter is a pointer to the space where the resulting IPv4 address in textual representation is stored. The str_to_ipv4addr() function forms an IPv4 address in numerical representation by converting the textual representation of the IPv4 address. The ‘str’ parameter is a pointer to an IPv4 address in textual representation, and ‘ipv4addr’ is an unsigned long integer that receives the resulting address in numerical representation.


API Name: Synopsis:

vbind(), vbind_bool(), vbind_ip(), vbind_mac() int int int int int int int int

vbind(NslObject *obj, char *name, int *var) vbind(NslObject *obj, char *name, double *var) vbind(NslObject *obj, char *name, float *var) vbind(NslObject *obj, char *name, u_char *var) vbind(NslObject *obj, char *name, char **var) vbind_bool(NslObject *obj, char *name, u_char *var) vbind_ip(NslObject *obj, char *name, u_long *var) vbind_mac(NslObject *obj, char *name, u_char *var)

Return Value: The return value of the above functions is 1 for success and < 0 for failure. Description: The functions listed above bind a variable in a module to a script file. Whenever a variable is bound to a script file, the vbind() function registers the variable with a variable-binding table, which is maintained in the S.E. Once the simulation starts, the S.E reads and parses the script file in its initial state. In the meantime, upon matching a variable with one registered in the variable-binding table, the S.E initializes that variable with the value specified in the script file. The binding functions currently provided by the S.E are listed below: 1. vbind()

The variable that vbind() binds can be one of the following data types: integer, double, float, unsigned char, or string (char *).

2. vbind_bool() binds a Boolean variable to a script file. 3. vbind_ip() binds an IPv4 address to a script file. 4. vbind_mac() binds an IEEE 802 MAC address to a script file.
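The variable-binding idea can be illustrated with a minimal standalone table: names as they appear in the script file map to addresses of module variables, and parsed "name = value" lines initialize them. This sketch (hypothetical names, integers only) shows the mechanism, not the simulator's actual code:

```cpp
#include <cassert>
#include <cstdlib>
#include <map>
#include <string>

// Minimal sketch of a variable-binding table in the spirit of vbind().
struct SketchBindingTable {
    std::map<std::string, int *> int_vars;

    // Like vbind(obj, name, &var): remember where the module variable lives.
    int bind(const std::string &name, int *var) {
        int_vars[name] = var;
        return 1;
    }

    // Called while parsing the script file: if the parameter name was
    // registered, write the parsed value into the bound variable.
    bool apply(const std::string &name, const std::string &value) {
        std::map<std::string, int *>::iterator it = int_vars.find(name);
        if (it == int_vars.end())
            return false;
        *(it->second) = std::atoi(value.c_str());
        return true;
    }
};
```

A module would call bind() in its constructor, and the parser would call apply() for each "name = value" line it reads.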


API Name: createEvent(), freeEvent()

Synopsis:

Event_ *createEvent(void)
int freeEvent(Event_ *ep)

Return Value: The return value of the createEvent() function is a pointer to a new event. If a NULL value is returned, the function call failed. The return value of freeEvent() is 1 on success and < 0 on failure. Description: The createEvent() function creates a new event structure, and the freeEvent() function releases the space of an event. An event structure has a DataInfo_ field used to hold any type of data. When freeEvent() is used to release the memory space of an event, it first checks the DataInfo_ field in the event structure to see whether data is attached. If so, freeEvent() also releases the memory space used by the data pointed to by the DataInfo_ field.
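The ownership rule described above (freeing an event also frees its attached data) can be sketched with a self-contained pair of functions. The structure and names here are illustrative, not the simulator's actual definitions:

```cpp
#include <cassert>
#include <cstdlib>

// Sketch of an event carrying a DataInfo_-style pointer.
struct SketchEvent {
    unsigned long long timeStamp;
    void *DataInfo_;          // optional attached data, NULL if none
};

SketchEvent *sketch_createEvent() {
    // calloc zeroes the struct, so DataInfo_ starts out NULL.
    return (SketchEvent *)std::calloc(1, sizeof(SketchEvent));
}

int sketch_freeEvent(SketchEvent *ep) {
    if (!ep)
        return -1;
    if (ep->DataInfo_)        // release attached data first, as freeEvent() does
        std::free(ep->DataInfo_);
    std::free(ep);
    return 1;
}
```

Because freeing the event frees the attached data, a module must not keep a separate pointer to that data after releasing the event.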


API Name: setEventTimeStamp(), setEventResume(), scheduleInsertEvent(), setEventCallOutFunc(), setEventCallOutObj()

Synopsis:

int setEventTimeStamp(Event_ *ep, u_int64_t timeStamp, u_int64_t perio)
int setEventResume(Event_ *ep)
int scheduleInsertEvent(Event_ *ep)
int setEventCallOutFunc(Event_ *ep, int (*fun)(Event_ *), void *data)
int setEventCallOutObj(Event_ *ep, NslObject *obj, int (NslObject::*memf)(Event_ *), void *data)

Return Value: The return value of the above functions is 1 on success and < 0 on failure. Description: The setEventTimeStamp() function sets the timestamp of an event. The time unit of a timestamp is 1 clock tick in the simulator’s virtual time. The first parameter ‘ep’ is a pointer to an event. The second parameter ‘timeStamp’ is the expiration time, in clock ticks of the simulator’s virtual time. If the ‘perio’ parameter has a non-zero value, the event becomes a periodic event. The setEventResume() function resumes a periodic event. When an event with a non-zero ‘perio’ expires, setEventResume() can be used to resume that event immediately without resetting its timestamp; the next expiration time is set to the current time + perio, and the event is reinserted into the scheduler to wait for its next expiration. The scheduleInsertEvent() function inserts an event into the event scheduler. After an event has been set up, this function should be used to insert it into the event scheduler. The setEventCallOutFunc() and setEventCallOutObj() functions set a handler function in an event. A handler function can be either a normal function or a member function of an object: setEventCallOutFunc() is for a normal function and setEventCallOutObj() is for a member function. The following shows the prototypes of these two kinds of handler functions, in which the ‘ep’ parameter is a pointer to the event that causes the handler function to be called:

int <function name>(Event_ *ep) { ... }
int <class name>::<member function name>(Event_ *ep) { ... }
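The periodic-event arithmetic described above can be sketched in isolation: when an event with interval ‘perio’ expires, resuming it sets the next expiration to the current tick plus ‘perio’. The structure and names below are hypothetical:

```cpp
#include <cassert>

// Sketch of a periodic event: next expiration and repeat interval in ticks.
struct SketchPeriodicEvent {
    unsigned long long timeStamp;   // next expiration, in ticks
    unsigned long long perio;       // 0 means a one-shot event
};

// Like setEventResume(): reschedule relative to the current virtual time.
int sketch_resume(SketchPeriodicEvent &ev, unsigned long long now) {
    if (ev.perio == 0)
        return -1;                  // not a periodic event
    ev.timeStamp = now + ev.perio;  // next expiration = current time + perio
    return 1;
}
```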


API Name: setFuncEvent(), setObjEvent()

Synopsis:

int setFuncEvent(Event_ *ep, u_int64_t timeStamp, u_int64_t perio, int (*func)(Event_ *), void *data)
int setObjEvent(Event_ *ep, u_int64_t timeStamp, u_int64_t perio, NslObject *obj, int (NslObject::*memf)(Event_ *), void *data)

Return Value: The return value of the above functions is 1 for success and < 0 for failure. Description: The setFuncEvent() and setObjEvent() functions set an event and then insert it into the event scheduler. These functions are equivalent to the following operations:

setFuncEvent() == setEventTimeStamp() + setEventCallOutFunc() + scheduleInsertEvent()

or

setObjEvent() == setEventTimeStamp() + setEventCallOutObj() + scheduleInsertEvent()

Sometimes this kind of function is more convenient for setting and starting an event.
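The composition above can be illustrated with a toy timestamp-ordered scheduler: one helper stamps the event, attaches the callout, and inserts it, just as setFuncEvent() combines the three calls. All names here are hypothetical and the scheduler is a bare priority queue, not the simulator's:

```cpp
#include <cassert>
#include <queue>
#include <vector>

struct SkEvent {
    unsigned long long timeStamp;
    int (*func)(SkEvent *);
};

// Orders the priority queue so the smallest timestamp is on top.
struct SkLater {
    bool operator()(const SkEvent &a, const SkEvent &b) const {
        return a.timeStamp > b.timeStamp;
    }
};

typedef std::priority_queue<SkEvent, std::vector<SkEvent>, SkLater> SkScheduler;

// Like setFuncEvent(): stamp, attach the handler, and insert in one call.
int sketch_setFuncEvent(SkScheduler &sch, unsigned long long ts,
                        int (*func)(SkEvent *)) {
    SkEvent ev;
    ev.timeStamp = ts;      // setEventTimeStamp()
    ev.func = func;         // setEventCallOutFunc()
    sch.push(ev);           // scheduleInsertEvent()
    return 1;
}
```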


API Name: set_tuninfo()

Synopsis:

int set_tuninfo(u_int32_t nid, u_int32_t portid, u_int32_t tid, u_long *ip, u_long *netmask, u_char *mac)

Return Value: The return value is 1 for success and < 0 for failure. Description: The set_tuninfo() function configures a tunnel network interface when a simulation starts. When a node needs a network interface, this function should be used to assign a tunnel network interface to it. The following information should be configured for a tunnel network interface:

. nid      the ID of the node the tunnel belongs to.
. portid   the port ID of the node that uses the tunnel network interface.
. tid      the tunnel ID. Each tunnel is treated as a real network interface.
. ip       the IPv4 address assigned to the interface.
. netmask  the netmask of the assigned IPv4 address.
. mac      the IEEE 802 MAC address the tunnel is associated with.


API Name: RegToMBPoller()

Synopsis:

int RegToMBPoller(MBinder *mbinder)

Return Value: The return value is 1 for success and < 0 for failure. Description: The RegToMBPoller() function registers a polling request of a Module Binder (MB) with the Module-Binder Poller (MBP). The MB is a mechanism that binds two modules together. The MB contains a queue used to hold packets. If a packet cannot be pushed to the next module immediately, it is placed in this queue and a polling request is issued. Later, when the next module can process another packet, the S.E scheduler dequeues the packet from the MB's queue and pushes it to the next module.


API Name: nodeid_to_ipv4addr(), ipv4addr_to_nodeid()

Synopsis:

u_int32_t nodeid_to_ipv4addr(u_int32_t nid, u_int32_t port)
u_int32_t ipv4addr_to_nodeid(u_long ip)

Return Value: The return value of nodeid_to_ipv4addr() is a 32-bit IPv4 address if the function call succeeds; otherwise, a zero value is returned. Conversely, the return value of ipv4addr_to_nodeid() is the 32-bit ID of the node that owns the given IPv4 address. Description: The nodeid_to_ipv4addr() function uses both a node ID and a port ID to find the corresponding IPv4 address in a node. One node may have more than one tunnel network interface. Hence, when an IPv4 address is queried in such a node, the port ID should be specified; it ranges from 1 to n, where n is the total number of tunnel network interfaces attached to the node. Conversely, the ipv4addr_to_nodeid() function uses an IPv4 address to find the node that has a tunnel interface with this IPv4 address.
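The pair of lookups can be sketched with a small hand-filled table keyed by (node ID, port ID). In the simulator this table is built from the script file; the structure and names below are hypothetical:

```cpp
#include <cassert>
#include <map>
#include <utility>

// Sketch of the (node ID, port ID) <-> IPv4 mapping the two functions consult.
struct SketchAddrTable {
    std::map<std::pair<unsigned, unsigned>, unsigned long> by_node_port;
    std::map<unsigned long, unsigned> node_by_ip;

    void add(unsigned nid, unsigned port, unsigned long ip) {
        by_node_port[std::make_pair(nid, port)] = ip;
        node_by_ip[ip] = nid;
    }

    // Like nodeid_to_ipv4addr(): 0 when no interface matches.
    unsigned long nodeid_to_ipv4addr(unsigned nid, unsigned port) const {
        std::map<std::pair<unsigned, unsigned>, unsigned long>::const_iterator
            it = by_node_port.find(std::make_pair(nid, port));
        return it == by_node_port.end() ? 0 : it->second;
    }

    // Like ipv4addr_to_nodeid(): the ID of the owning node, 0 if unknown.
    unsigned ipv4addr_to_nodeid(unsigned long ip) const {
        std::map<unsigned long, unsigned>::const_iterator it = node_by_ip.find(ip);
        return it == node_by_ip.end() ? 0 : it->second;
    }
};
```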


API Name: ipv4addr_to_macaddr(), macaddr_to_ipv4addr()

Synopsis:

u_char *ipv4addr_to_macaddr(u_long ip)
u_long macaddr_to_ipv4addr(u_char *mac)

Return Value: The return value of ipv4addr_to_macaddr() is a pointer to an IEEE 802 MAC address in numerical representation. If a NULL value is returned, the function call failed. The return value of macaddr_to_ipv4addr() is an IPv4 address in 32-bit numerical representation. If a zero value is returned, the function call failed. Description: The ipv4addr_to_macaddr() function uses an IPv4 address as a key to get the corresponding IEEE 802 MAC address. Each tunnel interface is always associated with both an IPv4 address and an IEEE 802 MAC address. This function maps an IPv4 address to its IEEE 802 MAC address. The other function, macaddr_to_ipv4addr(), performs the reverse mapping: it maps a given IEEE 802 MAC address to an IPv4 address.


API Name: macaddr_to_nodeid()

Synopsis:

u_int32_t macaddr_to_nodeid(u_char *mac)

Return Value: The return value of macaddr_to_nodeid() is a node ID. If a zero value is returned, the function call failed. Description: The macaddr_to_nodeid() function uses the IEEE 802 MAC address specified in the ‘mac’ parameter as a key to find the corresponding node ID. A node may have more than one tunnel network interface attached to it, each associated with an IEEE 802 MAC address. The macaddr_to_nodeid() function maps a MAC address to its corresponding node ID.


API Name: is_ipv4_broadcast()

Synopsis:

u_char is_ipv4_broadcast(u_int32_t nid, u_long ip)

Return Value: If the return value is 1, the examined IPv4 address is a layer 3 broadcast address. If a zero value is returned, the examined IPv4 address is a normal layer 3 IPv4 address. Description: The is_ipv4_broadcast() function examines a given IPv4 address to see whether it is a layer 3 broadcast address. The first parameter is a node ID and the second is the IPv4 address to be examined.
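A standalone sketch of one way such a test can work, assuming the subnet netmask is already known: an address is a directed broadcast when all of its host bits are 1, and 255.255.255.255 is the limited broadcast address. The real is_ipv4_broadcast() instead looks the relevant netmask up by node ID; the function below is illustrative only:

```cpp
#include <cassert>

// Returns 1 if 'ip' is a layer 3 broadcast address for the given netmask.
unsigned char sketch_is_ipv4_broadcast(unsigned long ip, unsigned long netmask) {
    if (ip == 0xFFFFFFFFUL)
        return 1;                        // limited broadcast
    if ((ip | netmask) == 0xFFFFFFFFUL)
        return 1;                        // host part is all ones: directed broadcast
    return 0;
}
```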


API Name: getifnamebytunid(), getportbytunid()

Synopsis:

char *getifnamebytunid(u_int32_t tid)
u_int32_t getportbytunid(u_int32_t tid)

Return Value: The return value of getifnamebytunid() is a pointer to an interface name. If a NULL value is returned, the function call failed. The return value of getportbytunid() is a port number of a node. Description: The getifnamebytunid() function gets a tunnel network interface's name. The simulator always gives each tunnel interface used in a simulated node an interface name, such as fxp0; this function can be used to retrieve that name. The getportbytunid() function gets the port number of a tunnel interface in a simulated node. A simulated node may have more than one tunnel interface, and each used tunnel interface is associated with a locally unique port number. For example, if tunnel 2 with port number 1 and tunnel 5 with port number 2 are used in simulated node 1, then getportbytunid(2) returns port number 1 and getportbytunid(5) returns port number 2.


API Name: GetCurrentTime(), GetNodeCurrentTime(), GetSimulationTime()

Synopsis:

u_int64_t GetCurrentTime(void)
u_int64_t GetNodeCurrentTime(u_int32_t nid)
u_int64_t GetSimulationTime(void)

Return Value: The return value of the above functions is a time in the simulator's virtual time. Description: The GetCurrentTime() function gets the global simulator system's virtual time. In the simulator, the S.E maintains a global system virtual time, and all the components in the simulator use this global time. The GetNodeCurrentTime() function gets one node's virtual time. To reflect the fact that in a real network the clocks of different nodes are likely to differ, each node maintains a local virtual time; GetNodeCurrentTime() is used to get each node's virtual time. The GetSimulationTime() function gets the simulation time of a simulation.


API Name: InstanceLookup()

Synopsis:

NslObject *InstanceLookup(u_int32_t id, char *name)

Return Value: The return value of this function is a pointer to a module instance. If a NULL value is returned, the function call failed. Description: The InstanceLookup() function uses both a node ID and a module instance name to find a module instance. Note that the module instance name is not the module name. The module name is the name used to register a module with the simulator, while the module instance name is the name registered with the module manager. In the script file, a module declaration may look as follows:

Module MAC802_3 : Node1_Mac8023

Here MAC802_3 is the module name and Node1_Mac8023 is the module instance name. The syntax of a module declaration in a script file is shown below:

Module <module name> : <module instance name>


API Name: createPacket(), freePacket(), pkt_copy()

Synopsis:

ePacket_ *createPacket(void)
int freePacket(ePacket_ *pkt)
ePacket_ *pkt_copy(ePacket_ *src)

Return Value: The return value of createPacket() is a pointer to a newly created packet. If it returns a NULL value, the function call failed. The freePacket() function returns 1 for success and < 0 for failure. The return value of pkt_copy() is a pointer to a duplicate packet. If a NULL value is returned, the function call failed. Description: The createPacket() function creates a new packet. The return value is a pointer to an event; before returning this event to the caller, createPacket() attaches a packet to the DataInfo_ field of the event structure. If a packet is no longer needed, the memory space that it uses should be released; the freePacket() function is used for this purpose. The pkt_copy() function duplicates a packet. Its ‘src’ parameter is the packet to be duplicated.


API Name: reg_regvar(), get_regvar()

Synopsis:

int reg_regvar(NslObject *obj, char *name, void *var)
void *get_regvar(u_int32_t nid, u_int32_t portid, char *vname)

Return Value: The return value of reg_regvar() is 1 for success and < 0 for failure. Description: The reg_regvar() and get_regvar() functions support Inter-Module Communication (IMC) within the same node. The NCTUns network simulator uses a stream mechanism to chain all modules together in a node, and it provides an IMC mechanism for modules to communicate with other modules in the same node. The reg_regvar() function registers a variable in a module with the var-register table. Each variable that should be accessible to all modules in the same node must be registered with this table; reg_regvar() is provided for this purpose. A macro REG_VAR() is also provided as an alias of the reg_regvar() function. Its prototype is shown as follows:

REG_VAR(var_name, variable)

The ‘var_name’ of the REG_VAR() macro and the ‘name’ of reg_regvar() are the name of the variable to be registered, and the ‘variable’ of REG_VAR() and ‘var’ of reg_regvar() are a pointer to the variable to be registered. The get_regvar() function accesses a variable that has been registered with the var-register table. After a variable is registered, the other modules in the same node can use this function to read or write that variable. A macro GET_REG_VAR() is also provided as an alias of this function. Its prototype is shown as follows:

GET_REG_VAR(portid, var_name, type)

The ‘portid’ of the GET_REG_VAR() macro and the ‘portid’ of get_regvar() specify the ID of the port where the desired variable resides. The ‘vname’ of get_regvar() and ‘var_name’ of GET_REG_VAR() are the name of the desired variable. Note that the ‘type’ of GET_REG_VAR() is a data type used to cast the returned value. For example, if GET_REG_VAR(1, “test”, char *) is used, the returned value will be cast to the char pointer type.
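The var-register table can be sketched as a map from (port, name) to an untyped pointer; the caller casts the result back to the expected type, as GET_REG_VAR(portid, name, type) does. This is a simplified illustration with hypothetical names (the real reg_regvar() derives the node and port from the calling module's NslObject):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Sketch of a per-node var-register table.
struct SketchRegTable {
    std::map<std::pair<unsigned, std::string>, void *> table;

    // Like reg_regvar(): store the variable's address under (port, name).
    int reg_regvar(unsigned portid, const std::string &name, void *var) {
        table[std::make_pair(portid, name)] = var;
        return 1;
    }

    // Like get_regvar(): return the stored pointer, or 0 if not registered.
    void *get_regvar(unsigned portid, const std::string &name) {
        std::map<std::pair<unsigned, std::string>, void *>::iterator it =
            table.find(std::make_pair(portid, name));
        return it == table.end() ? 0 : it->second;
    }
};
```

Because the table stores void pointers, the reader of a registered variable must know its true type; a wrong cast is not detected.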


API Name: GetNodeLoc(), GetNodeAntenna()

Synopsis:

int GetNodeLoc(u_int32_t nid, double &x, double &y, double &z)
int GetNodeAntenna(u_int32_t nid, double &x, double &y, double &z)

Return Value: The return value of the above functions is 1 for success and < 0 for failure. Description: The GetNodeLoc() function gets the current position of a node. The returned position is stored in the parameters ‘x’, ‘y’, and ‘z’. Each node in a simulation has position information, which is updated periodically by the S.E in the simulator. When a simulation starts, the simulator reads a scenario file to create events that periodically update a node's position. The syntax of the scenario file is as follows:

$node_(<node_id>) set <arrival_time> <X> <Y> <Z> <speed> <pause_time>

The above syntax says that node node_id arrives at position (X, Y, Z) at time arrival_time, moving at a speed of speed. Before moving again, the node pauses for the time specified in pause_time. The GetNodeAntenna() function gets the antenna position of a node. Only a wireless node has an antenna; hence this function is used only for wireless nodes.


API Name: nctuns_export(), export_addline()

Synopsis:

int nctuns_export(NslObject *modu, char *name, u_char flags)
int export_addline(char *cm)

Return Value: The return value is 1 for success and < 0 for failure. Description: The nctuns_export() function exports a variable in a module to external components. This function is provided by the S.E so that a module can communicate with external components. If a module variable is exported, external components such as tcsh or the GUI can access that variable through the dispatcher component in the S.E. The parameters ‘modu’ and ‘name’ specify which module exports the variable. The ‘flags’ parameter indicates the attribute of the exported variable; its value is one of the following:

E_RONLY              the exported variable is read only
E_WONLY              the exported variable is write only
E_RONLY | E_WONLY    the exported variable has both read and write permission

The NCTUns network simulator also provides an alias for this function, the EXPORT() macro. Its syntax is shown as follows:

EXPORT(name, flags)

The parameters ‘name’ and ‘flags’ are the same as those of the nctuns_export() function. The export_addline() function permits a module to send a message to external components through the S.E dispatcher. The dispatcher in the S.E provides a buffer to store any type of data; modules can use this function to send their information to external components. The NCTUns network simulator also provides an alias for this function, shown below:

EXPORT_ADDLINE(cm)

The ‘cm’ parameter of the export_addline() function and the EXPORT_ADDLINE() macro is a pointer to a message.


API Name: tun_write(), tun_read()

Synopsis:

int tun_write(int tunfd, ePacket_ *pkt)
int tun_read(int tunfd, ePacket_ *pkt)

Return Value: If tun_write() succeeds, it returns the number of bytes that it writes. Otherwise, a negative value is returned. For the tun_read() function, the return value is as follows:

-1          illegal tunnel file descriptor or illegal packet format.
-2          the packet structure already has a PT_SDATA pbuf attached.
-3          read error.
Otherwise   the number of bytes read from the tunnel interface.

Description: The tun_write() function writes a packet into a tunnel interface to simulate packet reception. Before using this function, make sure that the tunnel network interface has been registered with the interface poller (IF-Poller). If a packet is successfully written to a tunnel interface, the packet is delivered to the O.S kernel and processed by the kernel TCP/IP protocol stack, just like a normal packet reception. The tun_read() function reads a packet from a tunnel interface to simulate packet transmission. Before using this function, be sure that the tunnel interface has been registered with the interface poller (IF-Poller). Whenever the O.S kernel sends a packet through a tunnel interface, the packet is queued in the tunnel interface queue; tun_read() can be used to read a packet from this queue. The ‘tunfd’ parameter of these two functions is a file descriptor referring to a tunnel. The O.S kernel always treats a tunnel as a file; hence a file descriptor is used to identify a tunnel. The ‘pkt’ parameter is a pointer to a packet whose data is written to or read from the tunnel interface.


API Name: reg_IFpolling()

Synopsis:

u_long reg_IFpolling(NslObject *obj, int (NslObject::*meth)(Event_ *), int *fd)

Return Value: The return value is 0 for failure; otherwise it is a tunnel ID. Description: The reg_IFpolling() function registers a tunnel interface with the Interface Polling Queue (IFPQ). If a tunnel interface is registered, the Interface Poller (IFP) polls the tunnel interface to see whether it has packets in its tunnel interface queue. If the queue has packets, the IFP calls a handler function specified by the parameters ‘obj’ and ‘meth’. Therefore, each tunnel interface used in a simulation should be registered with the IFPQ so that packets can be read from and written to it. The reg_IFpolling() function also opens the device file of a tunnel interface. In a UNIX system, a tunnel interface is treated as a file; hence a file descriptor is used to identify it. After reg_IFpolling() opens the tunnel device file, the file descriptor is stored in the parameter ‘fd’ and thus passed back to the caller.


API Name: getConnectNode()

Synopsis:

u_int32_t getConnectNode(u_int32_t nid, u_int32_t portid)

Return Value: The return value is a node ID. If a zero value is returned, the function call failed. Description: The getConnectNode() function gets the ID of a node's neighboring node. The parameters ‘nid’ and ‘portid’ uniquely specify the neighboring node, which is directly connected to that port of the node. For example, suppose node 2 has two ports, with port 1 connected to node 1 and port 2 connected to node 3. Then getConnectNode(2, 1) returns node 1 and getConnectNode(2, 2) returns node 3.
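The (node, port) to neighbor lookup can be sketched with a small adjacency table, filled here with the example from the text (node 2's port 1 connects to node 1, its port 2 to node 3). The structure and names are hypothetical:

```cpp
#include <cassert>
#include <map>
#include <utility>

// Sketch of the topology lookup behind getConnectNode().
struct SketchTopology {
    std::map<std::pair<unsigned, unsigned>, unsigned> link;

    void connect(unsigned nid, unsigned portid, unsigned neighbor) {
        link[std::make_pair(nid, portid)] = neighbor;
    }

    // Returns 0 when the (node, port) pair has no neighbor, matching the
    // real function's zero-on-failure convention.
    unsigned getConnectNode(unsigned nid, unsigned portid) const {
        std::map<std::pair<unsigned, unsigned>, unsigned>::const_iterator it =
            link.find(std::make_pair(nid, portid));
        return it == link.end() ? 0 : it->second;
    }
};
```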


API Name: getTypeName(), getNodeName(), getNodeLayer(), getModuleName()

Synopsis:

char *getTypeName(NslObject *node)
char *getNodeName(u_int32_t nid)
u_char getNodeLayer(u_int32_t nid)
char *getModuleName(NslObject *obj)

Description: The getTypeName() function gets the device type of a node, and getNodeName() gets a node's name. In a script file, the Create command shown below is used to create a node:

Create Node as <device type> with name = <node name>

The getTypeName() function returns the <device type> and the getNodeName() function returns the <node name>. This information is given by the Create command in a script file. The getNodeLayer() function returns a number indicating the OSI layer a node belongs to. The getModuleName() function gets the name of a module instance. The ‘obj’ parameter of this function is a module instance. Each module instance belongs to a specific module, which may be developed by general users. Before a module is added to the simulator, the module must be registered with the simulator and given a name; this name is called the module name.


API Name: getScriptName(), getNumOfNodes()

Synopsis:

char *getScriptName(void)
u_int32_t getNumOfNodes(void)

Description: The getScriptName() function returns the script file name. The script file describes a network topology and its settings. Once a simulation starts, the simulator reads the script file to simulate the network described in it. The getNumOfNodes() function returns the total number of nodes the simulator simulates in the simulated network environment. Both the network environment and the nodes are described in the script file.
