A Novel End-to-End Architecture for Exploiting Multihoming in Mobile Devices for Mobility Management and Bandwidth Aggregation

Author
Mr. Muhammad Yousaf
Reg. No. 07-UET/PHD-CASE-CP-41

Supervisor Dr. Amir Qayyum

ELECTRICAL AND COMPUTER ENGINEERING DEPARTMENT CENTER FOR ADVANCED STUDIES IN ENGINEERING UNIVERSITY OF ENGINEERING AND TECHNOLOGY TAXILA, PAKISTAN

Summer 2013

A Novel End-to-End Architecture for Exploiting Multihoming in Mobile Devices for Mobility Management and Bandwidth Aggregation

A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Engineering by:

Muhammad Yousaf
07-UET/PhD-CASE-CP-41

Approved by: External Examiners

____________________
Dr. Affan A. Syed
Associate Professor, National University of Computer and Emerging Sciences (FAST-NUCES), Islamabad

____________________
Dr. Auon Muhammad Akhtar
Assistant Professor, Riphah International University, Islamabad

Internal Examiner / Thesis Supervisor

____________________
Dr. Amir Qayyum
CASE, Islamabad



DECLARATION

The substance of this Ph.D. thesis is the original work of the author, and due references and acknowledgements have been made, where necessary, to the work of others. No part of this thesis has already been accepted for any degree, nor is it currently being submitted in candidature for any degree.

_________________________________
Mr. Muhammad Yousaf
Reg. No. 07-UET/PHD-CASE-CP-41
Thesis Scholar

Countersigned:

_______________________________
Dr. Amir Qayyum
Thesis Supervisor


Table of Contents

Table of Contents
List of Figures
List of Tables
List of Publications in ISI Indexed Impact Factor Journals
List of Publications in International Conferences & Book Chapters
Acknowledgements
Abstract
Abbreviations Used

Chapter 1. Introduction
  1.1 Background
    1.1.1 Benefits of Multihoming
    1.1.2 Challenges of Handling Multihoming
    1.1.3 Scope of the Thesis
  1.2 Rationale for Designing a Solution for TCP
  1.3 Rationale for an End-to-End Design
  1.4 Problem Statement
  1.5 Own Contribution
  1.6 Chapter Summary

Chapter 2. Literature Review
  2.1 Mobility Management with Multihoming
    2.1.1 Network Layer Solutions for Mobility Management
    2.1.2 Layer 3.5 Solutions for Mobility Management
    2.1.3 Transport Layer Solutions for Mobility Management
    2.1.4 Session Layer Solutions for Mobility Management
  2.2 Bandwidth Aggregation and Multihoming
    2.2.1 Physical Layer Solutions for Bandwidth Aggregation
    2.2.2 Link Layer Solutions for Bandwidth Aggregation
    2.2.3 Network Layer Solutions for Bandwidth Aggregation
    2.2.4 Transport Layer Solutions for Bandwidth Aggregation
    2.2.5 Session Layer Solutions for Bandwidth Aggregation
    2.2.6 Application Layer Solutions for Bandwidth Aggregation
  2.3 Location Management Service
    2.3.1 Problems in Location Management in NAT Environment
  2.3 Chapter Summary

Chapter 3. Proposed System Architecture
  3.1 Design Principles of Proposed Architecture
  3.2 Proposed Architecture Design
    3.2.1 Session Layer Components
    3.2.2 Cross-Layer Components
  3.3 Handover Management with Proposed Architecture
  3.4 Bandwidth Aggregation with Proposed Architecture
  3.5 Proposed Solution for Location Management in NAT Environment
  3.6 Chapter Summary

Chapter 4. Experimentation of the Proposed Architecture
  4.1 Implementation Design
  4.2 Experimental Evaluation of Proposed Architecture
    4.2.1 Throughput and Latency during Handovers
    4.2.2 Throughput Gain during Bandwidth Aggregation
    4.2.3 Scalability of Proposed Architecture
    4.2.4 Overhead of Proposed Architecture
  4.3 Results of Proposed Solution for Generating LGD Trigger
    4.3.1 Prediction Accuracy and Computation Cost of Predictions
    4.3.2 Memory Requirement for Prediction Process
  4.4 Discussion on Proposed NAT Automatic Port-Forwarding Technique
    4.4.1 Discussion on Security Issues
    4.4.2 Discussion on Efficiency Comparison
  4.5 Chapter Summary

Chapter 5. Performance Comparison with Existing Protocols
  5.1 Analytical Modelling for Performance Comparison
    5.1.1 Modelling End-to-End Handover Delay
    5.1.2 Modelling Throughput Degradation Time
  5.2 Comparative Analysis of Handover Delay
  5.3 Comparative Analysis of Service Disruption Time
  5.4 Comparison of Protocol Overhead
  5.5 Comparison of Security Issues
  5.6 Comparison of Implementation Issues
  5.7 Comparison of Other Qualitative Parameters
    5.7.1 Application Transparency
    5.7.2 Multihoming Support & Seamless Handover
    5.7.3 Simultaneous Handover Support
    5.7.4 Wilful Handover Support
    5.7.5 Support of LGD Prediction Intelligence
    5.7.6 Cross Layer Optimization Support
    5.7.7 Requirement of Deploying Additional Network Entity
  5.8 Chapter Summary

Chapter 6. Conclusion and Future Work

References

List of Figures

Fig 1: Types of Multihoming in IP Networks
Fig 2: End-Host Multihoming
Fig 3: Different Bandwidth Aggregation Approaches
Fig 4: Network Layer Approach with Network Proxy for Bandwidth Aggregation
Fig 5: A Session Layer Striping Scheme
Fig 6: Application Layer Striping
Fig 7: Different Components of the Proposed Architecture Shown in Dotted Lines
Fig 8: Association with Multiple TCP Connections
Fig 9: Prediction Module for Link Going Down Generation in IEEE 802.11u Stack
Fig 10: Architecture for Intelligent Generation of Link Going Down Trigger
Fig 11: TDNN Module for Estimating Future Link Conditions
Fig 12: Handover and Bandwidth Aggregation Decision Algorithm
Fig 13: Flow Chart of Algorithm for Handover & Bandwidth Aggregation Decisions
Fig 14: Message Exchange during Simple Handover
Fig 15: Different Handover Scenarios
Fig 16: Message Exchange during Simultaneous Handover of Scenario-2
Fig 17: Message Exchange during Simultaneous Handover of Scenario-3
Fig 18: Message Exchange during Simultaneous Handover of Scenario-4
Fig 19: Message Exchange for Adding Interface in Bandwidth Aggregation
Fig 20: State Transition Diagram for Handover and Bandwidth Aggregation States
Fig 21: Format of Proposed DHCP Option for NAT Auto Configuration
Fig 22: Interaction between DHCP Client, DHCP Server and NAT Box
Fig 23: Interaction Scenario for Port Mapping and DNS Location Updates
Fig 24: Components of the Implementation Design of the Proposed Architecture
Fig 25: Test Topology Used for Performance Analysis of Proposed Architecture
Fig 26: Throughput during Handover from One WLAN Network to Other WLAN Network
Fig 27: Throughput during Handover from WiMAX Network to Wi-Fi Network
Fig 28: Throughput during Handover from Wi-Fi Network to WiMAX Network
Fig 29: Throughput during Bandwidth Aggregation over Wi-Fi and WiMAX Networks
Fig 30: Throughput during Bandwidth Aggregation with Increasing Node Density
Fig 31: Throughput during Bandwidth Aggregation over Two Wi-Fi Interfaces
Fig 32: Effect of Overhead of Proposed Architecture on Throughput
Fig 33: Movement Pattern of Mobile Node for Capturing Varying Link Conditions
Fig 34: Available Reaction Time Gain with 10 Predicted Samples
Fig 35: Movement Scenarios of Mobile Node
Fig 36: Message Exchange in Non-Overlapping Regions
Fig 37: End-to-End Packet Loss Probability vs. Handover Delay in Non-Overlapping Region
Fig 38: End-to-End Packet Loss Probability vs. Handover Delay while Entering into Overlapping Region
Fig 39: End-to-End Packet Loss Probability vs. Handover Delay while Leaving the Overlapping Region
Fig 40: Bandwidth vs. Handover Delay in Non-Overlapping Region
Fig 41: Bandwidth vs. Handover Delay while Entering in Overlapping Region
Fig 42: Bandwidth vs. Handover Delay while Leaving the Overlapping Region
Fig 43: End-to-End Packet Loss Probability vs. Throughput Degradation Time in Non-Overlapping Region
Fig 44: End-to-End Packet Loss Probability vs. Throughput Degradation Time while Entering in Overlapping Region
Fig 45: End-to-End Packet Loss Probability vs. Throughput Degradation Time while Leaving Overlapping Region
Fig 46: Number of Packets Sent after Vertical Handover Initiation vs. Protocol Overhead in Bytes
Fig 47: Number of Packets Sent after Vertical Handover Completion vs. Protocol Overhead in Bytes


List of Tables

Table 1: Prediction Accuracy Table
Table 2: Computation Cost and Available Reaction Time due to Prediction
Table 3: Values of Different Parameters Used for Performance Comparison
Table 4: Comparison of Protocol Overhead of Proposed Architecture with Existing Protocols
Table 5: Comparison of Security Services Provided by Proposed Architecture and Existing Protocols
Table 6: Comparison of Implementation Issues of Proposed Architecture with Existing Protocols
Table 7: Comparison of Qualitative Parameters of Proposed Architecture with Existing Protocols


List of Publications in ISI Indexed Impact Factor Journals

1) M. Yousaf, A. Qayyum, and S. A. Malik, "An Architecture for Exploiting Multihoming in Mobile Devices for Vertical Handovers & Bandwidth Aggregation", Springer Wireless Personal Communications (WPC) Journal, ISSN: 0929-6212, IF 0.458 (2011), Volume 66, Issue 1, pp. 57-79, DOI 10.1007/s11277-011-0326-3, September 2012. (Note: The main work regarding the proposed architecture, its implementation and performance analysis is presented in this paper.)

2) Peer Azmat Shah, Muhammad Yousaf, Amir Qayyum, & Halabi B. Hasbullah, “Performance Comparison of End-to-End Mobility Management Protocols for TCP”, Elsevier Journal of Network and Computer Applications (JNCA), ISSN: 1084-8045, IF 1.065 (2011), Volume 35, Issue 6, pp. 1657-1673, DOI 10.1016/j.jnca.2012.05.002, November 2012. (Note: Additional work regarding analytical performance comparison of proposed architecture with existing techniques was conducted in collaboration with the first author and is presented in this paper.)


List of Publications in International Conferences & Book Chapters

1) Peer Azmat Shah, Muhammad Yousaf, Amir Qayyum, and Halabi B. Hasbullah, "Effectiveness of Multihoming and Parallel Transmission during and after the Vertical Handover", in Proceedings of the IEEE International Conference on Computer and Information Sciences (ICCIS 2012), World Congress on Engineering, Science and Technology (ESTCON 2012), June 12-14, 2012, Kuala Lumpur, Malaysia, pages 625-629, DOI: 10.1109/ICCISci.2012.6297105, ISBN: 978-1-4673-1937-9

2) Sadaf Yasmin, Muhammad Yousaf, and Amir Qayyum, "Security Issues Related with DNS Dynamic Updates for Mobile Nodes: A Survey", in Proceedings of the ACM International Conference on Frontiers of Information Technology (FIT-10), December 21-23, 2010, Islamabad, Pakistan, DOI: 10.1145/1943628.1943645, ISBN: 978-1-4503-0342-2

3) E. Elahi, M. Yousaf, A. Sheikh, M. M. Rehan, O. M. Chughtai, and A. Qayyum, "On the Implementation of End-to-End Mobility Management Framework (EMF)", in Proceedings of the 6th IEEE International Conference on Wireless and Mobile Computing (WiMob 2010), October 11-13, 2010, Niagara Falls, Canada, pages 458-465, DOI: 10.1109/WIMOB.2010.5645028, ISBN: 978-1-4244-7743-2

4) Peer Azmat Shah, Muhammad Yousaf, Amir Qayyum, and Shahzad A. Malik, "An Analysis of Service Disruption Time for TCP Applications using End-to-End Mobility Management Protocols", in Proceedings of the 7th ACM International Conference on Advances in Mobile Computing & Multimedia (MoMM 2009, ERPAS), December 14-16, 2009, Kuala Lumpur, Malaysia, DOI: 10.1145/1821748.1821817, ISBN: 978-1-60558-659-5

5) M. Yousaf, Sohail Bhatti, Maaz Rehan, A. Qayyum, and S. A. Malik, "An Intelligent Prediction Model for Generating LGD Trigger of IEEE 802.21 MIH", in Proceedings of the 2009 International Conference on Intelligent Computing (ICIC 2009), September 16-19, 2009, Ulsan, Korea, Springer Verlag Lecture Notes in Computer Science (LNCS), Volume 5754/2009, pages 413-422, DOI: 10.1007/978-3-642-04070-2_47, Print ISBN: 978-3-642-04069-6

6) Maaz Rehan, Muhammad Yousaf, Amir Qayyum, and Shahzad Malik, "A Crosslayer User Centric Vertical Handover Decision Approach based on MIH Local Triggers", in Proceedings of the Second Joint IFIP Wireless and Mobile Networking Conference (WMNC 2009), September 9-11, 2009, Gdańsk, Poland, Springer Wireless and Mobile Networking series, Volume 308/2009, pages 359-369, DOI: 10.1007/978-3-642-03841-9_3, Print ISBN: 978-3-642-03840-2

7) P. A. Shah and M. Yousaf, "End-to-end Mobility Management Solutions for TCP: An Analysis", in Proceedings of the IEEE 11th International Conference on Computer and Information Technology (ICCIT 2008), December 24-27, 2008, pages 696-701, Khulna, Bangladesh, DOI: 10.1109/ICCITECHN.2008.4803068, ISBN: 978-1-4244-2135-0

8) M. Yousaf and A. Qayyum, "On End-to-End Mobility Management in 4G Heterogeneous Wireless Networks", in Proceedings of the IEEE International Networking and Communications Conference (INCC 2008), May 1-3, 2008, pages 118-123, Lahore, Pakistan, DOI: 10.1109/INCC.2008.4562703, ISBN: 978-1-4244-2151-0


Acknowledgements

I would like to thank the Higher Education Commission (http://www.hec.gov.pk/), Government of Pakistan, for funding this study through the Indigenous PhD Scholarship. I would also like to thank the National ICT R&D Fund (http://www.ictrdf.org.pk/), Ministry of Information Technology, Government of Pakistan, for funding the associated research project. Moreover, I would like to thank the EMF project team members who contributed to designing and implementing the project, especially Dr. Amir Qayyum, Dr. Shahzad Ali Malik, Mr. Ehsan Elahi, Mr. Peer Azmat Shah, Mr. Maaz Rehan, Mr. Sohail Masood Bhatti, Ms. Sadaf Yasmeen, Ms. Ambreen Sheikh, Mr. Muhammad Omer Chughtai, and others. I would also like to thank CoReNeT (Center of Research in Networks and Telecom, http://corenet.org.pk/) members for their encouragement and positive feedback.


ABSTRACT

Mobile devices with support for multiple network interfaces have become common. Applications on mobile devices can exploit these multiple network interfaces for many useful services, such as mobility management and bandwidth aggregation. However, currently available implementations of the networking protocol stack are not capable of utilizing these multiple network interfaces simultaneously. Although many schemes have been proposed to provide mobility management and bandwidth aggregation services to multihomed mobile devices, these schemes have limitations that hinder their large-scale deployment: some depend on the deployment of additional entities in the network infrastructure, while others require changes to the protocol stack implemented in current operating system kernels. The end-to-end architecture presented in this thesis not only overcomes these limitations but also fills some existing gaps, making the proposed architecture feasible to implement in real scenarios. The proposed architecture uses simultaneous transmission over multiple network interfaces to provide the services of vertical handover, simultaneous movement of communicating nodes, wilful handover, location updates and bandwidth aggregation. To provide these services, the proposed architecture requires neither the deployment of additional entities in the network infrastructure nor changes to the protocol stack implemented in the operating system kernel. Moreover, to enable timely handover decisions, this thesis also presents a prediction technique that intelligently generates the IEEE 802.21 MIH Link Going Down trigger. To evaluate its performance in various handover and bandwidth aggregation scenarios, the architecture presented in this thesis was implemented and evaluated on Linux and Windows platforms. Performance analysis shows that the proposed architecture can perform seamless handover in regions where the coverage areas of two access networks overlap. Similarly, significant throughput gain is observed during bandwidth aggregation over multiple network interfaces. Towards the end of the thesis, the performance of the proposed architecture in terms of handover delay, service disruption time, protocol overhead, etc., is compared with existing end-to-end mobility management protocols. With its support for simultaneous transmission over multiple network interfaces and intelligent prediction of the IEEE 802.21 MIH Link Going Down trigger, the proposed architecture performs significantly better than the existing protocols.


Abbreviations Used

3G          3rd Generation Telecommunication System
AP          Access Point
API         Application Programming Interface
ATM         Asynchronous Transfer Mode
BA          Bandwidth Aggregation
BAHO        Bandwidth Aggregation and Handover aware library
BARWAN      Bay Area Research Wireless Access Network
BMP         Buffer Management Policy
BS          Base Station
BSD         Berkeley Software Distribution
CDMA2000    Code Division Multiple Access 2000
CID         Connection Identifier
CIP         Cellular IP
CM          Connection Manager
CN          Corresponding Node
CoA         Care of Address
CU          Connection Update
DHCP        Dynamic Host Configuration Protocol
DNS         Domain Name System
DDNS        Dynamic Domain Name System
EDGE        Enhanced Data rate for GSM Evolution
EMF         End-to-end Mobility management Framework
FDM         Frequency Division Multiplexing
FFNN        Feed Forward Neural Network
FIFO        First In First Out
FTP         File Transfer Protocol
GPRS        General Packet Radio Service
GRE         Generic Routing Encapsulation
GSM         Global System for Mobile communication
HAWAII      Handoff Aware Wireless Access Internet Infrastructure
HI          Host Identifier
HIP         Host Identity Protocol
HM          Handover Manager
HMIP        Hierarchical Mobile IP
HNP         Home Network Prefix
HoA         Home Address
ICMP        Internet Control Message Protocol
IDMP        Intra-Domain Mobility management Protocol
IEEE        Institute of Electrical and Electronics Engineers
IETF        Internet Engineering Task Force
IKEv2       Internet Key Exchange version 2
IP          Internet Protocol
ISP         Internet Service Provider
I-TCP       Indirect TCP
ITU         International Telecommunications Union
LCT         Local Connection Translation
LGD         Link Going Down
LM          Location Management
LMA         Local Mobility Anchor
MAC         Media Access Control
MAG         Mobile Access Gateway
MIH         Media Independent Handover
MIHF        Media Independent Handover Function
MIP-RR      Mobile IP Regional Registration
MLME        MAC Layer Management Entity
MM          Mobility Management
MMSP        Mobile Multimedia Streaming Protocol
MN          Mobile Node
MNP         Mobile Network Prefix
MOBIKE      IKE Mobility and Multihoming Protocol
MPTCP       Multipath TCP
MR          Mobile Router
MSGCF       Media State Generic Convergence Function
M-TCP       Migratory TCP
NAT         Network Address Translation
NEMO        Network Mobility
N-ISDN      Narrowband Integrated Services Digital Network
PHY         Physical Layer
PMIPv6      Proxy Mobile IPv6
pTCP        Parallel TCP
R2CP        Radial Reception Control Protocol
RFC         Request For Comments
RMTP        Reliable Multiplexing Transport Protocol
RN          Remote Node
SA          Security Association
SCT         System Call Translator
SCTP        Stream Control Transmission Protocol
SDM         Space Division Multiplexing
SeND        Secure Neighbour Discovery
SIGMA       Seamless IP diversity based Generalized Mobility Architecture
SLM         Session Layer Mobility Management
SL-TCP      Socketless TCP
SONET       Synchronous Optical Network
TCP         Transmission Control Protocol
TCP-R       TCP Redirection
TCP-v       TCP Virtual
TDM         Time Division Multiplexing
UDP         User Datagram Protocol
ULS         User Location Server
USB         Universal Serial Bus
VA-TCP      Vertical handoff Aware TCP
VC          Virtual Connectivity
VHO         Vertical Handover
WCDMA       Wideband Code Division Multiple Access
WDM         Wavelength Division Multiplexing
WG          Working Group
WiMAX       Worldwide Interoperability for Microwave Access
WLAN        Wireless Local Area Network

Chapter 1

1. Introduction

1.1 Background

In computer networks, the term multihoming refers to the capability of an entity to possess multiple network interfaces. This entity can be a network, an intermediate node or an end node. If a network site has connectivity from more than one network service provider, this is called site multihoming. Nowadays, organizations usually obtain connectivity from multiple network service providers in order to get better network services in terms of throughput, failsafe reliability, load balancing and so on. Intermediate devices such as routers are meant to interconnect multiple different networks and have therefore always been multihomed. With modern technological improvements, multihoming has also become popular in end nodes. If such an end node is a mobile device, it is termed a multihomed mobile device. Most current laptops are equipped with Ethernet, WiFi and Bluetooth interfaces, and can also have WiMAX and 3G network access through a USB interface. Similarly, owing to the availability of Bluetooth, WiFi, GPRS/EDGE and 3G network interfaces, today's smartphones have also become multihomed mobile devices. Figure 1 depicts different types of multihoming in IP networks.

1.1.1 Benefits of Multihoming

The property of multihoming can be used to obtain a number of benefits [1]. Some of the potential benefits of multihoming are:

- Ubiquitous access
- Reliability and fault tolerance
- Mobility management
- Bandwidth aggregation
- Load balancing of Internet traffic over multiple interfaces
- Traffic engineering
- Preference settings of network interfaces

Fig 1: Types of Multihoming in IP Networks (multihoming is divided into site multihoming and host multihoming; host multihoming is further divided into intermediate-host multihoming and end-host multihoming)

1.1.2 Challenges of Handling Multihoming

Along with these potential benefits of multihoming, there are also many challenges that need to be handled. Some of these challenges are:

- Determining which traffic should be forwarded to which upstream network
- The appropriate source address is a function of the upstream network for which the packet is bound; multihomed devices cannot perform arbitrary address selection [2]
- If multiple routers exist on a single link, the host must appropriately select the next hop for each connected network [2]
- Ingress filtering for multihomed devices is not straightforward [3]
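The last two challenges can be made concrete: the host must first pick an egress interface for a destination, and the source address must then belong to that interface's upstream network, or ingress filtering may drop the packet. The following is only an illustrative sketch (interface names, prefixes and the routing table are hypothetical; real hosts follow the IETF default address selection rules rather than this simplified logic):

```python
import ipaddress

# Illustrative routing state: each upstream network is reachable through one
# interface, and each interface owns one address (all names are hypothetical).
routes = {
    "192.0.2.0/24": "wlan0",
    "198.51.100.0/24": "wwan0",
    "0.0.0.0/0": "wlan0",        # default route
}
if_addr = {"wlan0": "192.0.2.10", "wwan0": "198.51.100.7"}

def egress_and_source(destination):
    """Longest-prefix match picks the egress interface; the source address
    must then be the one belonging to that interface, or upstream ingress
    filtering [3] may discard the packet."""
    dst = ipaddress.ip_address(destination)
    best = max(
        (ipaddress.ip_network(p) for p in routes if dst in ipaddress.ip_network(p)),
        key=lambda net: net.prefixlen,
    )
    ifname = routes[str(best)]
    return ifname, if_addr[ifname]

print(egress_and_source("198.51.100.99"))  # ('wwan0', '198.51.100.7')
print(egress_and_source("203.0.113.5"))    # default route -> ('wlan0', '192.0.2.10')
```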


1.1.3 Scope of the Thesis

Although there are many possible dimensions of research in the above stated areas, this thesis limits its scope to the following areas only:

- Only end-host multihoming issues are discussed. The scenario for end-host multihoming is depicted in Figure 2
- Not all services that can be provided using the multihoming property are covered; only the services of mobility management and bandwidth aggregation fall within the scope of this thesis
- The focus of this thesis is to design and evaluate efficient end-to-end mechanisms that enable TCP applications to benefit from the services provided by multihomed devices
- Discussion of other transport layer protocols is out of the scope of this thesis
- Discussion of network-centric solutions to provide these services is also out of the scope of this thesis

Fig 2: End-Host Multihoming

Many efforts are underway in different standardization organizations to resolve these challenging issues and to exploit multihoming in order to provide different services. The major organizations involved in these efforts are the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF). IEEE standardized the IEEE 802.21 Media Independent Handover (MIH) standard [4]. The objective of the MIH standard is to gather lower layer information from heterogeneous network interfaces and to pass this information to upper layers in a unified way, thus hiding the link technology details from the


upper layers. MIH provides the information to its upper layer users in the form of different events. These events include the link up trigger, the link down trigger, and the link going down trigger. At upper layers, these triggers can be used for vertical handover decision making.

In IETF, the Multiple Interfaces (MIF) Working Group is chartered to resolve the configuration issues of multihomed end hosts. These configuration issues include the challenges of DHCP server configurations, DNS server configurations, etc. To provide mobility management services, many working groups are active in IETF. These working groups are standardizing solutions involving Mobile IP and its variants, micro-mobility management, Shim6, Host Identity Protocol (HIP), Multipath TCP (MPTCP), Stream Control Transmission Protocol (SCTP) and so on. Very little effort has been carried out in IETF to standardize bandwidth aggregation protocols. Only a few protocols, like SCTP and HIP, discuss the issues related to the bandwidth aggregation service.

When a mobile node communicating with some corresponding node leaves its access network and enters a new access network, the TCP connection established with the corresponding node is disturbed due to the change in the IP address of the mobile node. A number of problems arise with this change in IP address. First, if the corresponding node is sending data to the mobile node, it will continue to send data to the old IP address of the mobile node, whereas the mobile node now has a different IP address. Thus the data sent by the corresponding node will not reach the mobile node. Second, if the mobile node is sending data to the corresponding node after changing its IP address, it will send the data with a different source IP address. However, due to the disturbance of the TCP connection, this data will be discarded on reaching the corresponding node.

Another problem is how the nodes on the Internet will learn about the new location of the mobile node. This is important if the mobile node is hosting some service and the nodes on the Internet should be able to send connection requests to the mobile node at its new location. Efforts to keep exchanging data on an ongoing TCP connection, even when the access network changes, are called handover management, and efforts to make the nodes on the Internet aware of the new location of the mobile node are called location management.
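The first two problems follow directly from how TCP demultiplexes segments: a connection is identified by the 4-tuple of local and remote addresses and ports, so a segment carrying a changed IP address matches no established state. A toy illustration (addresses and ports are hypothetical, and this is not a real protocol stack):

```python
# Minimal illustration: a TCP endpoint demultiplexes segments by the
# 4-tuple (local IP, local port, remote IP, remote port).  When the
# mobile node's IP address changes, segments carrying the new address
# no longer match any established connection and are discarded.
connections = {}

def establish(local_ip, local_port, remote_ip, remote_port):
    connections[(local_ip, local_port, remote_ip, remote_port)] = "ESTABLISHED"

def demux(segment_tuple):
    return connections.get(segment_tuple)  # None -> segment dropped/reset

# Corresponding node's view of a connection to the mobile node.
establish("203.0.113.5", 80, "192.0.2.10", 40000)

# Before handover: a segment from the mobile node matches existing state.
print(demux(("203.0.113.5", 80, "192.0.2.10", 40000)))  # ESTABLISHED

# After handover the mobile node sends from its new address 198.51.100.7:
print(demux(("203.0.113.5", 80, "198.51.100.7", 40000)))  # None
```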


Thus, the term mobility management refers to two services, i.e. handover management and location management.

When a mobile node moves from one cell of an access network into another cell of the same access network, the mechanism that enables the mobile node to continue its session in the new cell is called horizontal handover. In this case, the network address of the mobile node does not change and thus the TCP connection is not disturbed. Hence, link layer mechanisms are sufficient to support horizontal handover [5, 6]. However, when a mobile node moves from one network into a different network, the TCP connection is disturbed because of the change of IP address. There are two scenarios in this case. One is that the mobile node moves across two networks of the same link layer technology; in this scenario, only one network interface is in action. The other scenario is when the mobile node moves across two access networks of different link layer technologies. In this case, the support of both types of network interfaces is required at the mobile node. This is referred to as vertical handover. In this scenario, the existence of multiple network interfaces enables a mobile device to perform the vertical handover.

If a node has multiple network interfaces, each interface has its own bandwidth capacity available to the applications on the node. Usually an application uses only one network interface at a time to send/receive its data. However, as multiple network interfaces are available to the node, the bandwidth available on each network interface can be aggregated so that an application can enjoy the aggregated bandwidth of both interfaces simultaneously [7]. For example, if a user has two interfaces, e.g. WLAN and WiMAX, and the bandwidth available through the WLAN interface is 2 Mbps while the bandwidth available through the WiMAX interface is 1 Mbps, then in the traditional scenario an application will be enjoying either 2 Mbps or 1 Mbps at a time. However, if we aggregate the bandwidth available on both network interfaces, the application can enjoy an aggregated bandwidth of 3 Mbps. In the literature, this multiplexing of an application layer flow over multiple network interfaces has also been termed Striping [8]. Data striping or bandwidth aggregation can be performed at different layers of the communication protocol stack, e.g. at the application layer, at the transport layer, at the network


layer and at the link layer [9]. However, most of the efforts in IETF have been focused at the transport layer.

1.2 Rationale for Designing a Solution for TCP

As mentioned earlier, the scope of this thesis is limited to exploiting the multihoming property to provide mobility management and bandwidth aggregation services to TCP applications only. The work presented in this thesis does not handle UDP, SCTP or any other protocol. Although there are many transport layer protocols, many studies show that more than 95% of the data traffic over the Internet still uses TCP as the reliable transport protocol [10-12]. Originally, TCP was not designed to i) use the multihoming property, ii) provide mobility management, or iii) provide bandwidth aggregation. Many solutions have been proposed to overcome these limitations of TCP. However, these solutions have rarely been deployed. The reason is that these solutions either require the deployment of additional entities in the network infrastructure (in the form of proxies, etc.) or require changes in the networking protocol stack implemented in the operating system kernel. Due to these requirements, neither network operators nor operating system vendors have deployed these solutions on a large scale.

1.3 Rationale for an End-to-End Design

The Internet was initially designed on the principle of the "smart edges, simple network" model. This simple and scalable design, along with the deployment of low cost infrastructure equipment, became the reason for the success story of the Internet. The smart-edges design dictates that services that can easily be implemented at the end devices should be implemented at the end devices [13]. However, with advancements in information and communication technologies, user expectations from these technologies have significantly increased. Nowadays, users expect services like mobility management, bandwidth aggregation, quality of service, security, etc. that were classically not provided by the earlier design of the Internet. In order to provide these services, network designers began to put more and more intelligence into the network [14, 15]. However, in spite of this modified network design, many researchers still believe in the end-to-end design philosophy of the Internet [16, 17]. The architecture presented in this thesis is designed on the


end-to-end design philosophy, because, compared to network-centric solutions, an end-to-end design has many associated advantages, listed below:

- There is no need to deploy additional intermediate entities in the network infrastructure to provide the services of mobility management and bandwidth aggregation. This liberates end users from dependence on the network service provider in order to enjoy an end-user service. In spite of the large standardization efforts behind network-centric solutions, their dependence on the service provider has been the biggest hindrance to their deployment.
- End-to-end solutions are simpler to design and implement than network-centric solutions.
- A secure trust relationship between two communicating nodes is relatively simple to maintain in end-to-end solutions, as compared to network-centric solutions that involve intermediate entities.

Although the end-to-end design has many advantages, many challenges are also coupled with these advantages. These challenges include the following:

- The protocol architecture must be above the network layer.
- The design cannot use any additional intermediate entity like a proxy server, home agent, mobility agent, etc. With this limitation, it is hard to provide micro-mobility solutions like those provided by Mobile IP based micro-mobility management solutions. Without micro-mobility management, it is challenging to provide minimum service disruption time during handovers.
- To facilitate TCP applications, if an end-to-end solution is provided at the transport layer, it becomes necessary to change the current implementation of the widely deployed TCP. However, the standardization efforts in this regard are at an infancy stage, and so far operating system vendors have not shown enough confidence to implement such solutions.
- If an end-to-end solution is provided above the transport layer, then application transparency with legacy applications becomes challenging.


1.4 Problem Statement

Although a number of solutions have been proposed to address the issues highlighted in this thesis, they could not gain widespread popularity due to the following limitations:

- Some of the solutions require the deployment of additional entities in the network infrastructure to provide the mobility management and/or bandwidth aggregation services. From this perspective, network service providers have been the hindrance in deploying these solutions.
- Some of the solutions require changes either in the TCP implementation or in other layers of the communication protocol stack that are implemented as part of the operating system kernel. From this perspective, operating system vendors have been the hindrance in deploying such solutions.
- Some of the solutions do not provide enough specification detail necessary for their deployment.
- Some of the solutions do not provide the desired services, such as seamless handover, wilful handover, simultaneous handover, etc.

The objective of this research work is to overcome these limitations and develop an architecture with the following characteristics [18]:

- To design an end-to-end architecture that neither requires the deployment of additional network entities nor demands changes in the current implementation of the TCP/IP protocols.
- To provide the vertical handover management service and location management service to mobile devices.
- To provide forced as well as wilful handover services to mobile devices.
- To design a soft handover mechanism for seamless service continuation while a node moves across heterogeneous networks.
- To provide a bandwidth aggregation service for bandwidth intensive applications by simultaneously utilizing the available multiple network interfaces.




- To investigate the feasibility of dynamic DNS updates for location management.
- To investigate application transparency for the protocols above the transport layer.
- To investigate the Network Address Translation issues for mobile servers in heterogeneous networks.
- To incorporate sufficient security mechanisms to avoid certain network security attacks related to the control message exchange, such as session redirection, denial of service, replay attacks, etc.
- To develop mechanisms that use the link layer intelligence provided by the recently standardized IEEE 802.21 MIH function to facilitate vertical handover as well as bandwidth aggregation decision making.
- To design an intelligent mechanism to generate the link going down trigger of IEEE 802.21 MIH for a better continuity experience across heterogeneous networks.
- To develop a cross layer design for efficient mobility management and bandwidth aggregation decision making.

1.5 Own Contribution

The work presented in this thesis provides the design, implementation and performance evaluation of a novel session layer end-to-end architecture that exploits the multihoming property of modern mobile devices in order to provide the services of mobility management and bandwidth aggregation to the TCP applications running on these devices. Developing such an architecture has been possible due to the following research contributions:

- Designing a session layer protocol containing two modules, i.e. the Association Handler and the Data Handler. The Association Handler manages the secure association between two communicating nodes, whereas the Data Handler manages the user data transfer during the three states of the system, i.e. the normal state, the handover state and the bandwidth aggregation state.
- Designing a User Agent module that gets the user preferences about the priorities of the network interfaces as well as of the applications that will be taken care of by the proposed architecture.
- Designing a link layer module that generates the IEEE 802.21 MIH triggers intelligently.




- Designing a cross layer decision engine module that gets information from the User Agent and IEEE 802.21 MIH modules and makes the necessary handover, location management and bandwidth aggregation decisions.
- Implementing an application transparency module that provides application transparency to legacy applications.
- Designing an end-to-end mechanism that resolves the Network Address Translation (NAT) issues regarding location updates of mobile servers.

As presented in Chapters 4 and 5, the experimental implementation and performance comparison results show the following advantages of the proposed architecture:

- There is zero handover delay and service disruption time using the proposed architecture in the regions where multiple network interfaces can be used simultaneously.
- A significant performance gain in transmission time is observed when data is sent simultaneously on multiple network interfaces using bandwidth aggregation.
- These performance gains are achieved without the support of any additional entity in the network infrastructure.
- The implementation of the proposed architecture is achieved without any change in the current implementation of TCP or in any component of the operating system kernel.
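As a back-of-the-envelope check of the aggregation gain, the ideal transmission time with proportional striping can be computed as follows. The rates and file size are hypothetical, mirroring the 2 Mbps WLAN plus 1 Mbps WiMAX example of Section 1.1; protocol overheads and reordering costs are ignored:

```python
def transfer_time(size_mbits, rates_mbps):
    """Ideal transmission time when data is striped over interfaces in
    proportion to their rates (all overheads ignored)."""
    return size_mbits / sum(rates_mbps)

def split(size_mbits, rates_mbps):
    """Share of the data assigned to each interface, proportional to its
    rate, so that all interfaces finish at the same time."""
    total = sum(rates_mbps)
    return [size_mbits * r / total for r in rates_mbps]

size = 30.0  # Mbit
print(transfer_time(size, [2.0]))       # 15.0 s on the 2 Mbps interface alone
print(transfer_time(size, [2.0, 1.0]))  # 10.0 s with aggregation
print(split(size, [2.0, 1.0]))          # [20.0, 10.0] Mbit per interface
```

With the proportional split, both interfaces finish together (20 Mbit at 2 Mbps and 10 Mbit at 1 Mbps each take 10 s), which is where the transmission time gain comes from.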

1.6 Chapter Summary

This chapter presented a brief introduction to the subject area, starting with the concept of multihoming and then explaining the mobility management and bandwidth aggregation concepts. As there are many possible dimensions to these research areas, the scope of this thesis was defined in this chapter. It was also discussed that this research work focuses on TCP applications to exploit the property of multihoming and to provide the mobility management and bandwidth aggregation services, and why the end-to-end design philosophy was adopted for this research work.

The rest of the thesis is structured as follows. Chapter 2 describes the literature review of the areas that lie within the scope defined in Chapter 1. The proposed system architecture, its main


components and how the services of mobility management and bandwidth aggregation can be provided using these components, are discussed in Chapter 3. Chapter 4 presents the implementation design and experimental results of the proposed architecture. Chapter 5 presents the performance comparison of the proposed architecture with other proposed solutions. In the end, Chapter 6 concludes this thesis with a brief discussion of possible future directions in which the work presented in this thesis can be extended.


Chapter 2: Literature Review

This chapter starts with a discussion of the standardization efforts regarding multihoming in the IETF Multiple Interfaces working group. Afterwards, it describes how the IEEE 802.21 Media Independent Handover (MIH) standard facilitates multihomed devices in making vertical handover decisions. Then, different techniques are discussed that provide the services of mobility management and bandwidth aggregation for multihomed mobile devices. As the major portion of current Internet traffic uses TCP as the reliable transport protocol, the work presented in this chapter focuses on TCP applications. Other types of applications, e.g. multimedia applications, that can benefit from the multihoming property of mobile devices are out of the scope of this thesis. Therefore, the work done for transport layer protocols other than TCP is not discussed in this chapter.

The IETF multiple interfaces (MIF) working group is working to address the configuration issues of devices having multiple network interfaces [19, 20]. The working group considers both physical as well as logical network interfaces. The main objective of MIF is to address the issues related to the global configurations of multihomed devices that may vary among the interfaces [21, 22]. In addition, MIF is also working to define API considerations for multihomed devices in order to provide better control for applications to select e.g. the first hop, source address, DNS server, etc. [23]. However, how the existence of these multiple network interfaces can be efficiently utilized to provide the mobility management and bandwidth aggregation services to user applications is currently out of the scope of this working group.


The IEEE 802.21-2008 MIH standard defines mechanisms that facilitate the handover process for mobile nodes between the IEEE 802 family of networks and other networks, e.g. 3GPP and 3GPP2 cellular networks [4]. MIH defines a logical entity, the Media Independent Handover Function (MIHF), that provides link layer intelligence to the upper layer protocols. The MIHF receives link information from lower layers and provides this information to upper layers; its objective is to hide link specific complexities from upper layer protocols. An upper layer entity that uses the information provided by the MIHF is termed an MIH User. An MIH User can be any upper layer entity, for example a handover decision algorithm [24, 25] or any Layer-3 or above mobility management protocol [26].

The Media Independent Event Service (MIES) is a service provided by the MIHF that reports changes in local as well as remote link layer properties in the form of triggers: i) Link Detected, ii) Link Up, iii) Link Going Down, and iv) Link Down. Link Going Down (LGD) is a predictive trigger indicating that link connectivity may soon be lost due to degraded link conditions. It is an indication that the mobile node is moving away from the current access network and may soon be out of its coverage area. A timely generated LGD event can significantly decrease the handover delay of mobility management protocols. This can decrease the packet loss ratio during the handover process and thus help to improve the user experience in mobility scenarios [27]. An MIH compliant network interface can generate this event by predicting future link conditions and inferring whether link connectivity will be maintained in the near future.

In order to perform a handover, multiple control messages at different layers of the communication protocol stack are exchanged between the two communicating nodes. These may include layer 2 authentication and association messages, IP address acquisition messages, handover signalling messages, etc. Hence, handover is considered a costly process, and incorrect generation of the LGD trigger can initiate this overhead unnecessarily. Therefore, it is important to generate the LGD trigger intelligently. IEEE 802.21 MIH defined new link layer service access points in order to get link specific information from the corresponding network interfaces; e.g., MIH specific amendments for IEEE 802.11 have been defined in the IEEE 802.11u-2011 amendment [28]. However, how the LGD


trigger can be generated is out of the scope of the MIH standard. IETF has also standardized some procedures to help end nodes discover the MIH services [29, 30].
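Since LGD generation is left open by the standard, one simple way to realize it, used here purely as an illustrative sketch, is to extrapolate a short history of RSSI samples and fire the trigger when the fit predicts the signal will cross a disconnect threshold soon. The threshold, prediction horizon and sample spacing below are assumptions for illustration, not values from IEEE 802.21:

```python
def lgd_trigger(rssi_samples, threshold_dbm=-85.0, horizon=3):
    """Fire a Link_Going_Down trigger if a least-squares linear fit of the
    recent RSSI samples predicts the signal will reach `threshold_dbm`
    within the next `horizon` sampling intervals.  Purely illustrative."""
    n = len(rssi_samples)
    if n < 2:
        return False
    # Least-squares slope of RSSI over sample indices 0..n-1.
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(rssi_samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, rssi_samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    predicted = rssi_samples[-1] + slope * horizon
    return slope < 0 and predicted <= threshold_dbm

print(lgd_trigger([-70, -74, -78, -82]))  # True: trend crosses -85 dBm soon
print(lgd_trigger([-70, -70, -71, -70]))  # False: link is stable
```

A filter of this kind trades off early warning against false triggers: a lower threshold or shorter horizon reduces spurious LGD events (and the costly signalling they would start) at the price of later notification.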

2.1 Mobility Management with Multihoming

One of the services that can be effectively provided using multiple network interfaces is the vertical handover service. Most wireless access technologies have their own handover mechanisms: the IEEE 802.11 family of WLANs has a well defined handover procedure [6], the IEEE 802.16-2009 standard for WiMAX defines a mobility service at the link layer [5], and cellular networks (e.g. GSM, GPRS, CDMA, WCDMA, CDMA2000) also have mature methods for handling location management and handover management. However, almost all link layer solutions provide only horizontal handoff mechanisms; there is no mature vertical handover solution for multihomed devices at the link layer. The vertical handover service is normally provided at an upper layer of the protocol stack. Therefore, in the following, only upper layer mobility management solutions are discussed.

2.1.1 Network Layer Solutions for Mobility Management

Over the Internet, a node is uniquely identified by an IP address. This address specifies the point of attachment of the node to the Internet, and packets destined to the node are routed based on this address. In order to receive IP packets, a node must be located in the network corresponding to its IP address. This restricts the node from moving out of its network while remaining able to receive packets in another network. Mobile IP was proposed to solve the problem of node mobility by redirecting the packets destined for the mobile node from its home network to its current location.

Mobile IP based network layer mobility management solutions can broadly be categorized into macro-mobility management and micro-mobility management solutions [31, 32]. Movement of a mobile node between two network domains is called macro-mobility, whereas movement of a mobile node between two subnets of the same network domain is called micro-mobility. Many micro-mobility management solutions have been proposed for reducing signalling overhead and handover delay during movements within a single network domain. Broadly


speaking, these solutions can be classified into tunnel-based and routing-based micro-mobility schemes. To limit the scope of mobility-related signalling messages, tunnel-based schemes use the concept of local registration along with encapsulation. Tunnel-based micro-mobility management protocols include Hierarchical Mobile IP (HMIP) [33], Mobile IP Regional Registration (MIP-RR) [34] and the Intra-Domain Mobility management Protocol (IDMP) [35]. In routing-based solutions, routers maintain host-specific routes in order to forward packets; as mobile nodes move, these host-specific routes are updated. Routing-based micro-mobility solutions include Handoff Aware Wireless Access Internet Infrastructure (HAWAII) [36] and Cellular IP (CIP) [37]. However, none of these solutions specifically handles the multihoming issues. The following are some mobility management protocols that propose solutions for multihoming issues at the network layer.

Proxy Mobile IPv6 (PMIPv6) is a network based mobility management protocol [38]. In PMIPv6, Mobile Access Gateways (MAGs) and Local Mobility Anchors (LMAs) handle all the mobility management aspects of the mobile node, and the mobile node remains unaware of its change of network access. The MAG works as an access router that manages the mobility related signalling for the mobile node, and the LMA works as a home agent for the mobile node. In this way, PMIPv6 enables IP mobility for a host without requiring its participation in any mobility related signalling. Mobility management entities in the network infrastructure are responsible for handling the movement of the mobile node and initiating the mobility management related signalling on the mobile node's behalf.

The use of multiple interfaces in mobile nodes with PMIPv6 is proposed in [39]. The proposal defines a new 2-level prefix model in which a permanent level-1 prefix is allocated by the LMA. The level-1 prefix is used to support inter-interface and inter-access handover, and a logical interface is used for the permanent level-1 prefix. A temporal level-1 prefix is used to support flow handover. The level-2 prefix is allocated by either the LMA or the MAG. Multihoming enhancements to PMIPv6 are also discussed in [40]. In order to support multiple mobility sessions for a multihomed MN, PMIPv6 creates multiple mobility sessions per interface, and each mobility session is managed as a separate binding cache entry. The LMA can assign multiple network prefixes to a single interface. All these prefixes are managed under a single


mobility session. The LMA allows handover between two different network interfaces of the MN. In this case, all network prefixes associated with one network interface will be associated with the other network interface: after handover, the LMA assigns the same network prefixes, previously assigned to the first network interface, to the second network interface and updates the existing binding cache entry. In this process, any previous binding cache entry associated with the second interface may be removed, and thus existing flows through the second network interface may be disrupted. Moreover, there is no way to enable flow specific handover from one network interface to the other.

To overcome these limitations, a hybrid home network prefix assignment scheme is proposed in [41], in which the LMA divides the home network prefix into a static HNP and dynamic HNPs. The static HNP is based on a per-MN prefix model and is used for simultaneous access, whereas dynamic HNPs are used only for handovers. Dynamic HNPs are assigned to multiple interfaces and are switchable from one interface to the other. Before handover, a dynamic HNP is assigned to one interface; if the MN wishes to perform handover to another network interface, it conveys this information to the MAG with which the second interface is associated. This MAG sends a proxy binding update message along with a handover indication to the LMA. The LMA updates the binding cache entry associated with the dynamic HNP, and subsequent packets are forwarded to the second interface of the MN.

The LMA assigns different HNPs to different MAGs, and thus a multihomed mobile node gets different HNPs. In this way, the LMA will have multiple binding cache entries for the mobile node. If some data is received by the LMA for this mobile node, this data can be delivered to the mobile node only via one interface. In this situation, the mobile node is not able to get sufficient benefit from its multihoming capability.
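The single-path delivery limitation just described, and the multi-CoA extension discussed next, can be contrasted with a toy binding-cache model. The prefixes, addresses and function names below are illustrative only, not PMIPv6 data structures:

```python
# Toy LMA binding cache: each home network prefix (HNP) maps to the proxy
# care-of address (proxy-CoA) of a single MAG, so downlink data for the
# mobile node can leave through only one interface at a time.
binding_cache = {"2001:db8:1::/48": "203.0.113.1"}   # HNP -> one proxy-CoA

def forward(hnp):
    return binding_cache[hnp]            # always the same single path

# Extending the entry to a *list* of proxy-CoAs lets the LMA spread
# flows over all of the mobile node's registered interfaces:
multi_cache = {"2001:db8:1::/48": ["203.0.113.1", "198.51.100.1"]}

def forward_flow(hnp, flow_id):
    coas = multi_cache[hnp]
    return coas[flow_id % len(coas)]     # naive per-flow distribution

print(forward("2001:db8:1::/48"))          # 203.0.113.1 for every packet
print(forward_flow("2001:db8:1::/48", 0))  # 203.0.113.1
print(forward_flow("2001:db8:1::/48", 1))  # 198.51.100.1
```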
[42] proposes to extend the binding cache entry at the LMA to bind an HNP to several proxy-CoAs for the same mobile node. This information is then synchronized between the LMA and the MAG. With this extension, data received at the LMA can be delivered to the mobile node through all of its network interfaces. This helps in flow mobility and in load sharing through bandwidth aggregation.

Network Mobility (NEMO) basic support was proposed in [43]. NEMO support is transparent to mobile network nodes, which do not need to take any action for network mobility management.


NEMO provides Internet connectivity to nodes located in an IPv6 mobile network by setting up bidirectional tunnels between mobile routers and their respective home agents. There are several configurations in which a mobile network can be seen as multihomed. Multihomed configurations can be classified depending on how many mobile routers (MR) are there, how many egress interfaces are there, how many care of addresses are available and how many home addresses are available to the MRs [1]. However, standard takes three parameters into account i.e. number of Mobile Routers (MRs), number of Home Agents (HAs), and number of Mobile Network Prefixes (MNPs). In all the cases, a MR is termed as multihomed if i) multiple network prefixes are available either on home link or on foreign link, or ii) MR is equipped with multiple network interfaces. In this case, MR will have multiple (HoA, CoA) pairs. In order to provide security services to multihomed mobile devices, mobility and multihoming extension of IKEv2 (MOBIKE) is presented in [44]. Traditionally, IPsec is used to provide confidentiality, data integrity, access control and authentication services to IP traffic. For dynamically establishing association states of IPsec, Internet Key Exchange (IKEv2) protocol is defined in [45]. Purpose of IKEv2 is to mutually authenticate two hosts, establish one or more security associations (SAs) between them, and manage the established SAs. IKEv2 enables hosts to share information of cryptographic algorithms and local security policies i.e. which kind of traffic should be protected [46]. IPsec had no provision to change IP addresses included in SA pair after an SA is established, however, IP address of SA pair may change due to mobility or failover of a network interface. In these scenarios, IPsec SA was needed to be re-established. However, re-establishing SA is an expensive operation, especially when IP address changes are frequent. 
MOBIKE allows the IP addresses in an existing security association to be changed for multihomed devices, without re-establishing the SA.

2.1.2 Layer 3.5 Solutions for Mobility Management

Currently over the Internet, the IP address is used for location identification as well as for end-point identification. The Host Identity Protocol (HIP) separates this dual role of the IP address by defining an architecture that decouples the transport layer from the network layer. It proposes to use a new Host Identifier (HI) as the end-point identifier at the transport layer and to use the IP address as the location
identifier at the network layer [47]. As HIP works below the transport layer and above the network layer, it can be termed a layer 3.5 solution. HIP uses a public/private key pair as the host identity. HIP defines messaging and procedures for basic network level mobility and simple multihoming [48]. The standard defines a generalized locator parameter that notifies the peer node about an address change due to either mobility or multihoming. Although HIP describes end-to-end mobility management procedures, it does not address localized mobility management. When a mobile node moves into a new network, it acquires a new IP address and notifies its peer by sending a HIP Update packet containing a locator parameter; the peer node acknowledges this Update message. To support host multihoming, a host is allowed to have multiple locators simultaneously. The mobility and multihoming extension for HIP is defined in [48]. HIP also introduces a new network entity, the rendezvous server, for location management and simultaneous mobility support [49]. Provisioning of a fault tolerance mechanism by setting up multiple connections between a multihomed device and a peer node is defined in [50]. This document proposes a failure detection mechanism to decide when to initiate connection switching and recovery from failure. Peer nodes periodically exchange bidirectional forwarding detection (BFD) messages to learn each other's connectivity status. A node detects a link failure if it receives no BFD messages for a specific time; after detecting the failure, it can send a HIP Update message to change the locator parameter. Some implications of adding HIP to the protocol stack, the Internet infrastructure, applications, and the network operator's perspective are summarized in [51]. These include:

- There have been two types of HIP implementations: direct kernel modification, and HIP as a user-space program with the kernel configured to route packets through it.

- Applications may be HIP-aware or HIP-unaware. When HIP is implemented as a user-space program, application transparency can be provided by interposing a modified resolver library, e.g. through dynamic re-linking of the resolver library using LD_PRELOAD. The latency of this additional HIP processing may be a concern for many applications.

- There may be issues regarding deployment and interaction with entities in the network infrastructure such as DNS, NATs, firewalls and HIP rendezvous servers.

- From the network operator's perspective, deploying HIP may require management of a public key infrastructure, a certificate authority, DNSSEC, firewalls, etc.

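Returning to the failure detection scheme of [50]: a link is declared failed when no BFD messages arrive within a detection interval, after which a HIP Update can switch the locator. A minimal sketch follows; the interval value, class names and the returned message are assumptions for illustration, not taken from the specification.

```python
# Simplified sketch of BFD-style failure detection over a HIP association.
# The detection interval and locator handling here are illustrative only.
class LinkMonitor:
    def __init__(self, locators, detect_interval=3.0):
        self.locators = list(locators)      # available local locators
        self.active = self.locators[0]
        self.detect_interval = detect_interval
        self.last_bfd_rx = 0.0              # time of last BFD message received

    def on_bfd_message(self, now):
        self.last_bfd_rx = now

    def check(self, now):
        """Declare failure and switch locator if BFD messages have stopped."""
        if now - self.last_bfd_rx > self.detect_interval:
            failed = self.active
            remaining = [loc for loc in self.locators if loc != failed]
            if remaining:
                self.active = remaining[0]
                # In HIP, a HIP Update carrying the new locator would be sent.
                return f"UPDATE locator -> {self.active}"
        return None

mon = LinkMonitor(["2001:db8::1", "2001:db8::2"])
mon.on_bfd_message(now=10.0)
assert mon.check(now=11.0) is None   # within the interval: link considered alive
msg = mon.check(now=20.0)            # 10 s of silence: declare failure, switch
```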
Site multihoming is a different issue from host multihoming. A solution for site multihoming is proposed by the Shim6 protocol, which is a shim layer within the IP layer [52]. It is placed between the IP fragmentation/reassembly sub-layer and the IP routing sub-layer. Shim6 processing is performed on individual hosts, not site-wide. Hosts in a multihomed site, holding multiple IPv6 address prefixes allocated by different network providers, use Shim6 to set up locator-pair state with a peer host. If for some reason one locator pair stops functioning, a failover to a different locator pair can be performed using the Shim6 state. Shim6 is an application-transparent solution with minimal impact on upper layer transport and application protocols. Shim6 is more suitable for long-lived communication sessions; establishing a Shim6 context for short-lived sessions is not beneficial. Although Shim6 is proposed as an extension to IPv6, it could be applied as an extension to IPv4 as well. Shim6 allows a host to inform the peer node of its preferences about local interfaces, and in this sense provides the host with a traffic engineering capability. The interaction of Shim6 with other protocols, e.g. MIPv6, Secure Neighbor Discovery (SeND), SCTP, NEMO and HIP, is discussed in [53]. The multihoming support provided by Shim6 may conflict with that of other multihoming-capable protocols such as HIP. To avoid such conflicts, only one such protocol is recommended at a time; therefore either Shim6 or HIP can be used, but not both simultaneously [54]. The current specification of the Shim6 protocol only facilitates failover and load sharing for a multihomed site; it does not specify host mobility support [52].
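A minimal sketch of Shim6-style locator-pair failover follows. The class name, the probe callback and the addresses are assumptions for illustration; real Shim6 uses REAP reachability probing.

```python
# Illustrative Shim6-style context: an ordered list of (local, remote)
# locator pairs with failover to the first working pair, transparent to
# the upper layers that keep using the original identifier addresses.
class Shim6Context:
    def __init__(self, locator_pairs):
        self.pairs = list(locator_pairs)   # ordered by preference
        self.current = 0

    @property
    def active_pair(self):
        return self.pairs[self.current]

    def failover(self, is_reachable):
        """Probe pairs in preference order; switch to the first reachable."""
        for i, pair in enumerate(self.pairs):
            if is_reachable(pair):
                self.current = i
                return pair
        raise ConnectionError("no working locator pair")

ctx = Shim6Context([("2001:db8:a::1", "2001:db8:b::1"),
                    ("2001:db8:c::1", "2001:db8:d::1")])
# Suppose the provider-A path fails; a probe function reports reachability.
dead = {("2001:db8:a::1", "2001:db8:b::1")}
ctx.failover(lambda pair: pair not in dead)
```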

2.1.3 Transport Layer Solutions for Mobility Management

Although network layer solutions allow mobile devices to move between subnets or to change the network interface to perform a vertical handover, one of the limitations of these solutions is that they do not support simultaneous use of multiple network interfaces. These solutions cannot specify over which interface each individual type of traffic should be carried. Moreover, network layer solutions do not support services like session mobility and wilful handover. To overcome these limitations, many researchers have proposed transport layer and session layer solutions. A survey of transport layer solutions is presented in [55]. A brief discussion of these transport layer protocols follows.

Stream Control Transmission Protocol (SCTP) is a transport layer protocol that was proposed to benefit from the multihoming property. A concurrent multipath transfer extension for SCTP is presented in [56] that mainly addresses the congestion window growth problem, unnecessary fast retransmissions, and the increased ACK traffic problem during bandwidth aggregation. To provide a vertical handover service using SCTP, the cSCTP and mSCTP extensions have been proposed in [57, 58]. cSCTP and mSCTP enable mobile nodes to dynamically update the list of IP addresses in an SCTP association: adding an IP address, deleting an existing IP address, and changing the primary address [59]. As SCTP is a different transport layer protocol, it does not help TCP applications benefit from multihoming. Protocols that do benefit TCP applications are discussed in the following.

Indirect TCP (I-TCP) proposes to use a gateway as the mobility support entity; the TCP connection is split at the gateway [60, 61]. Two TCP connections are actually used in this scheme: a regular TCP connection between the gateway and the remote node, and an I-TCP connection between the gateway and the Mobile Node.
Whenever the Mobile Node moves into a new network, the TCP connection between the Mobile Node and the gateway is changed. The handover performed in this scheme is a hard handover. The scheme also requires changes in the protocol stack at the Mobile Node, and no multihoming option is available.
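The split-connection model underlying I-TCP, a gateway bridging two independent connections by copying bytes between them, can be sketched as follows. This is a generic relay under assumed in-memory stand-ins for the two connections, not the I-TCP wire protocol.

```python
# Generic split-connection relay: the gateway reads from one connection
# and writes to the other, so each side sees only its own TCP connection.
import io

def relay(conn_a, conn_b, bufsize=4096):
    """Copy all data from conn_a to conn_b (one direction of the bridge)."""
    while True:
        chunk = conn_a.read(bufsize)
        if not chunk:
            break
        conn_b.write(chunk)

# Stand-ins for the mobile-side and remote-side connections.
mobile_side = io.BytesIO(b"GET /index.html")  # data arriving from the Mobile Node
remote_side = io.BytesIO()                    # connection towards the remote node
relay(mobile_side, remote_side)
```

Because only the mobile-side connection is torn down and rebuilt on a move, the remote node never observes the address change; this is exactly what makes the scheme a hard, gateway-assisted handover rather than an end-to-end one.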

MSOCKS is a transport layer mobility solution that uses a proxy between the mobile node and a static correspondent node [62, 63]. For each data stream between the mobile node and the static correspondent node, the proxy maintains a stable data stream to/from the static host, isolating it from mobility management issues. The proxy can make and break connections to the mobile node as needed to migrate data streams between network interfaces or subnets. MSOCKS uses a split-connection model: a mobile node wishing to communicate with a remote node first connects to the proxy and tells it which remote node it wants to reach. The proxy makes a second connection to the remote node and thereafter reads data from one connection and writes it to the other, thereby bridging the mobile node and the remote node. In this way, each logical communication session between mobile node and remote node is split into two separate TCP connections. The proxy supports mobility by providing a way to change the mobile–proxy connection while keeping the proxy–static connection unchanged. For example, suppose a mobile node starts a TCP connection using one network interface. If the mobile node loses IP connectivity on that interface, it can contact the proxy using the IP address of another interface and ask the proxy to copy data from the remote node–proxy connection to the new mobile–proxy connection instead of the old one. Although the two TCP connections appear as a single TCP connection, MSOCKS causes problems for IP layer security protocols such as IPsec. Moreover, since MSOCKS needs an additional entity in the network, it is not an end-to-end solution.

For continuous connectivity of the Mobile Node, TCP Redirection (TCP-R) uses the mechanism of revising the pair of IP addresses and port numbers of an existing connection [64]. Whenever the Mobile Node's IP address changes, it sends a Redirect message to the Remote Node.
In response, the Remote Node performs an authentication process with the requesting Mobile Node. After successful authentication, the Remote Node revises the address pair of the existing TCP connection. For new connection establishment, TCP-R performs two operations: 1) check whether the CN is TCP-R aware, and 2) exchange authentication keys. For location management, TCP-R uses DNS dynamic updates. Although TCP-R is application transparent, it supports neither multihoming nor bandwidth aggregation.

To provide secure migration of an established TCP connection across IP address changes, a Migrate TCP option is proposed in [65]. Using this option, a TCP peer can suspend an already established connection and reactivate it from another IP address. Security is achieved by a connection identifier, or token, secured by a shared secret key negotiated during the initial connection establishment. The proposed option is included in a SYN segment and identifies that SYN packet as part of a previously established connection rather than a request for a new connection. The Migrate option carries the token that identifies the previously established connection on the same (destination address, port) pair. The token is negotiated during the initial connection establishment phase using the proposed Migrate-Permitted option. After successful token negotiation, TCP connections may be uniquely identified on each host either by the traditional (source address, source port, destination address, destination port) 4-tuple or by a new (source address, source port, token) triple. Location management is performed using DNS dynamic updates. Simultaneous mobility support was not available in basic TCP Migrate; it is, however, supported through DNS assistance in an extension to TCP Migrate proposed in [16].

In Freeze TCP, before initiating the handover process, the Mobile Node freezes the ongoing TCP connection by advertising a zero window size to the Remote Node [66, 67]. After completing the handover, the frozen connection is unfrozen by sending an update message. Although this scheme reduces packet loss during handover, it does so at the cost of higher delays. It does not require any intermediate node such as a proxy or gateway, but changes to the TCP protocol are required at the Mobile Node. Moreover, Freeze TCP does not support multihoming.
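The TCP Migrate connection identification described above, where a connection is reachable either by its classic 4-tuple or by a (source address, source port, token) triple, can be sketched as a demultiplexing table. Field names and the table layout are assumptions for illustration; real TCP Migrate also authenticates the token cryptographically.

```python
# Sketch of TCP-Migrate-style demultiplexing: after token negotiation a
# connection can be found by its 4-tuple or by (peer addr, peer port, token).
class ConnectionTable:
    def __init__(self):
        self.by_tuple = {}   # (saddr, sport, daddr, dport) -> conn
        self.by_token = {}   # (saddr, sport, token)        -> conn

    def add(self, saddr, sport, daddr, dport, token):
        conn = {"peer": (saddr, sport), "token": token}
        self.by_tuple[(saddr, sport, daddr, dport)] = conn
        self.by_token[(saddr, sport, token)] = conn
        return conn

    def migrate(self, saddr, sport, token, new_peer_addr):
        """A SYN carrying the Migrate option resumes the old connection
        from a new address instead of creating a new one."""
        conn = self.by_token[(saddr, sport, token)]
        conn["peer"] = (new_peer_addr, sport)  # connection survives the move
        return conn

tbl = ConnectionTable()
c = tbl.add("203.0.113.5", 80, "198.51.100.9", 40000, token=0xBEEF)
tbl.migrate("203.0.113.5", 80, 0xBEEF, "198.51.100.77")
```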
pTCP is a transport layer solution for multihomed devices that provides mobility and bandwidth aggregation services [68, 69]. It creates and maintains one TCP state for each active network interface. In a pTCP connection, a TCP-virtual (TCP-v) pipe is created for each active interface. Whenever a pTCP socket is opened by an application, a TCP-v pipe for that connection is created over an active interface. pTCP with only one pipe behaves like a traditional TCP connection. Whenever a Mobile Node moves to a new network, pTCP creates a new pipe but does not close the previously established pipe. In this scheme, both communicating nodes need to be pTCP
aware. pTCP does not require support from the network infrastructure to provide its services. pTCP does not support a wilful handover service; bandwidth aggregation, however, is proposed in [68].

To maintain connection continuity, the Local Connection Translation (LCT) based handoff protocol introduces two components: a Connection Manager (CM) and a Virtual Connectivity (VC) component [70]. The CM detects link layer and network conditions, e.g. signal strength, available bandwidth and end-to-end delay. In mobility scenarios, the VC is responsible for maintaining connection continuity. The VC has an internal component called LCT that maintains, for each active connection, a mapping between old and new connection information. To support mobility, this protocol introduces a Subscription/Notification (S/N) service at the application layer, which requires an additional entity, an S/N server, in the network. An S/N client is also required at each end node to communicate with the S/N server using the S/N protocol. The VC handles mobility by sending a Connection Update (CU) message to the RN. In response, the RN can send either a Connection Update ACK (CUA) message or a Connection Update Challenge (CUC) message, which is acknowledged by the MN. In this scheme, location management is performed using DNS dynamic updates.

Tsukamoto proposed modifying the transport layer to handle multiple connections during the handover process [71, 72]. A cross-layer manager detects changes in the status of wireless networks and selects the appropriate network interface. To perform the handover, a new TCP connection is initiated via the new network interface and parallel transmission starts over the two TCP connections. For each connection, the bandwidth and bandwidth-delay product are calculated and compared over the first few packets of TCP's slow start phase; based on this comparison, the handover is either initiated or ignored.
In this scheme, no details of the control and data flow are given; the available information is not sufficient for implementation and much work remains to be done. No provision for wilful handover is available in this proposal. Although bandwidth aggregation may be possible in this scheme, the authors do not discuss this feature.
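The handover decision just described, comparing per-connection bandwidth and bandwidth-delay product (BDP) estimates from the first few packets, can be sketched as follows. The estimation method, the sample format and the switching margin are assumptions filled in for illustration, since the original proposal leaves these details open.

```python
# Illustrative handover decision: estimate bandwidth and bandwidth-delay
# product (BDP) for the old and new connections, then compare them.
def estimate(samples):
    """samples: list of (bytes_acked, interval_s, rtt_s) per early packet."""
    bw = sum(b / t for b, t, _ in samples) / len(samples)   # bytes/s
    rtt = sum(r for _, _, r in samples) / len(samples)
    return bw, bw * rtt                                     # (bandwidth, BDP)

def should_handover(old_samples, new_samples, margin=1.2):
    """Hand over only if the new path is clearly better (assumed margin)."""
    old_bw, old_bdp = estimate(old_samples)
    new_bw, new_bdp = estimate(new_samples)
    return new_bw > margin * old_bw and new_bdp > old_bdp

old = [(1500, 0.010, 0.050)] * 4    # ~150 kB/s over the old interface
new = [(1500, 0.004, 0.040)] * 4    # ~375 kB/s over the new interface
```

The margin guards against flapping between interfaces when the two estimates are close, a concern the slow-start-based measurement makes likely.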

Socketless TCP (SL-TCP) is not a complete mobility management protocol; rather, it only provides services for vertical handover execution [73]. A unique Connection IDentifier (CID) is used separately at both ends. Whenever the Mobile Node moves across networks, the CID at the RN is updated. SL-TCP performs connection initialization, connection management and connection migration, but does not discuss location management.

Vertical handoff Aware TCP (VA-TCP) uses the abstraction layer introduced by the LCT-based handoff protocol to maintain application transparency [74]. The protocol adds some new messages to support vertical handover. When an MN enters a new network, VA-TCP dynamically estimates connection parameters like bandwidth, delay and bandwidth-delay product. The MN sends a notification message to the RN announcing its new IP address, and the RN updates the connection parameters. VA-TCP requires protocol stack changes due to the abstraction layer, and since it does not support multihoming, it does not provide bandwidth aggregation.

To enhance application performance through path diversity, a relatively new working group, Multipath TCP (MPTCP), was established in the IETF [75]. The MPTCP working group intends to develop mechanisms that simultaneously use multiple paths for a regular TCP session. The focus of MPTCP is to identify and utilize multiple paths, which may be independent of the network addresses of the end nodes, and to devise a common congestion control mechanism for these multiple paths [76]. So far, architectural guidelines have been defined as an informational RFC [75]. A threat analysis of using multiple addresses is presented in [77], and MPTCP application interface considerations have been proposed in [78]. MPTCP is designed on the end-to-end principle and is aimed to be deployed without significant changes to the existing Internet infrastructure.
However, implementing MPTCP requires changes in the operating system kernel, and MPTCP requires protocol stack changes at both ends.

2.1.4 Session Layer Solutions for Mobility Management

An end-to-end Session Layer Mobility management (SLM) solution is presented in [79]. SLM supports mobility management by introducing a session layer. SLM operates above TCP and switches data streams between multiple connections. The end-to-end network path is divided into
three separate paths: two between the applications on the two hosts and their socket connectors, and a third between the socket connections of the two hosts. SLM is proposed for the provisioning of QoS management. It introduces a new semantic in which open data streams are treated as separate sessions; this semantic can provide the necessary support for applications that demand a specific operational environment. Although SLM does not support handover management, it supports location management through a User Location Server (ULS), an additional network entity alongside the DNS server. One major limitation of SLM is that it has no mechanism to support the multihoming property of mobile devices. Furthermore, SLM requires changes at both ends, i.e. at the mobile node and the remote node, and it is not an application transparent solution.

2.2 Bandwidth Aggregation and Multihoming

The terms data striping, multiplexing and bandwidth aggregation are sometimes used interchangeably. By definition, striping refers to the process of aggregating physical resources to obtain higher performance. Multiplexing means combining multiple data inputs into a single output in such a way that the data on the inputs can be recovered through the process of demultiplexing. Multiplexing can be of two types: physical multiplexing, which includes frequency division multiplexing (FDM) / wavelength division multiplexing (WDM), time division multiplexing (TDM), space division multiplexing (SDM), etc.; and logical multiplexing, which is the process of mapping multiple layer-n streams into a single layer n-1 stream. Demultiplexing is the reverse process of multiplexing: it takes a single input and spreads it across multiple outputs. Striping can be seen as a generic technique of aggregating resources through either multiplexing or inverse multiplexing, where the aggregation is transparent to higher layers except for the increased performance. For example, network striping can be achieved through aggregation of multiple N-ISDN channels to provide a better bandwidth experience to applications; this is striping through inverse multiplexing [9]. In frame aggregation, more than one frame is combined into a larger aggregated frame for transmission as a single unit in order to improve the overhead/payload ratio. For example, the IEEE 802.11n-2009 standard and the ITU-T G.hn standard support frame aggregation to provide increased user throughput [80, 81, 82]. This process can be seen as similar to logical multiplexing.

This thesis is focused on multihomed devices; therefore, only those techniques are discussed that stripe the data of a single stream across multiple network interfaces to achieve the aggregated bandwidth of multiple interfaces. By definition, bandwidth aggregation resembles the process of inverse multiplexing. In the following, different techniques are discussed that achieve bandwidth aggregation at different layers of the communication protocol stack.

2.2.1 Physical Layer Solutions for Bandwidth Aggregation

Bandwidth aggregation at the physical layer can be achieved by byte striping. For example, the ATM layer can stripe ATM cells on a byte-by-byte basis across multiple SONET physical layer instances [9]. However, this technique cannot be generalized to other access network technologies. One issue is that using two network interfaces operating in the same frequency band can degrade performance rather than improve it. For example, WLAN and Bluetooth use the same ISM band, so using both interfaces at the same time can increase interference and noise, thus decreasing the overall throughput.

2.2.2 Link Layer Solutions for Bandwidth Aggregation

ATM cell striping at the ATM Adaptation Layer can be seen as bandwidth aggregation at the link layer. In this technique, no modification to the physical layer framing is required [9]. To make the striping transparent to higher layers, ATM cell ordering must be preserved. ATM cell striping is the lowest form of striping in the protocol stack that can robustly handle the problems associated with skew and loss. A model for striping IP packets over multiple data link interfaces is discussed in [83]; the model is implemented in the NetBSD kernel, with ATM and Ethernet links used for striping. A link layer striping technique is also discussed in [8] that aggregates available physical links into a single communication path. In this technique, transmission is done on a byte-by-byte basis. To keep byte ordering preserved, IP datagrams may need to be reconstructed before crossing network boundaries; this limitation makes link layer striping useful only for local area communications.

More efficient bandwidth aggregation solutions can be devised at the upper layers of the networking protocol stack. The possible bandwidth aggregation solutions at different upper layers of the protocol stack are shown in Figure 3.

Fig 3: Different Bandwidth Aggregation Approaches

2.2.3 Network Layer Solutions for Bandwidth Aggregation

With variable length packets at the network layer, traditional striping algorithms may result in improper load sharing and may produce non-FIFO delivery of data. To address these limitations, a number of striping algorithms are discussed in [83]; the authors also developed a framework for transparently embedding these algorithms at the network layer to stripe IP packets across multiple physical interfaces. A mechanism to aggregate the bandwidth of multiple IP links is proposed in [84, 85] that splits a data flow across multiple network interfaces at the IP level. The proposed solution is transparent to upper layer protocols, so existing applications can benefit from bandwidth aggregation without being rewritten, and it is independent of link layer technologies. However, this kind of network layer striping technique has the following limitations:



- If a connection is established through interface-1, packets of this connection cannot be scheduled on interface-2, because the source address would then be that of interface-2; on reaching the destination, these packets would be discarded by the receiving TCP layer due to the different source IP address.

- If this problem is solved by allowing the protocol to insert the source address of interface-1 while transmitting packets on interface-2, it causes the problem of ingress filtering: on receiving packets with a source address different from the designated IP address, ISP routers will treat them as spoofed and discard them due to security concerns.

- How each side learns about the existence of multiple network interfaces at the peer node is also an issue.

- If the two connections experience different bandwidths, the slower link will cause unnecessary timeouts, fast retransmissions, congestion window reduction, slow start, the start of congestion avoidance, etc. All these issues decrease the overall performance of the bandwidth aggregation system.

To overcome these limitations, the authors proposed a solution that uses an IP-in-IP tunnelling technique. The network layer forwards packets routed to interface-1 as they are, while packets routed to interface-2 are encapsulated with an additional IP header carrying the source IP address of interface-2. Hence, ISP routers will not treat these packets as potential attack traffic. The receiver at the destination strips the outer header and forwards the packets to the destination TCP as packets belonging to the same connection. However, this solution incurs additional IP-in-IP encapsulation overhead.

A network layer proxy based solution is proposed in [7, 86]. The client gets the IP address of the proxy and uses this address to establish a connection with the remote server. The remote server sends its response back to the proxy address. The proxy receives the packets and performs the application layer and transport layer processing. The proxy is aware of the client's multiple interfaces: it tunnels the received packets using IP-in-IP encapsulation and forwards them to the multihomed
client. A limitation of this approach is the requirement to deploy an additional entity in the network infrastructure; moreover, it also incurs the IP-in-IP encapsulation overhead.

An architecture for Mobile Communication Communities (MC2) is presented in [87]. A sample application that benefits from this architecture is hierarchically layered video streaming. This architecture is also a proxy-based solution that requires no kernel level changes at the client end. The proxy is an application layer entity that provides a channel aggregation service. Generic Routing Encapsulation (GRE) tunnels are used to create channels between participating MC2 members and the proxy. The architectural setup of this scheme is shown in Figure 4. This scheme also has the limitation of deploying an additional entity in the network infrastructure, and it has encapsulation overhead.

Fig 4: Network Layer Approach with Network Proxy for Bandwidth Aggregation
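The IP-in-IP encapsulation used by these network layer schemes can be sketched as follows. This is a structural illustration with simplified packet records, not real IPv4 header packing; the class and function names are assumptions.

```python
# Sketch of IP-in-IP encapsulation for striping across two interfaces:
# the inner packet keeps the connection's addresses, while the outer
# header carries the second interface's address so ingress filtering
# at the ISP accepts the packet.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    src: str
    dst: str
    payload: str
    inner: Optional["Packet"] = None  # set when the packet is a tunnel wrapper

def encapsulate(pkt, outer_src, outer_dst):
    """Wrap a packet in an outer IP header (IP-in-IP)."""
    return Packet(outer_src, outer_dst, payload="", inner=pkt)

def decapsulate(pkt):
    """Receiver strips the outer header, recovering the inner packet."""
    return pkt.inner if pkt.inner is not None else pkt

inner = Packet("10.0.1.5", "198.51.100.9", "segment of the TCP connection")
tunneled = encapsulate(inner, outer_src="10.0.2.5", outer_dst="198.51.100.9")
```

After decapsulation the destination TCP sees only the inner addresses, so both stripes appear to belong to a single connection; the price is the extra outer header on every tunnelled packet.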

In summary, network layer striping techniques make striping transparent to transport and application layer protocols. However, network layer bandwidth aggregation techniques raise some performance related issues with TCP [8]:

- For every reordered segment, the receiving TCP sends a duplicate ACK.

- On receiving three duplicate ACKs, the sending TCP assumes the segment was lost due to congestion and unnecessarily enters fast retransmit.

- The sender also halves its congestion window.

- To reduce reordering, a client-side buffer management policy is required that stores out-of-order TCP segments at the network layer and passes them to TCP in order.
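The client-side buffer management policy mentioned above, holding out-of-order segments at the network layer and releasing them to TCP in order, can be sketched as a resequencing buffer (class and method names are illustrative assumptions):

```python
# Sketch of a resequencing buffer: out-of-order segments are held back
# and delivered to TCP strictly in sequence-number order, so the
# receiving TCP never generates reordering-induced duplicate ACKs.
class ReorderBuffer:
    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.pending = {}          # seq -> segment held out of order

    def receive(self, seq, segment):
        """Return the list of segments that can now be passed up to TCP."""
        self.pending[seq] = segment
        deliverable = []
        while self.next_seq in self.pending:
            deliverable.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return deliverable

buf = ReorderBuffer()
early = buf.receive(1, "B")        # arrived first on the faster stripe
late = buf.receive(0, "A")         # gap filled: both segments released
```

Note the trade-off this hides: segments from the faster stripe wait in the buffer, so the policy trades added delay for the avoided spurious fast retransmits.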

Although the performance of bandwidth aggregation systems can be improved by making changes in transport layer protocols, such as adjusting retransmission timers and window sizes, the requirement of additional network entities and of changes in the operating system kernel remains the major limitation of network layer bandwidth aggregation schemes. Moreover, at the network layer it is difficult to implement intelligent mappings of specific data flows to particular network interfaces in accordance with application requirements [87].

2.2.4 Transport Layer Solutions for Bandwidth Aggregation

Packet striping at the TCP layer involves striping TCP segments among multiple lower layer stacks [9]. In contrast to lower layer striping, in transport layer striping each IP packet travels along a single stripe, and due to the multipath properties of IP networks there is no need to ensure ordering across the multiple stripes. Striping at the packet level can overcome the problem of head-of-line blocking, which occurs when smaller data packets are forced to wait while a larger one is being transmitted. If smaller packets can be scheduled on an alternate available network interface, the delay they experience may be greatly reduced; increasing the number of stripes reduces the chances of head-of-line blocking. A disadvantage of this kind of technique is that striping is not available to each individual packet: if the average transmission time of packets is shorter than the average inter-arrival time of packets received from the upper layers, the system is effectively no longer striped. This may result in larger striping latency and may consequently reduce the overall throughput experienced by applications. To overcome this limitation, striping at the transport layer has also been performed by striping packets across multiple sockets, using the same kinds of algorithms as used for striping at the link layer [83].

A Reliable Multiplexing Transport Protocol (RMTP) is presented in [88]. In contrast to TCP, RMTP is a separate transport layer protocol that aims to exploit multiple network interfaces for bandwidth aggregation. RMTP is an end-to-end solution implemented as a user space
program over UDP; thus, no kernel level changes are required to implement RMTP. To efficiently multiplex data on individual channels, RMTP implements a bandwidth estimation algorithm. Using retransmission-based policies, it also implements reliability, and a TCP-like flow control mechanism is provided. So far, mobility management using RMTP is not supported in the proposed solution.

An end-to-end bandwidth aggregation solution is presented in [89] as parallel TCP (pTCP). It acts as a wrapper around a modified version of TCP called TCP-v (TCP-virtual). Applications open a pTCP socket, and the pTCP socket in turn opens and maintains one TCP-v connection for each attached network interface. pTCP maintains a single send buffer for all TCP-v connections. pTCP can perform intelligent striping across multiple TCP-v connections and also supports redundantly striping data over multiple connections in catastrophic conditions. It decouples loss recovery from congestion control: pTCP implements a separate congestion control mechanism for each TCP-v connection, and loss recovery is also implemented individually for each TCP-v pipe. Support for pTCP is required at both sender and receiver; a pTCP header is included along with the regular TCP header. pTCP can also be implemented as a session layer solution. The choice of the number of interfaces to use is an external decision conveyed to pTCP through a socket option; however, how applications would learn about the number of interfaces dynamically attached at run time is not discussed in the proposed solution. Due to the pTCP-specific socket option, and because applications open pTCP sockets rather than traditional sockets, application transparency is a concern for pTCP.

In summary, transport layer bandwidth aggregation requires protocol stack changes, although intelligent mapping onto different interfaces is relatively easy compared to network layer solutions.
Transport layer bandwidth aggregation techniques are often application transparent [87], but not all of them are.

2.2.5 Session Layer Solutions for Bandwidth Aggregation

Session layer solutions for bandwidth aggregation have also been proposed in the literature. Striping at the session layer gives applications more control over the striping mechanism and thus lets them take more advantage of multihoming. A strawman architecture for session layer striping is

Page 31

| Literature Review

proposed in [8]. As shown in Figure 5, applications using this architecture see only a single virtual pipe and need not schedule data over multiple strips. The architecture proposes to decouple striping from the transport protocols; this decoupling enables the session layer to automatically select the transport layer protocol based on the type of application layer data.

Fig 5: A Session Layer Striping Scheme

Session layer striping can be seen as combining the advantages of the transport layer and application layer approaches. In the proposed architecture, applications inform the session layer of their requirements in terms of reliability, throughput maximization, etc., and according to these requirements the session layer selects an appropriate transport protocol. The session layer also selects appropriate network interfaces on which to initiate connections.

Processing is performed in two phases: a Connection Establishment phase and a Path Evaluation phase. In the connection establishment phase, the application informs the session layer of its connection request and connection requirements. The session layer determines whether session layer support is available on the remote node by initially sending a HELLO message and checking whether a reply comes back. The session layer then sends a request to the remote host with its list of 'm' IP addresses, the application requirements and a list of potential transport addresses; the remote node replies with its list of 'n' IP addresses and its list of potential transport addresses. The exchange of IP address capabilities and transport layer information is secured using


SSL/TLS in order to avoid session hijacking attacks. After a successful capability exchange, each session can establish up to 'n*m' transport connections.

In the path evaluation phase, the session layer periodically evaluates the service received on each connection by monitoring the round-trip time, achieved throughput and packet loss ratio. The session layer also monitors the congestion window of TCP (or another reliable protocol) for potential head-of-line blocking. When head-of-line blocking is observed on a particular connection, that connection is termed faulty and is subsequently closed; a new connection is then opened to replace it. A session layer header is attached to each packet delivered to the transport layer, and session layer sequence numbers are used for in-order delivery across multiple strips.

A user space implementation of the session layer bandwidth aggregation protocol is easy to deploy but has the limitation of lacking access to the transport layer state variables. In order to perform path evaluation, the authors therefore recommend implementing this session layer in kernel space. The authors suggest a set of APIs for communication between the application layer and the session layer. However, only the architecture is discussed; no actual implementation or experimental evaluation is presented in the paper. As applications have to inform the session layer of their requirements, which is not possible with legacy applications, this architecture is not application transparent.

2.2.6 Application Layer Solutions for Bandwidth Aggregation

As shown in Figure 6, application layer bandwidth aggregation schemes stripe user data across multiple TCP connections [9]. Each strip individually provides reliable transmission; any loss, corruption or reordering that occurs during transmission is corrected before the data reaches the inverse striping point at the application layer on the peer node. Loss of data increases the skew between the strips due to TCP retransmissions. Striping at the application layer suffers from the strip utilization problem. Applications may partition the data into different logical groups and then use a striping algorithm for each group separately. If the application data stream is split into multiple streams at the application layer, the application has to deal with the complexity of splitting the data at the sender side and merging the multiple streams into a single application flow at the


receiver side [85]. The application is also responsible for opening multiple connections across different network interfaces. It is difficult for applications to estimate in advance how many connections will be opened, and this directly affects the decision of splitting the primary application flow into a number of sub-streams.

Fig 6: Application Layer Striping

Applications establish different connections using the socket interface on each attached network interface. A Parallel Sockets library developed in C++ is presented in [90]. Applications can use this library simply by making socket calls; no transport layer state variable tuning is required. However, how this library can be used for bandwidth aggregation over multiple interfaces is not discussed. As application layer solutions are implemented in the user space of the operating system, no kernel changes are required. However, application layer solutions suffer from the following limitations [87], [88]:

- Application transparency is an issue.
- As applications have to implement a reordering mechanism and upper layer sequence numbers, application complexity is increased.
- At the receiver side, applications need large buffers to accommodate out-of-order packets.
- Application performance can degrade due to head-of-line blocking caused by a mismatch of bandwidths on the attached network interfaces [8].
- How user applications will get information about the attached network interfaces is also an issue.
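The reordering burden described in these limitations can be made concrete with a minimal receiver-side buffer: an application-layer striping scheme must hold out-of-order chunks and release them to the application strictly in sequence-number order. This is a hypothetical Python sketch; the class and method names are illustrative.

```python
# Hypothetical sketch of the receiver-side reordering burden: chunks arriving
# out of order are buffered and delivered strictly in sequence.

class ReassemblyBuffer:
    def __init__(self):
        self.next_seq = 0
        self.pending = {}          # seq -> chunk, held until deliverable

    def receive(self, seq, chunk):
        """Accept one chunk; return whatever bytes are now deliverable in order."""
        self.pending[seq] = chunk
        out = []
        while self.next_seq in self.pending:
            out.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return b"".join(out)
```

Note that every buffered chunk occupies memory until the missing chunk arrives, which is exactly the large-buffer and head-of-line blocking problem listed above.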

2.3 Location Management Service

Location management of mobile nodes is an important mobility management service; however, it is required only for nodes that act as servers, not for clients. When a mobile server changes its location, its IP address changes. Dynamic DNS (DDNS) updates provide the service of dynamically updating the name-address mapping of hosts [91]. Berkeley Internet Name Domain (BINDv9) implements the DDNS update standard [92]. Many mobility management solutions use the DDNS update option for location tracking of mobile servers [16, 93, 94]. For this purpose, the DHCP server is configured to send a DDNS dynamic update message to the corresponding name server to update the location of the mobile node. However, in scenarios where there is no DHCP server and IP addresses are assigned either manually or generated using address autoconfiguration procedures, mobile nodes can send DDNS updates themselves. If location updates are to be sent by the mobile nodes, the Host Agent detects the change in IP address and exchanges DDNS update messages with the corresponding name server. As the issue of maintaining established connections is resolved by the handover management procedure, location updates are useful for new connection requests only.

In order to overcome the scalability and overhead issues of DDNS updates, the IETF has standardized the Incremental Zone Transfer and Notify options of DNS. Incremental zone transfer of DNS data helps reduce the update overhead [95], whereas the Notify option helps reduce the delays involved in dynamic updates [96]. Security concerns regarding DDNS updates have been addressed in [97-100].
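At the byte level, a DDNS update is an ordinary DNS message whose header carries opcode 5 (UPDATE), as defined in RFC 2136. The sketch below builds only that 12-byte header; record encoding is omitted and the function name is illustrative.

```python
import struct

# Sketch of the 12-byte DNS message header for a dynamic update (RFC 2136).
# Opcode 5 marks the message as UPDATE; the four counts cover the Zone,
# Prerequisite, Update and Additional sections that follow the header.

def ddns_update_header(msg_id, updates, zones=1, prereqs=0, additional=0):
    flags = 5 << 11                       # QR=0 (request), opcode=5 (UPDATE)
    return struct.pack("!6H", msg_id, flags, zones, prereqs, updates, additional)
```

A Host Agent sending its own location update would follow this header with one zone record naming the domain and one update record carrying the new A/AAAA mapping.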


2.3.1 Problems in Location Management in NAT Environment

Location updates from behind NAT devices are not easy. There are two major problems in this regard [101, 102]:

i) How to determine the public IP address at which the mobile server is currently reachable?
ii) How to enable the NAT device to have an appropriate entry in its NAT table, so that it can forward connection requests received from remote nodes to the mobile server located behind the NAT?

Traditionally, in order to make mobile servers reachable behind the NAT, manual configurations are performed; however, due to the limitations of manual entries, many efforts have been made to automate this process. Session Traversal Utilities for NAT (STUN) was proposed as a tool for NAT traversal solutions [103]. It can be used to: i) determine the IP address and port allocated by the NAT, ii) check connectivity between two endpoints, and iii) provide a keep-alive mechanism to maintain NAT bindings. STUN is based on a client-server architecture in which the user node behind the NAT acts as a STUN client. A supporting entity called the STUN server is required in the public Internet. From behind the NAT, the user node sends a binding request message to the STUN server; this request may pass through one or more NAT devices between the user node and the STUN server. The binding request reaching the STUN server contains the IP address and port number as modified by the outermost NAT. The STUN server copies this address into a binding response message and sends it back to the user node. In this way, the user node learns the IP address and port number that the outermost NAT is using as the public address. The disadvantage of this scheme is the requirement of an additional entity, the STUN server, in the network infrastructure.

Traversal Using Relays around NAT (TURN) was proposed for scenarios where both communicating nodes are behind NATs and no direct communication path can be found.
In these situations, an intermediate node in the public Internet is used as a relay between the two communicating nodes [104, 105]. Nodes behind the NAT act as TURN clients and the relay node in the Internet acts as the TURN server. The disadvantage of this scheme is the requirement of an additional entity in the form of the TURN server. Interactive Connectivity Establishment (ICE) is a protocol that makes use of STUN and TURN in order to facilitate the offer/answer


type of protocols, e.g. UDP based multimedia sessions. ICE and its related issues are discussed in [106] and [107].

The UPnP Forum defined a set of protocols that enable devices to seamlessly discover and connect to services in home and corporate environments [108-110]. UPnP implementations and source code for popular OS platforms, e.g. Windows XP, Linux, OpenBSD, FreeBSD, Solaris and MacOSX, are available at [111]. The use of UPnP for programmatic port forwarding and NAT traversal has been discussed in [112]. However, there are two main issues with UPnP. First, UPnP has no built-in authentication mechanism, which exposes the NAT device to malicious users; they can create, modify or delete port-forwarding mappings, causing a number of security threats [113]. Secondly, UPnP uses XML files for service discovery and control; from the network's perspective, sending control information in XML files imposes a huge overhead.

Zeroconf is another mechanism that addresses these issues [114, 115]. Zeroconf uses the NAT-Port Mapping Protocol (NAT-PMP) for its automatic port forwarding service [116]. NAT-PMP is more efficient than UPnP in terms of protocol overhead; however, it allows end users to directly communicate with the NAT device and update port-forwarding entries without authentication. Therefore, Zeroconf is vulnerable to the same security attacks as UPnP. Experiences with a zero-configuration residential gateway for next generation smart homes are discussed in [117]; however, that work neither uses the Zeroconf NAT-PMP protocol nor handles the location update and NAT autoconfiguration issues.
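As a byte-level illustration of the STUN binding exchange described earlier, the sketch below builds a STUN Binding Request per RFC 5389: a 20-byte header with message type 0x0001, zero attribute length, the fixed magic cookie 0x2112A442 and a random 96-bit transaction ID. The function name is illustrative.

```python
import os
import struct

# Sketch of a STUN Binding Request header (RFC 5389). Sent over UDP to a
# STUN server, it elicits a Binding Response whose XOR-MAPPED-ADDRESS
# attribute reveals the public address:port allocated by the outermost NAT.

def stun_binding_request():
    msg_type, msg_len, cookie = 0x0001, 0, 0x2112A442
    header = struct.pack("!HHI", msg_type, msg_len, cookie)
    return header + os.urandom(12)    # random 96-bit transaction ID
```

The client matches the response to the request via the transaction ID, then reads the mapped address attribute to learn its NAT-assigned public endpoint.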

2.4 Chapter Summary

This chapter first discusses the work done by the Multiple Interfaces (MIF) working group of the IETF, including the multiple interfaces problem statement, current best practices, API extensions and some configuration issues. It then gives a brief description of the IEEE 802.21 MIH standard and its application in multihomed devices for vertical handover, along with different proposals that use the MIH standard to provide the vertical handover service. However, so far, no effort has been made to use the MIH standard for other services such as bandwidth aggregation decision


making. The chapter then discusses different solutions that provide the mobility management service using the multihoming property. Although literature exists for different layers of the TCP/IP protocol stack, the focus of the chapter has been on solutions that facilitate TCP applications. The chapter then discusses different bandwidth aggregation solutions that operate at different layers of the networking protocol stack. Some of the limitations of these solutions are summarized in the following.

Among network layer mobility management solutions, little effort has been made to provide multihoming support to end devices. Proxy Mobile IPv6 discusses some multihoming issues; however, it does not discuss the simultaneous use of multiple network interfaces to facilitate smooth handovers. In addition, PMIPv6 solutions handle only mobility management and do not provide a bandwidth aggregation service. Moreover, PMIPv6 is a network-centric solution that requires the deployment of additional entities in the network infrastructure in order to provide its services. Network multihoming issues are discussed in the Network Mobility (NEMO) working group; however, these solutions do not address end-host multihoming. The IKE Mobility and Multihoming (MOBIKE) working group has also discussed some multihoming issues, but MOBIKE only helps mobility management solutions maintain IPsec Security Associations (SAs) in mobility scenarios; it neither handles mobility management itself nor deals with bandwidth aggregation. HIP provides end-host mobility in an end-to-end manner, but its implementation requires operating system kernel-level changes, which are not easy, and its deployment requires changes in some other protocols as well.
Shim6 is another proposed protocol that discusses multihoming issues, but it mostly handles network multihoming and so far does not provide a solution for end-host multihoming. At the transport layer, although SCTP is a mature solution, it is a separate transport protocol and does not benefit the bulk of Internet traffic, which is TCP based. Some transport layer solutions, such as Indirect TCP, MSOCKS and LCT based handover, are not end-to-end and


require the deployment of an additional entity in the network infrastructure. Some solutions, such as TCP-R and Freeze-TCP, do not support multihoming. Another limitation of transport layer solutions is that they require changes in the TCP implementation in the operating system kernel. The Session Layer Mobility management (SLM) solution is not application transparent, requires a ULS server in the network infrastructure for location management of mobile servers, and does not support multihoming of mobile devices.

In order to provide the bandwidth aggregation service, many schemes have been proposed that work at different layers of the networking protocol stack. The physical layer byte striping scheme over multiple SONET interfaces cannot be generalized to heterogeneous network interfaces. In link layer aggregation schemes, IP packets need to be reconstructed before crossing network boundaries, which limits link layer solutions to local area networks. Some network layer bandwidth aggregation solutions require an additional proxy in the network infrastructure, while others incur IP-in-IP encapsulation overhead. Moreover, as discussed in the chapter, network layer bandwidth aggregation solutions also suffer from performance related issues. For providing bandwidth aggregation to multimedia applications, the Reliable Multiplexing Transport Protocol (RMTP) was proposed as a new transport layer protocol; however, TCP applications cannot benefit from it. pTCP was also proposed for bandwidth aggregation at the transport layer, but its implementation requires changes in the operating system kernel and, due to the requirement of pTCP-specific sockets, it is not application transparent. The strawman architecture for bandwidth aggregation at the session layer requires applications to inform the session layer of their requirements, which raises application transparency issues.
Similarly, application layer solutions not only cause application transparency issues but also increase the complexity of user applications.

Considering all the issues and limitations of existing solutions, there is a need to design a solution that can utilize the existence of multiple network interfaces in mobile devices to provide


mobility management and bandwidth aggregation services. Such a solution should not depend on the deployment of additional network entities, making it independent of network service providers. Similarly, the new solution should not require changes in the networking protocol stack implemented in the operating system kernel, making it independent of operating system vendors. Moreover, the new solution should be application transparent so that existing applications can benefit from it. It is noteworthy that none of the existing solutions possesses all these characteristics.


Chapter 3

3. Proposed System Architecture

The work done on multihoming, mobility management and bandwidth aggregation protocols was presented in Chapter 2, along with the limitations of these protocols. In order to overcome these limitations, a novel end-to-end architecture is presented in this chapter, together with the components of the proposed architecture and the services they provide. The chapter starts with a set of design principles and requirements of the proposed architecture. The components fall into two categories: session layer components, which maintain the association between two communicating nodes and handle data during the handover and bandwidth aggregation states, and cross layer components, which facilitate handover and bandwidth aggregation decision making. It is then discussed how the handover management, bandwidth aggregation management and location management services are provided using these components. This chapter also discusses the three dimensions in which research contributions have been made, i.e. end-to-end architecture design, intelligent prediction of the Link Going Down trigger, and location management behind NAT devices.

3.1 Design Principles of Proposed Architecture

In order to develop a solution that can effectively utilize the multihoming capability of mobile devices and provide mobility management and bandwidth aggregation services, the following

Page 41

| Proposed System Architecture

guidelines have been laid down [18]. These guidelines serve as the design principles of the proposed architecture:

- The new architecture should comply with the end-to-end design philosophy of the Internet. In order to provide its services, it should not require any additional entity in the network infrastructure.
- The new architecture should not require changes in the current implementation of the networking protocol stack in the operating system kernel.
- The new architecture should facilitate TCP applications without losing TCP's reliable byte stream service.
- Applications that do not require mobility management or bandwidth aggregation services should not suffer from the overhead caused by these services.
- For providing service continuity in mobile scenarios, the new architecture should support soft handovers.
- Current protocols lack support for wilful handovers. The new architecture should overcome this limitation and support both forced and wilful handovers.
- Mobile nodes should remain able to act as mobile servers; therefore, the new architecture should facilitate the location management service.
- In order to work behind NAT, current location update solutions rely on an additional entity in the network infrastructure. The proposed solution should overcome this limitation with a mechanism that relies only on already available network entities.
- In order to avoid security attacks related to handover and location update mechanisms, such as traffic redirection attacks, denial of service attacks and replay attacks, appropriate security mechanisms should be incorporated.
- The new architecture should provide application transparency so that legacy TCP applications remain able to use it.


3.2 Proposed Architecture Design

The proposed end-to-end architecture contains session layer and cross layer components [118]. An association is established between two communicating nodes; for association establishment, both communicating nodes must support the proposed architecture. The association is maintained on top of the transport layer. Multiple TCP connections can be established under a single association, which allows the data of a single application flow to be exchanged over multiple TCP connections. This is referred to as connection diversity.

Fig 7: Different Components of the Proposed Architecture shown in Dotted Lines

The proposed architecture is designed to provide services to both legacy applications and new multihoming-aware applications. For this purpose, a multihoming-aware library has been developed; new multihoming-aware applications call this library to obtain the mobility management and bandwidth aggregation services. In order to support legacy applications, the library function interception technique using LD_PRELOAD is used [119-121]: a shared library has been developed that overloads the standard socket calls to provide the mobility management and bandwidth aggregation services to legacy applications. Figure 7 shows the major components of the proposed architecture; the following sections describe their functionality.


3.2.1 Session Layer Components

There are two components at the session layer: the Association Handler and the Data Handler. The Association Handler manages the association between two communicating nodes, whereas the Data Handler schedules and reassembles data over multiple connections during handover and bandwidth aggregation.

3.2.1.1 Association Handler

For each application flow, an association is established between the two communicating nodes, identified by an Association Identifier (AID). Whenever an application requests a TCP connection to some node, the decision to establish an association is made by taking user preferences into account. If local user preferences do not allow the handover/bandwidth aggregation services for this application type, no association is established and a normal TCP connection is made with the remote node. If user preferences allow the use of the proposed architecture for the requesting application, the association establishment process is started and the remote node is checked for remote compliance. If the remote node does not support the proposed architecture, an association cannot be established. If support is available but association establishment is not allowed by the user preferences at the remote node, a not-compliant response is received. In these cases, no association is established and a normal TCP connection is used. However, if the remote node sends the association establishment response message, an association is established between the two nodes. During the association establishment process, a unique 32-bit AID is generated and a shared secret key for the association is also established. The AID is used in the vertical handover and bandwidth aggregation messages. The AID need not be globally unique; however, it must be unique between the two communicating nodes.

In order to ensure its uniqueness, the 32-bit AID is constructed as two concatenated 16-bit identifiers. The node initiating the association sends a 16-bit Initiator-AID that is unique at the initiator side; similarly, the responding node sends a 16-bit Responder-AID that is unique at the responder side.


Both nodes combine these two 16-bit numbers to generate the 32-bit AID that is unique between the two communicating nodes. To avoid certain security attacks, such as redirection attacks on the vertical handover and bandwidth aggregation control messages, the AID is encrypted with the key shared between the two nodes. To generate the shared key, any appropriate key exchange mechanism, such as Elliptic Curve Diffie-Hellman (ECDH), can be used [122]. Using this shared key, the AID and a nonce are encrypted with the Advanced Encryption Standard (AES) [123]; the nonce is used to avoid replay attacks [124]. During the key exchange, the initiating node sends its public key certificate in the Association Establishment Request message, and the responding node sends its public key certificate in the Association Establishment Response message. Public key certificates are used to avoid man-in-the-middle attacks during the key exchange [124]. In this way, the proposed architecture incorporates the security mechanisms needed to avoid redirection, replay and man-in-the-middle attacks. The contents of the association establishment request, response and termination messages are given below:

Assoc. Estb. Request = [AIDi, Enc. Algo, Enc. Algo. Mode, ULID, PK Certificate]
Assoc. Estb. Response = [AIDr, Enc. Algo, Enc. Algo. Mode, PK Certificate]
Assoc. Term. Request = [AID, Enc(AID + Nonce)]
Assoc. Term. Response = [AID, Enc(AID + Nonce)]

Here,
AIDi = Initiator Association Identifier
AIDr = Responder Association Identifier
AID = Association Identifier
ULID = Upper Layer Identifier
PK Certificate = Public Key Certificate
Enc() = Encrypted value
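The AID construction described above can be sketched as follows. This is illustrative Python; the choice of which 16-bit half occupies the high-order bits is an assumption, since the text does not fix an ordering.

```python
# Sketch of the 32-bit AID built from two locally unique 16-bit halves.
# Placing the Initiator-AID in the high half is an assumption for this sketch.

def make_aid(initiator_aid, responder_aid):
    assert 0 <= initiator_aid <= 0xFFFF and 0 <= responder_aid <= 0xFFFF
    return (initiator_aid << 16) | responder_aid

def split_aid(aid):
    return aid >> 16, aid & 0xFFFF
```

Because each side only has to guarantee uniqueness of its own 16-bit half, the concatenated value is unique between the two nodes without any global coordination.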


3.2.1.2 Data Handler

The Data Handler module manages the application data flow; it schedules and reassembles data during handover and bandwidth aggregation. It maintains three data transmission states: i) Normal state, ii) Handover state and iii) Bandwidth Aggregation state. After association establishment, whenever the application delivers data for transmission, the Data Handler schedules it according to the current state. In the normal state, the Data Handler simply forwards the data to TCP; the association has only one TCP connection and data transmission takes place just as over a traditional TCP connection.

When a handover is initiated, a new connection is established with the peer node. If the handover is vertical, the new connection is most likely established over a different network interface. It is possible that the previous interface is still up and the connection over it is still established; in this case there are two connections in the handover state. During the handover state, the same data is transmitted over both the old and the new connection, providing connection diversity: the data of a single application flow can be transmitted and received over multiple connections. The establishment of multiple connections under a single association is depicted in Figure 8. The peer node receives data on either connection and delivers it in order to the application; in-order delivery over multiple connections is maintained with association sequence numbers, and duplicate data received on the other connection is discarded. This simultaneous transmission on two connections facilitates a smooth handover. The old connection is closed when either the new connection reaches a stable state or the handover time-out timer expires. The handover time-out is useful when the new connection takes a long time to reach a stable state, which may happen because of frequent packet losses due to either poor wireless link conditions or network congestion.

The Data Handler can also utilize multiple active network interfaces for bandwidth aggregation. A node with multiple network interfaces can initiate the bandwidth aggregation process irrespective of whether the peer node also has multiple interfaces. The proposed architecture takes the bandwidth aggregation decision on the basis of user preferences; the user can specify the applications/protocols for which bandwidth aggregation should be used. In


the bandwidth aggregation state, the Data Handler transmits data over multiple connections. Contrary to the handover state, different data is transmitted on each connection: in the handover state, packets with the same association sequence numbers are scheduled on two connections, whereas in the bandwidth aggregation state, packets with different sequence numbers are scheduled on the available connections. The bandwidth aggregation service enables the Data Handler to transmit data at a higher aggregated rate, which helps a multihomed device overcome the throughput bottleneck on its side. The peer node's bottleneck, however, is not eliminated unless the peer node also uses multiple network interfaces. This is beneficial in scenarios where a mobile node with multiple network interfaces communicates with a high-end server.
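The Data Handler's per-state scheduling rule can be sketched as follows: in the handover state the same association sequence number goes out on every connection, while in the bandwidth aggregation state consecutive sequence numbers are spread across the connections. This is a hypothetical sketch; the function, state and connection names are illustrative, and the round-robin striping stands in for the throughput-weighted scheduling described in the text.

```python
# Sketch of the Data Handler's scheduling rule per transmission state.

def schedule(state, seqs, connections):
    """Return a list of (connection, seq) transmissions for the given seqs."""
    if state == "normal":
        return [(connections[0], s) for s in seqs]
    if state == "handover":                       # duplicate on old and new
        return [(c, s) for s in seqs for c in connections]
    if state == "aggregation":                    # stripe, simple round-robin
        return [(connections[i % len(connections)], s)
                for i, s in enumerate(seqs)]
    raise ValueError("unknown state: " + state)
```

The association sequence numbers carried in each tuple are what let the receiver discard duplicates in the handover state and reorder stripes in the aggregation state.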

Fig 8: Association with Multiple TCP Connections

As each network interface may have a different bandwidth, connections over a slower interface may cause head-of-line blocking on connections over a faster interface. To reduce this possibility, data on each connection is scheduled at the rate at which that connection is transferring data to the peer node. Transmission performance is measured for each connection, and while scheduling data, connections with higher throughput are given higher priority. The data receiving node maintains a head-of-line blocking threshold; if out-of-sequence data exceeds this threshold, the Data Handler at the receiving side can request the Data Handler at the sending side to retransmit


data. The retransmission request message includes the expected association sequence number at which the blocking is occurring.

3.2.2 Cross-Layer Components

Three cross-layer components are included in the proposed architecture. These are i) User Agent, ii) IEEE 802.21 MIH and iii) Host Agent. Functionality of these components is described in the following. 3.2.2.1 User Agent User preferences about applications and network interfaces are provided through the User Agent Module. Vertical handover service is beneficial for long lived connections, whereas bandwidth aggregation benefits the applications with high throughput requirements. For applications with short lived connections, these services are of less importance. Therefore, it is not suitable to establish every TCP connection under the association. Users can provide preference of application types for which associations may be established. Whenever, a request for connection establishment is received, association for this connection request is established by considering the user preferences. In the previous section, this was also referred to as local-compliance check. For multihomed mobile devices, link condition may not be the only parameter to decide about handover initiation. For heterogeneous access networks, there may be some additional parameters that can influence the handover decisions. For example, monetary cost of using an access network and achievable data rate are the parameters that can greatly affect the user’s preference of switching from one access network to the other. There is a significant difference in available data rates for different access networks. For example, achievable data rate in GPRS/EDGE network is in 100s of Kbps, whereas achievable data rate in IEEE 802.16WiMAX network is in Mbps. If a user bears equal cost and also getting the proportionally equal link quality then achievable data rate on a particular access network is a decisive parameter. Proposed architecture supports both forced as well as wilful handovers. 
Wilful handover service is useful when a higher-preference link becomes available for communication while the lower-preference link is still available. In this scenario, connections on the lower-preference link are handed over to the higher-preference link.

Page 48

| Proposed System Architecture

This can also happen when, for any reason, the user manually switches connectivity from one access network to the other. The latter is referred to as manual wilful handover, whereas the former is referred to as automatic wilful handover. Manual wilful handover can be initiated directly from the User Agent interface, whereas automatic wilful handover is supported through the link preferences provided via the User Agent.

The user may also give preferences about the network interfaces to be used for bandwidth aggregation. Often, users in offices have free network access through the office's Ethernet and WLAN interfaces, whereas they may have their own subscriptions to WiMAX and GPRS networks. A user with a multihomed device having four network interfaces, i.e. Ethernet, WLAN, WiMAX and GPRS, may want to use only the Ethernet and WLAN interfaces for bandwidth aggregation for high data rate applications, and may not want to use the WiMAX and GPRS interfaces for bandwidth aggregation due to their higher monetary cost. These kinds of decisions have little to do with link quality.

3.2.2.2 IEEE 802.21 Media Independent Handover (MIH)

IEEE standardized the 802.21 MIH standard for providing link information to the upper layers in a media independent manner [4]. It provides this information to an upper layer entity called the MIH User. Although MIH was originally proposed for facilitating vertical handover decision making [125], in the proposed architecture MIH triggers have also been used for bandwidth aggregation decision making. In the proposed architecture, the Host Agent can initiate a handover or enter the bandwidth aggregation process on receiving a Link Up trigger. On the other hand, it can initiate a handover or exit the bandwidth aggregation state on receiving the Link Down or Link Going Down triggers. MIH provides this information through the MIH event, command and information services.
The Link Going Down (LGD) event is one such event that helps in timely initiation of the handover procedure, thus reducing handover latency and packet losses during handovers. The MIH standard defines it as a predictive event that is used to estimate future link behaviour based on past and current link conditions. However, the standard does not define any specific algorithm to predict the Link Going Down event; it is left open to use any predictive algorithm for this purpose. Mobility management protocols can start to exchange handover control messages upon reception of the LGD trigger. Incorrect triggering of the LGD event can initiate unnecessary handovers, causing overhead. Hence, the LGD trigger needs to be generated intelligently. In the proposed architecture, the LGD trigger is generated using intelligent techniques: artificial neural networks are trained on varying link conditions, and after learning the link behaviour, the trained networks are used to generate the LGD trigger.

In order to provide link layer intelligence to upper layer mobility management protocols, MIH defines a logical entity called the Media Independent Handover Function (MIHF). MIHF receives information from the link layers and provides it to the MIH user at the upper layers, thus hiding link-specific complexities from the upper layers. One of the services provided by MIHF is the Media Independent Event Service, which informs about changes in local as well as remote link layer properties in the form of link events or triggers. These triggers include i) Link Detected, ii) Link Up, iii) Link Going Down, iv) Link Down, etc. Link Going Down (LGD) is a predictive event that indicates the likelihood of link failure in the near future due to degrading link conditions. Prediction of this possible disconnectivity is based on the past and present conditions of link parameters, e.g. link quality, signal strength, etc. The LGD event indicates that the mobile node is moving away from the current access network and may soon be out of its coverage area. The proposed architecture uses the LGD trigger to significantly improve performance during the handover process, reducing handover latency and the packet loss ratio during handover. MIH also defines new link layer service access points to get link-specific information from the corresponding link technologies.
MIH-specific amendments for IEEE 802.11 (WiFi) are being defined as IEEE 802.11u [28]. As shown in Figure 9, the proposed LGD prediction module resides in the local Station Management Entity (SME) of the 802.11u management plane. The SME monitors local link conditions and generates a prediction indication message for the MAC State Generic Convergence Function (MSGCF). The MSGCF then generates an LGD indication message and passes it to the MIHF. On reception of this message, the MIHF checks whether any MIH user has subscribed to this event; if so, it delivers the MIH LGD trigger to those MIH users.


Fig 9: Prediction Module for Link Going Down Generation in IEEE 802.11u Stack

As shown in Figure 10, in the proposed architecture, LGD trigger is generated using two types of neural networks i.e. Time Delay Neural Network (TDNN) and Feed-Forward Neural Network (FFNN). Under varying link conditions, both types of neural networks are first trained to learn the connectivity status. Based upon past and current link conditions, TDNN predicts the future link conditions. Then, FFNN is used to determine whether upper layer connectivity will be maintained or not based on these predicted link conditions.

Fig 10: Architecture for Intelligent Generation of Link Going Down Trigger

The TDNN is trained over sampled measurements of the link parameters. From these sampled measurements, a window of past values of the link parameters is input to the TDNN, and future values of these parameters are predicted. The predicted values are discrete in time, as the samples of link parameters are 200 msec apart. The output of the TDNN, y(t) at time t, depends on the input values x at times (t-1), (t-2), ..., (t-n). Here, n is the window size of link parameters.

y(t) = f( x(t-1), x(t-2), ..., x(t-n) )    (3.1A)

As depicted in Figure 11, for predicting the next value y(t+1) of the link parameters, the most recently predicted output y(t) is taken as the current input, along with (n-1) delayed samples.

y(t+1) = f( y(t), x(t-1), x(t-2), ..., x(t-n+1) )    (3.1B)

y(t+2) = f( y(t+1), y(t), x(t-1), x(t-2), ..., x(t-n+2) )    (3.1C)

......

y(t+m) = f( y(t+m-1), y(t+m-2), ..., y(t), x(t-1), x(t-2), ..., x(t-n+m) )    (3.2)

Here, m is the number of predicted samples of future link parameters. These predicted samples are then input to the FFNN for deciding about the link connectivity status.
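The recursive multi-step prediction of Equations 3.1A to 3.2 can be sketched as follows. Here `f` is a placeholder for the trained TDNN (any one-step predictor over an n-sample window); this is an illustrative sketch, not the thesis implementation.

```python
from collections import deque

def predict_future(f, history, n, m):
    """Recursively predict m future samples from a one-step predictor f,
    using a sliding window of the n most recent values. Each prediction
    re-enters the window, exactly as in Eqs. 3.1B-3.2."""
    window = deque(history[-n:], maxlen=n)  # most recent n samples
    predictions = []
    for _ in range(m):
        y = f(list(window))      # y(t+k) = f(window contents)
        predictions.append(y)
        window.append(y)         # predicted value becomes an input
    return predictions

# Toy one-step predictor: mean of the window (stand-in for the TDNN)
f = lambda w: sum(w) / len(w)
print(predict_future(f, [1.0, 1.0, 1.0, 1.0], n=4, m=3))  # → [1.0, 1.0, 1.0]
```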

Fig 11: TDNN Module for Estimating Future Link Conditions

Link parameter samples y(t) are provided to the FFNN to learn the connectivity status C(t) under those link conditions. In this way, the FFNN learns under which link conditions the upper layers experience connectivity and under which conditions they lose it. A link parameter sample y(t) at time t is itself a combination of four parameters:

y(t) = { v1(t), v2(t), v3(t), v4(t) }    (3.3)

Here:
v1 = Signal Level
v2 = Noise Level
v3 = Link Quality
v4 = Bit Rate

In the proposed solution, the upper layer connectivity status C(t) at time t is taken as a function of these link parameters:

C(t) = f( y(t) ) = f( { v1(t), v2(t), v3(t), v4(t) } )    (3.4)

For experimentation, Internet Control Message Protocol (ICMP) traffic is taken as the upper layer traffic. During the experiments, successful reception of ICMP packets is considered as connectivity, and dropping of ICMP packets is considered as loss of connectivity. After learning this link behaviour, the link parameters predicted by the TDNN are given as input to the FFNN. The FFNN classifies whether connectivity C(t) will be retained or not under the predicted link conditions; this is termed the predicted connectivity status Cp(t). The average of these predicted connectivity statuses Cp(t) is used to decide whether the LGD event should be triggered or not.

C_Avg(t) = (1/m) Σ_{i=1}^{m} Cp(t_i)    (3.5A)

C_Avg(t) ≤ 0.5 : Connectivity will be Lost
C_Avg(t) > 0.5 : Connectivity will be Maintained    (3.5B)

If (C_Avg(t) ≤ 0.5), Then "Trigger_LGD_Event", Else "Ignore"    (3.5C)
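The thresholding rule of Equations 3.5A to 3.5C can be expressed directly in code. This sketch takes the FFNN's m predicted connectivity outputs as a list of 0/1 values; the function name and return strings are illustrative.

```python
def lgd_decision(predicted_status, threshold=0.5):
    """Average the predicted connectivity statuses Cp(t_i) over the m
    predicted samples (Eq. 3.5A) and trigger the LGD event when the
    average falls to the threshold or below (Eqs. 3.5B/3.5C)."""
    c_avg = sum(predicted_status) / len(predicted_status)
    return "Trigger_LGD_Event" if c_avg <= threshold else "Ignore"

print(lgd_decision([1, 0, 0, 1, 0]))  # average 0.4 → "Trigger_LGD_Event"
print(lgd_decision([1, 1, 0, 1, 1]))  # average 0.8 → "Ignore"
```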

3.2.2.3 Host Agent

The Host Agent is a cross-layer decision module. In 802.21 MIH terminology, the Host Agent acts as an MIH User. The Host Agent makes handover and/or bandwidth aggregation decisions based on link layer triggers from MIH and user preferences from the User Agent, and passes the handover/bandwidth aggregation commands to the Data Handler. Depending upon the user preferences, the Host Agent can also send dynamic DNS updates for location management of mobile servers [91].

Fig 12: Handover and Bandwidth Aggregation Decision Algorithm


Fig 13: Flow Chart of Algorithm for Handover & Bandwidth Aggregation Decisions


As shown in Figures 12 and 13, with each received 802.21 MIH trigger, the Host Agent algorithm is executed to decide about the initiation of the handover and bandwidth aggregation processes. When a trigger is received from the 802.21 MIH module, it is first checked whether it is a Link Up trigger or a Link Down/Link Going Down trigger. For each trigger, it is checked whether the user has enabled sending location updates. On receiving a Link Up trigger, if location updates are enabled, a DNS Add Record message is sent to the corresponding name server. It is then checked whether the link that has recently come up is defined in the user preferences as Handover_Enabled. If so, all the traffic on the lower priority link is handed over to the recently up link. Also, if the user preferences define that this link can be used for bandwidth aggregation, the recently up link is added to the list of links used for bandwidth aggregation.

Similarly, on receiving a Link Down or Link Going Down (LGD) trigger, a DNS Delete Record message is sent to the corresponding name server for location updates. The proposed architecture differentiates user traffic flows as BA (Bandwidth Aggregation) or Non-BA flows. When a Link Down or Link Going Down (LGD) trigger is received for a link, it is checked what type of traffic flows are being transmitted over this link. For BA traffic, the link is simply removed from the BA list. For Non-BA traffic, it is checked whether any lower priority link is available; if so, the Non-BA traffic flows are handed over from the failing link to the lower priority link. With the implementation of the intelligent prediction mechanisms, the prediction accuracy is very high; therefore, the same processing is performed for both Link Down (LD) and Link Going Down (LGD) triggers.
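The trigger-handling logic of Figures 12 and 13 can be sketched as a single decision routine. All names here (the `prefs` keys, action strings, flow labels) are illustrative assumptions, not the thesis API.

```python
def handle_trigger(trigger, link, prefs, ba_links, flows):
    """Sketch of the Host Agent decision algorithm. `prefs` models the
    User Agent preferences; `ba_links` is the set of links currently
    used for bandwidth aggregation; `flows` maps link -> flow types."""
    actions = []
    if trigger == "LINK_UP":
        if prefs.get("location_updates"):
            actions.append("DNS_ADD_RECORD")
        if link in prefs.get("handover_enabled", set()):
            actions.append("HANDOVER_TO_" + link)    # move lower-priority traffic
        if link in prefs.get("ba_enabled", set()):
            ba_links.add(link)                       # also use link for aggregation
    else:  # LINK_DOWN / LINK_GOING_DOWN: same processing (Sec. 3.2.2.3)
        if prefs.get("location_updates"):
            actions.append("DNS_DELETE_RECORD")
        if "BA" in flows.get(link, set()):
            ba_links.discard(link)                   # BA flows: just drop the link
        if "Non-BA" in flows.get(link, set()):
            actions.append("HANDOVER_NON_BA_FLOWS")  # to a lower-priority link, if any
    return actions

prefs = {"location_updates": True, "handover_enabled": {"wlan0"}, "ba_enabled": {"wlan0"}}
ba = set()
print(handle_trigger("LINK_UP", "wlan0", prefs, ba, {}))
# → ['DNS_ADD_RECORD', 'HANDOVER_TO_wlan0']; ba now contains 'wlan0'
```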

3.3 Handover Management with Proposed Architecture

In the proposed architecture, the handover management process is initiated on the occurrence of any of the following events:

i) The currently used higher-preference network interface goes down and an alternate lower-preference network interface is available

ii) A network interface with higher preference than the currently active network interface becomes available

iii) From the User Agent, the user manually issues a command to switch the network interface

After any of these events, the Host Agent checks the established associations on the network interface from which the handover is to be initiated and issues a handover command to the Data Handler for each of them. For each association, the Data Handler exchanges handover messages with the peer Data Handler. The Data Handler sends the encrypted AID and nonce in the handover request message to the peer node. With the pre-shared key, the Data Handler at the peer node decrypts and validates the AID and nonce. If these are not validated, the peer Data Handler discards the handover request message and responds with a handover reject message. If the AID and nonce are validated, the Data Handler at the peer node encrypts the AID with a new nonce and responds with a handover confirm message. On receiving the handover confirm message, the handover initiating node decrypts and validates the AID and new nonce. If these are not validated, or if the Data Handler did not send a handover request message, the Data Handler discards the handover confirm message. If the AID and new nonce are validated, the Data Handler enters the handover state. The message exchange during the handover process is shown in Figure 14.

Fig 14: Message Exchange during Simple Handover
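The request/confirm validation described above can be sketched as follows. This is a toy illustration: `enc` stands in for Enc() using a key-derived XOR keystream (a real deployment would use an authenticated cipher with the pre-shared key), and the message is a simple tuple rather than a wire format.

```python
import hashlib, os

def enc(key, data):
    # Toy symmetric transform standing in for Enc(): XOR with a
    # key-derived keystream. XOR-ing twice recovers the plaintext.
    stream = hashlib.sha256(key).digest()
    ks = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

def make_ho_request(key, aid, nonce, expected_seq):
    # Handover Request = [AID, Enc(AID + Nonce), Expected Sequence Number]
    return (aid, enc(key, aid + nonce), expected_seq)

def validate_ho_request(key, msg, known_aids):
    aid, blob, _seq = msg
    plain = enc(key, blob)               # XOR transform is its own inverse
    return aid in known_aids and plain.startswith(aid)

key = os.urandom(16)                     # pre-shared key
aid, nonce = b"ASSOC-01", os.urandom(8)
req = make_ho_request(key, aid, nonce, expected_seq=4242)
print(validate_ho_request(key, req, {b"ASSOC-01"}))  # → True
```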


Fig 15: Different Handover Scenarios

The contents of the handover request and response messages are given below:

Handover Request = [AID, Enc(AID + Nonce), Expected Sequence Number]
Handover Confirm = [AID, Enc(AID + Nonce), Expected Sequence Number]
Handover Reject = [AID, Enc(AID + Nonce)]

Here,
AID = Association Identifier
Enc() = Encrypted value

Besides this simple handover scenario, some complex simultaneous handover scenarios are also possible, as shown in Figure 15. Let there be two multihomed nodes A and B, each having two network interfaces (a1, a2) and (b1, b2) respectively. Scenario (i) is the simple handover shown in Figure 14. Scenario (ii) shows the situation in which node B is ready to send the HO Request message; however, before sending it to node A, node B itself receives an HO Request message from node A. In this case, node B sends its HO Request with the updated IP address of node A's new interface. The message exchange in this scenario is depicted in Figure 16. In scenario (iii), nodes A and B send HO Request messages simultaneously to each other from their respective new interfaces. In this scenario, the old interfaces at both nodes remain reachable, and both nodes receive the respective HO Request messages on their old interfaces. Both nodes then resend HO Request messages to each other with the new IP addresses of the peer nodes learned from the received HO Request messages. The message exchange for this scenario is shown in Figure 17.

Fig 16: Message Exchange during Simultaneous Handover of Scenario-2

Fig 17: Message Exchange during Simultaneous Handover of Scenario-3

Scenario (iv) is a hard handover scenario in which both nodes A and B send HO Request messages simultaneously to each other from their respective new interfaces while their old interfaces are no longer reachable. Neither HO Request message reaches the respective peer node. After a timeout, both nodes query the name server to obtain each other's updated IP addresses. After receiving the response from the name server, both nodes send HO Request messages to each other with the new IP addresses. The message exchange for this scenario is shown in Figure 18.

Fig 18: Message Exchange during Simultaneous Handover of Scenario-4

3.4 Bandwidth Aggregation with Proposed Architecture

When a new network interface comes up, the Host Agent checks whether to use this network interface for bandwidth aggregation. If the newly available network interface is included in the bandwidth aggregation user preferences, the Host Agent informs the Data Handler to initiate the bandwidth aggregation process. The Data Handler at the initiating node establishes a new connection with the peer Data Handler and then sends the add_to_BA request message, which contains the encrypted AID and nonce. The peer Data Handler authenticates the AID and sends back the add_to_BA confirm message. Upon receiving this confirm message, the Data Handler enters the bandwidth aggregation state. The message exchange for adding a new connection for bandwidth aggregation to an already established association is shown in Figure 19.

In the bandwidth aggregation state, the Data Handler has multiple TCP connections under a single association. In this state, the sending Data Handler transmits data over all the connections in an association. While transmitting data, the Data Handler also computes a performance measure for each connection. The amount of data sent on each connection depends upon the computed performance measure of that connection.

Fig 19: Message Exchange for Adding Interface in Bandwidth Aggregation
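The proportional data scheduling described above can be sketched as follows. The thesis does not fix a particular performance measure at this point, so the per-connection weights here (e.g. recent throughput) are assumptions.

```python
def schedule_bytes(total_bytes, perf):
    """Split the data to send across the connections of an association
    in proportion to each connection's performance measure `perf`
    (a dict: connection id -> positive weight). Sketch only."""
    total_perf = sum(perf.values())
    shares = {cid: int(total_bytes * p / total_perf) for cid, p in perf.items()}
    # give any integer-rounding remainder to the best-performing connection
    best = max(perf, key=perf.get)
    shares[best] += total_bytes - sum(shares.values())
    return shares

print(schedule_bytes(1000, {"wifi": 3.0, "wimax": 1.0}))
# → {'wifi': 750, 'wimax': 250}
```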

In the bandwidth aggregation state, it is not necessary to perform a handover for every link up or link down event. The decision to initiate a handover depends on whether bandwidth aggregation is being performed over multiple network interfaces or only over a single interface. If bandwidth aggregation is being performed over multiple interfaces and one of them goes down, a handover is not needed: only a remove_from_BA request message is sent to the peer node, and the failed interface is removed from bandwidth aggregation. The data already scheduled on the connection over that interface is rescheduled on the connections of the remaining active interfaces. Similarly, if a new network interface comes up, a handover is again not initiated; only an add_to_BA request message is sent to the peer node and the newly active interface is included in bandwidth aggregation.

However, if a node has only one network interface while the peer node has multiple, the peer node can initiate the bandwidth aggregation process. In this scenario, the node with a single network interface has multiple connections for bandwidth aggregation over that single interface. If this single interface goes down and an alternate interface becomes available, the handover process is initiated. In this state, the bandwidth aggregation connections are not handed over directly to new connections. Rather, the bandwidth aggregation state is first transformed to the normal state, i.e. all connections are closed except the one with the highest performance measure. This single connection is then handed over to a connection on the newly available interface. On completion of this handover, the peer node with multiple active interfaces can initiate bandwidth aggregation again. In summary, a handover during the bandwidth aggregation state undergoes the following state changes:

Bandwidth Aggregation −> Normal −> Handover −> Normal −> Bandwidth Aggregation

The state transition diagram for this type of scenario is shown in Figure 20.

Fig 20: State Transition Diagram for Handover and Bandwidth Aggregation States
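The state cycle of Figure 20 can be modelled as a small transition table. The event names below are invented for illustration; only the state sequence (Bandwidth Aggregation → Normal → Handover → Normal → Bandwidth Aggregation) comes from the text.

```python
# Allowed state transitions for a handover during bandwidth aggregation.
# A single-interface node first collapses the association to Normal,
# performs the handover, then re-enters aggregation.
TRANSITIONS = {
    ("BA", "collapse_to_best_connection"): "Normal",
    ("Normal", "ho_request"): "Handover",
    ("Handover", "ho_confirm"): "Normal",
    ("Normal", "add_to_BA_confirm"): "BA",
}

def step(state, event):
    # unknown (state, event) pairs leave the state unchanged
    return TRANSITIONS.get((state, event), state)

s = "BA"
for e in ["collapse_to_best_connection", "ho_request", "ho_confirm", "add_to_BA_confirm"]:
    s = step(s, e)
print(s)  # → "BA" (back in aggregation after the full cycle)
```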

The contents of the messages exchanged for bandwidth aggregation are given below:

Add to BA Request = [AID, Enc(AID + Nonce)]
Add to BA Confirm = [AID, Enc(AID + Nonce)]
Remove from BA Request = [AID, Enc(AID + Nonce), Expected Sequence Number, List of CIDs]
Remove from BA Confirm = [AID, Enc(AID + Nonce), List of CIDs]

Here,
AID = Association Identifier
CID = Connection Identifier
Enc() = Encrypted value

3.5 Proposed Solution for Location Management in NAT Environment

In order to solve the NAT autoconfiguration issues, the proposed architecture uses Dynamic Host Configuration Protocol (DHCP) options. In IP networks, DHCP provides a framework for passing configuration information to hosts [126]. These configuration parameters and other control information are carried in the Options field of DHCP messages. New DHCP options and message types can be defined to convey new configuration parameters [127]. The DHCP option space is split into two parts. Public option codes (0-223, 255) are defined as standard options, and new options must be reviewed prior to the assignment of an option number by IANA. Site-specific option codes (224-254) are designated for Private Use and require no review by the IETF's DHC working group.

[Option fields: Code, Length, Version, Internal Port, Protocol, External Port, Internal IP, External IP]

Fig 21: Format of Proposed DHCP Option for NAT Auto Configuration

As shown in Figure 21, the proposed DHCP option contains the information required to make a port-forwarding entry in the NAT table: the internal IP address, external IP address, internal port number, external port number and the transport protocol of these ports. Among these fields, Code is the unique DHCP code number that indicates which type of option is included in the DHCP packets. For the experimental implementation, the proposed architecture used the code value 230, which lies in the site-specific range.


The Length value specifies the byte count of the option fields after the code and length fields. The Version field indicates the version number of this option format. Protocol indicates the transport protocol to which the internal and external ports belong. Internal IP is the IP address of the mobile server assigned by the DHCP server; this value is 0 in the discover and offer messages. External IP is the IP address of the external interface of the NAT device; this field is 0 in the discover, offer and request messages. Only the ack message contains a valid value in the external IP address field.
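The option layout described above can be sketched as a byte-packing routine. The field widths are assumptions (16-bit ports, 32-bit IPv4 addresses, one byte each for version and protocol); the thesis fixes only the field order and the site-specific code value 230.

```python
import socket, struct

def pack_nat_option(internal_port, external_port, protocol,
                    internal_ip, external_ip, code=230, version=1):
    """Pack the proposed NAT-Port-Mapping DHCP option: Code and Length
    header, then Version, Protocol, Internal Port, External Port,
    Internal IP and External IP. Field widths are illustrative."""
    body = struct.pack("!BBHH4s4s", version, protocol,
                       internal_port, external_port,
                       socket.inet_aton(internal_ip),
                       socket.inet_aton(external_ip))
    # Length counts the option bytes after the code and length fields
    return struct.pack("!BB", code, len(body)) + body

# Request mapping for TCP (protocol 6); external IP is 0 before the ack
opt = pack_nat_option(8080, 80, 6, "192.168.1.10", "0.0.0.0")
print(opt.hex())
```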

Fig 22: Interaction between DHCP Client, DHCP Server and NAT Box

When a mobile server enters a new network, its DHCP client requests IP address assignment from the DHCP server. As shown in Figure 22, the DHCP client includes the proposed NAT-Port-Mapping-Request option in the DHCP Discover message. In the DHCP Offer message, the DHCP server indicates whether it can handle the Port-Mapping-Request. If multiple DHCP servers are available in the network, the mobile server's DHCP client selects the appropriate DHCP server on the basis of its capability to handle the required port-mapping-request option. After receiving the DHCP Request message, the DHCP server allocates an internal IP address to the mobile server and sends a NAT-Port-Mapping-Request to the NAT device. The NAT device, upon receiving this mapping request, creates an entry in its NAT table and responds to the DHCP server with a NAT-Port-Forwarding-Confirm message containing the port mapping entry. The DHCP server then sends the ACK message with the option field containing the NAT port mapping entry.

Fig 23: Interaction Scenario for Port Mapping and DNS Location Updates

Besides its usual DHCP function, this ACK message serves two additional purposes: i) it confirms the port mapping entry requested by the mobile server, and ii) the mobile server learns the external IP address of the NAT device. The mobile server can use this external IP address to publish its current location to the location management entity in the form of secure DDNS updates. Remote clients can discover the current location of the mobile server using a DNS query and can send connection requests to the external IP address of the mobile server. The NAT device, after address translation, forwards the connection request to the internal IP address of the mobile server. The complete scenario of NAT port-forwarding entry creation and dynamic DNS location update is depicted in Figure 23.


3.6 Chapter Summary

Chapter 3 began with the design principles of the proposed architecture, followed by its components and their functionality. The components of the proposed architecture are grouped into two categories: the session layer components, which include the Association Handler and Data Handler modules, and the cross-layer components, which include the User Agent, MIH and Host Agent modules. The MIH module also includes the proposed intelligent prediction module for generating the MIH LGD trigger. The chapter then discussed how the proposed architecture provides the vertical handover and bandwidth aggregation services, and how it uses dynamic DNS updates for location management. The issues in sending location updates from behind NAT devices were then discussed, and towards the end of the chapter, a solution for handling location management from behind a NAT was presented.


Chapter 4

4. Experimentation of the Proposed Architecture

Chapter 3 presented the proposed end-to-end system architecture and discussed the functionality of its different components. This chapter presents the implementation design of the proposed architecture and analyzes its performance during handover and bandwidth aggregation. The overhead of the services provided by the proposed architecture is also discussed, followed by the accuracy and efficiency of the proposed prediction model for facilitating timely handover decision making. Towards the end of the chapter, the security and efficiency of the proposed NAT auto-configuration mechanism, which facilitates location updates from behind NAT devices, are briefly discussed.

4.1 Implementation Design

In order to analyze the performance of the proposed architecture, it was implemented and tested on Linux Fedora Core 9 and Windows XP platforms. Laptops with multiple network interfaces were used as multihomed devices. As depicted in Figure 24, a Bandwidth Aggregation and Handover aware (BAHO) library was developed that contains the overloaded socket calls [128]. To avail the bandwidth aggregation and vertical handover services, applications running on multihomed mobile devices can use these overloaded socket calls. To support legacy applications, a System Call Translator (SCT) module was also developed; its purpose is to intercept the socket calls from legacy applications and translate them into the multihoming-aware BAHO library calls.

Page 67

| Experimentation of the Proposed Architecture

In order to provide abstraction from the underlying operating systems, an operating-system-independent glue layer was also developed. This glue layer receives operating-system-independent system calls from the application layer and translates them into operating-system-dependent system calls. Due to this glue layer, it was possible to execute the same source code on multiple operating system platforms.

Fig 24: Components of the Implementation Design of the Proposed Architecture

For providing triggers and information about the locally attached network interfaces, a subset of the IEEE 802.21 MIH functionality was also implemented. Since the proposed architecture requires only the local services of IEEE 802.21 MIH, for simplicity only these local services were implemented; its network services were not. Performance results of the implementation of the proposed architecture are presented in the following section.

4.2 Experimental Evaluation of Proposed Architecture

The proposed architecture was tested in LAN as well as WAN environments. The experimental setup for the WAN environment is shown in Figure 25. During these experiments, multihomed laptops equipped with Wi-Fi (as LAN) and WiMAX (as WAN) network interfaces were used. For Wi-Fi, an Atheros AR242x 802.11abg WLAN interface was used, and for WiMAX, a Motorola WiMAX USBw35100 interface was used. In order to assess the performance of the proposed architecture, the following parameters were evaluated:

• Throughput during handover
• Latency of handover
• Throughput gain using bandwidth aggregation
• Scalability of number of connections in the proposed architecture
• Overhead of the proposed architecture

Fig 25: Test Topology used for Performance Analysis of Proposed Architecture

4.2.1 Throughput and Latency during Handovers

When a mobile node is in the overlapping coverage area of two access networks and needs to hand over from one access network to the other, the mobile node keeps the previous connection and also establishes a new connection over the second access network. During the handover process, the Data Handler simultaneously exchanges data over the old as well as the new connection. As soon as the new connection becomes stable, the old connection may be closed. Due to this simultaneous transmission of the same data over two connections, there is no throughput degradation during the handover process, as shown in Figure 26; thus, applications experience a seamless handover.

Figure 27 shows the throughput of a multihomed mobile device during its vertical handover from the WiMAX interface to the Wi-Fi interface. Similarly, Figure 28 shows the throughput during vertical handover from the Wi-Fi interface to the WiMAX interface. In both cases, the throughput experienced by the application changes; the change is due to the difference in the data rates of the underlying access networks. The throughput experienced by the application during the handover process is at most the maximum of the throughputs achieved on either of the connections [118]. This is formalized in the following equation.

TP_HO ≤ Max(TP_c1, TP_c2)    (4.1)

Here,
TP_c1 = Throughput achieved over connection 1
TP_c2 = Throughput achieved over connection 2

Fig 26: Throughput during Handover from one WLAN Network to other WLAN Network

In the overlapping coverage area of multiple access networks, throughput degradation time and handover latency are minimal due to simultaneous transmission. However, when overlapping coverage is not available, or when a link goes down abruptly so that the MIH Link Going Down trigger is not generated, handover latency is larger. In these situations the delay is larger because, after getting connectivity in the new access network, the MN first establishes a new TCP connection with a three-way handshake and then exchanges two handover control messages, i.e. the handover request and handover confirm messages, with the peer communicating node. Moreover, when the node sending the data initiates the handover process, the data sent by the sender takes at least half an RTT to reach the receiver.

HO_Latency = 0                                  ; For Overlapping Regions
HO_Latency = D_cout + D_3whs + D_sig + D_1wd    ; For Non-Overlapping Regions    (4.2)

Here,
D_cout = Connectivity outage delay while moving in a non-overlapping region
D_3whs = TCP 3-way handshake delay
D_sig = Handover control message signalling delay
D_1wd = Delay for 1-way data exchange after handover

D_1wd is experienced because the node initiating the handover cannot send data until it receives the handover confirm message from the peer node. However, when the peer node is the one sending data, it can do so immediately after sending the handover confirm message; in this case, D_1wd is negligible. The above-mentioned handover delay in the non-overlapping region is the delay from the time when MIH issues a trigger to the time when the handover is completed. It does not include the link layer association delay and the IP address acquisition delay.
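The latency decomposition of Equation 4.2 can be evaluated numerically. The delay values below are purely illustrative, not measured results from the thesis experiments.

```python
def ho_latency(overlapping, d_cout=0.0, d_3whs=0.0, d_sig=0.0, d_1wd=0.0):
    """Handover latency per Eq. 4.2: zero in overlapping coverage,
    otherwise the sum of the connectivity outage, TCP handshake,
    signalling and one-way data delays (all in seconds)."""
    return 0.0 if overlapping else d_cout + d_3whs + d_sig + d_1wd

# Illustrative values: 200 ms outage, 1.5 RTT for the TCP handshake,
# 1 RTT for the two handover control messages, half an RTT for D_1wd.
rtt = 0.080
print(ho_latency(False, d_cout=0.200, d_3whs=1.5 * rtt, d_sig=rtt, d_1wd=rtt / 2))
```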

Fig 27: Throughput during Handover from WiMAX Network to Wi-Fi Network


Fig 28: Throughput during Handover from Wi-Fi Network to WiMAX Network

4.2.2 Throughput Gain during Bandwidth Aggregation

With the availability of multiple network interfaces, the proposed architecture can establish a connection over each available interface in order to benefit from bandwidth aggregation. During bandwidth aggregation, the total throughput experienced by the application (TPBA) is less than or equal to the sum of the throughputs achieved on the individual connections over each interface.

TPBA ≤ Σ(i=1..n) TPi    (4.3)

Here,
n   = number of connections in bandwidth aggregation
TPi = throughput achieved on the ith connection

Figure 29 shows the throughput gain during bandwidth aggregation over the Wi-Fi and WiMAX interfaces. The network topology used for this test is the same as shown in Figure 25. During these experiments, it was observed that the throughput gain from multiple network interfaces depends upon a number of factors. One such factor is whether the multiple network interfaces belong to the same or to different network technologies. If the available network interfaces belong to different network technologies, then a throughput gain due to bandwidth aggregation is highly probable. However, if the available network interfaces are of the same network technology, then the throughput gain depends on whether the network is contention based or contention free. As simultaneous transmission over a contention-based channel is not possible, the throughput gain with multiple interfaces of the same network may not be obvious.

Fig 29: Throughput during Bandwidth Aggregation over Wi-Fi and WiMAX Networks

In a shared-media network, the throughput gain through multiple network interfaces depends on the node density. With an increase in the number of contending nodes in a shared network, the bandwidth share achieved on an individual interface is reduced. Although the throughput due to bandwidth aggregation is still increased, as shown in Figure 30, the gain is not so evident because of the smaller share. The aggregated bandwidth achieved by a multihomed device with multiple network interfaces can be expressed by Equation 4.4.

BAggregated ≤ Σ(i=1..c) (mi·Bi / n) + Σ(j=1..f) mj·Bj    (4.4)

Here,
c  = number of distinct access networks with contention-based MAC
mi = number of network interfaces of the ith contention-based access network
n  = total number of interfaces contending in an access network
Bi = total bandwidth of the ith contention-based access network
f  = number of distinct access networks with contention-free MAC
mj = number of network interfaces of the jth contention-free access network
Bj = bandwidth received in the jth contention-free access network

Fig 30: Throughput during Bandwidth Aggregation with Increasing Node Density

For example, consider a WLAN BSS in which a node with one WLAN interface is attached to an Access Point (AP). If it is the only node in the BSS, it gets the entire bandwidth 'B' of the WLAN. If another WLAN interface is added to the same node and attached to the same AP, then by Equation 4.4, m = 2 for the two WLAN interfaces, and since these are the only two interfaces contending in the BSS, n = 2. The aggregated bandwidth experienced by the node therefore remains 2B/2 = B; thus there is no bandwidth aggregation gain in this scenario. However, if the node density in the BSS is increased to 9 nodes, with every node having only one WLAN interface, then for each WLAN node m = 1 and n = 9, and each node on average experiences a bandwidth share of B/9. If one additional WLAN interface is added to one of the WLAN nodes, then for this multihomed node m = 2 and the total number of contending interfaces is n = 10. In this scenario, the bandwidth share of the multihomed node is 2B/10 = B/5, which is greater than B/9. In this way, the bandwidth experienced by the multihomed node is greater than that of a single-homed node. This bandwidth gain for two WLAN interfaces is also shown in Figure 31.
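This worked example, together with the bound of Equation 4.4, can be sketched numerically in Python. The function shape and the 54 Mbps WLAN bandwidth figure below are illustrative assumptions, not part of the thesis implementation.

```python
def aggregated_bandwidth(contended, contention_free):
    """Aggregated bandwidth bound per Equation 4.4.

    contended: list of (m_i, B_i, n) tuples, one per contention-based
        access network: m_i interfaces of this node in that network,
        B_i the network's total bandwidth, n the total number of
        interfaces contending in that network.
    contention_free: list of (m_j, B_j) tuples, one per contention-free
        access network: m_j interfaces, B_j the bandwidth received per
        interface in that network.
    """
    shared = sum(m * b / n for (m, b, n) in contended)
    dedicated = sum(m * b for (m, b) in contention_free)
    return shared + dedicated

B = 54.0  # nominal WLAN bandwidth in Mbps (illustrative figure)

# Single node with two interfaces in an otherwise empty BSS:
# m = 2, n = 2, so the bound stays at B -- no aggregation gain.
assert aggregated_bandwidth([(2, B, 2)], []) == B

# Nine single-interface nodes plus one extra interface on one node:
# the multihomed node gets 2B/10 = B/5, better than the B/9 share
# of a single-homed node.
multihomed = aggregated_bandwidth([(2, B, 10)], [])
singlehomed = aggregated_bandwidth([(1, B, 9)], [])
assert multihomed > singlehomed
print(f"multihomed: {multihomed:.1f} Mbps, single-homed: {singlehomed:.1f} Mbps")
```

The division by n appears only in the contention-based sum, reflecting that a shared channel is split among all contending interfaces, while contention-free networks grant each interface its own allocation.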

Fig 31: Throughput during Bandwidth Aggregation over two Wi-Fi Interfaces

During these experiments, bulk file transfer applications were used that attempt to utilize the available bandwidth aggressively. These test applications try to transmit data at the maximum available data rate. The proposed architecture receives this data from the applications and provides it to the underlying TCP without much delay. However, different application rates and application profiles have not yet been tested with the proposed architecture.

4.2.3 Scalability of Proposed Architecture

This section discusses the scalability of the number of connections established by the proposed architecture, in both the handover and bandwidth aggregation scenarios. Consider a multihomed node with 'n' network interfaces communicating with another multihomed node with 'm' network interfaces. Theoretically, there can be 'n×m' connections for a single application flow. However, in Normal State there is only one connection under an association. In Handover State the number of connections is usually two, and in the rare scenario of a simultaneous handover it is three. Establishing two or three connections for the short duration of a handover does not really pose a scalability issue. During Bandwidth Aggregation State, not all the network interfaces are used for bandwidth aggregation; only those interfaces are used for which the user has given his preferences. A connection over each such network interface is established in accordance with the user preferences. Such interfaces are usually limited in number, e.g. 2 or 3, so scalability does not seem to be a big issue. For resource-limited nodes, the user can set preferences so that no additional network interfaces are included in bandwidth aggregation. In order to limit the number of possible connections from 'n×m' to 'max(n,m)', the capabilities of both communicating nodes can be exchanged. This exchange can include parameters such as the number of available network interfaces and the user preferences at both ends. Such an optimization is not included in the current implementation of the proposed architecture; however, it can be implemented as future work.

4.2.4 Overhead of Proposed Architecture

Some overhead is involved in providing the vertical handover and bandwidth aggregation services. This section evaluates the impact of this overhead on the performance of applications using the proposed architecture. There are two types of overhead: i) computation overhead and ii) transmission overhead. The delay caused by these overheads may increase the end-to-end delay experienced by the applications. Computational delay is caused by the scheduling process in the Data Handler module at the sender side (Dsch); some computational delay is also caused by the in-order delivery process in the Data Handler module at the receiver side (Diod). Transmission overhead, in the form of additional headers, increases the transmission delay and hence the end-to-end delay of the TCP flow (De2eTCP). The end-to-end delay experienced by the applications (De2eAp) can be expressed by Equation 4.5.

De2eAp = Dsch + De2eTCP + Diod    (4.5)

Here,
Dsch    = sender-side scheduling delay of the Data Handler
De2eTCP = end-to-end delay of TCP
Diod    = receiver-side in-order delivery delay of the Data Handler
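The receiver-side in-order delivery step that produces Diod can be illustrated with a minimal reassembly buffer. The class and sequence-numbering scheme below are a hypothetical sketch, not the actual Data Handler implementation.

```python
class InOrderBuffer:
    """Minimal in-order delivery buffer, illustrating the source of the
    receiver-side delay Diod in Equation 4.5: segments arriving out of
    order over multiple TCP connections are held back until the missing
    segments arrive to fill the gap."""

    def __init__(self):
        self.next_seq = 0   # next sequence number to deliver upward
        self.pending = {}   # out-of-order segments, keyed by sequence number

    def receive(self, seq, data):
        """Accept one segment; return the segments now deliverable in order."""
        self.pending[seq] = data
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = InOrderBuffer()
# Segment 1 arrives first (e.g. over the faster connection) and must wait;
# the time it spends waiting is the Diod cost.
assert buf.receive(1, "b") == []
# Segment 0 arrives and releases both segments to the application.
assert buf.receive(0, "a") == ["a", "b"]
```

Dsch can be driven toward zero by keeping the TCP send buffers full, but this hold-and-release behaviour at the receiver is inherent to striping one flow over multiple connections.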

If the Data Handler schedules the application data at a rate higher than the transmission rate of the underlying network interfaces, then the delay caused by scheduling is Dsch ≈ 0. This can be achieved by always keeping the TCP send buffer filled with data, so that TCP always has data to transmit. However, the processing delay for in-order delivery, Diod, over multiple TCP connections is hard to avoid. Moreover, as 'Diod
