An End-to-End Multi-Path Smooth Handoff Scheme for Stream Media

Yi Pan ([email protected])
School of Information and Computer Science, University of California, Irvine, CA 92697-3425, USA, +1 949-824-4105

Meejeong Lee ([email protected])
Dept. of Computer Science and Engineering, Ewha Womans University, 11-1 Daehyun-Dong, Seoul, Korea (120-750), +822-3277-2388

Jaime Bae Kim ([email protected])
ABSTRACT
In the near future, a wide variety of wireless networks will be merged into the Internet, allowing users to continue their applications with a higher degree of mobility. In such an environment, multimedia applications, which require smooth-rate transmission, will become more popular. Two main factors make smooth transmission of stream media difficult when a user roams across wireless mobile networks: 1) packets may be lost due to the re-routing caused by handoffs; and 2) because the amount of available bandwidth differs among wireless cells, handoffs may cause congestion. We propose an end-to-end multi-path transmission scheme that provides a comprehensive solution for smooth handoffs of stream media. In the proposed scheme, multiple paths to a single mobile node are acquired during the handoff period. A multi-layer encoding technique is applied to make the stream media adaptive to a heterogeneous network environment with different bandwidths. Protection of the more important video layers through duplicated transmission over multiple paths is carefully designed for smooth handoff. The performance of the proposed multi-path handoff scheme is evaluated and compared with existing schemes through extensive simulations. The simulation results show that the proposed scheme improves the throughput and quality of stream media applications during handoffs. The cost of the proposed scheme is also carefully evaluated in terms of transmission efficiency.
Categories and Subject Descriptors
C.2.1 [Computer-Communication Networks]: Network Architecture and Design – wireless communication.
General Terms Algorithms, Performance, Design.
Tatsuya Suda ([email protected])
School of Information and Computer Science, University of California, Irvine, CA 92697-3425, USA

Jaime Bae Kim: Computer Science Department, California State University, Northridge, CA 91330, USA, +1 818-677-3892
Keywords Handoff, multi-layer video encoder, slow start, congestion avoidance.
1. INTRODUCTION
In the near future, wireless networks will become one of the major access networks of the Internet, allowing users to continue their applications while they are moving. In such an environment, stream media applications, such as digital audio and video streaming, visual telephony, and teleconferencing, will become very popular. For stream media applications, it is important to provide smooth handoffs that avoid any disruption in the data stream and any drastic quality degradation.

Providing such smooth handoffs is a challenging problem due to two factors: 1) user movement often breaks the data path and requires a change of point-of-attachment to the network; and 2) the disparity of available bandwidths among wireless cells may result in congestion during handoffs. A change in point-of-attachment does not guarantee the same amount of bandwidth for the user; a different amount of bandwidth may be available due to a different workload in the new cell. This problem is exacerbated as different wireless access techniques, such as WLAN, Bluetooth, and W-CDMA, are deployed to provide ubiquitous connectivity. In such a heterogeneous wireless network environment, it is not uncommon for a user to face a different wireless access network with a different link speed and physical capacity when a handoff occurs. This mismatch in available bandwidths may result in congestion if the stream transmission continues without appropriate rate adjustment.

Taking these two factors into consideration, we believe that an ideal handoff scheme for stream media should be able to:
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. WMASH ’03, September 19, 2003, San Diego, California, USA Copyright 2003 ACM 1-58113-768-0/03/0009… $5.00.
♦ reduce the packet losses caused by handoffs, in order to avoid drastic quality degradation,
♦ maintain the throughput during handoffs, in order to avoid stream disruptions,
♦ carefully probe the available bandwidth in the new wireless cell, in order to avoid potential congestion,
♦ allow the stream media to gradually adapt its quality to the newly available bandwidth.

A number of mobility management techniques have been proposed in the literature, but none of them satisfies all of the above requirements. The existing techniques can be classified into two categories: network layer and transport layer mobility management techniques. The common goal of network layer techniques is to reduce packet loss during handoffs caused by the broken data path from the source to the destination; this type of packet loss is referred to as "re-routing packet loss" from now on. None of these techniques, however, considers the bandwidth disparity among wireless cells, and thus congestion may occur during the handoff. Transport layer mobility management techniques are mainly proposed for reliable data transmission. Their common goal is to avoid unnecessary timeouts and shrinkage of the congestion window during handoffs. Adaptation to the different available bandwidth in the new cell depends either on slow start, which chokes the stream media application, or on the TCP backoff mechanism, which causes high bursts of loss without considering the bandwidth differences. Combined with the high rate fluctuation of the TCP congestion control algorithm, these TCP extensions are not well suited to stream media applications.

In this paper, we propose a new multi-path handoff scheme for stream media which satisfies all the requirements of an ideal handoff scheme mentioned above. Our targeted network environment is an infrastructure-based wireless network with cell overlapping areas that allow mobile nodes to connect to multiple neighboring base stations simultaneously. The stream media application considered in this paper is mainly video streaming, although the proposed technique can also be applied to other types of streams.
In the proposed multi-path handoff scheme, multiple paths are maintained while a mobile node transits the overlapping area of two adjacent cells, keeping connections to both cells. To avoid drastic quality degradation and stream disruptions, the proposed scheme reduces packet loss during handoffs by transmitting packets on multiple paths, and maintains high throughput by exploiting all the available bandwidth on those paths.

The proposed scheme avoids congestion and provides a smooth stream rate by applying per-path rate control. A rate control module is dynamically installed on each path; it carefully probes the available bandwidth on a new path in order to avoid congestion, monitors the available bandwidth on that path, and calculates the proper transmission rate for that path to provide a smooth stream rate.

The proposed scheme allows the stream media to gradually adapt its quality to the newly available bandwidth by applying a source-adaptive multi-layer encoder to the video stream. The multi-layer encoding rates are determined from the transmission rates calculated by the rate control modules on the multiple paths; the encoding rate for each layer and the number of layers are reported to the multi-layer encoder. When delivering the multi-layer encoded stream, the scheme transmits base layer packets on multiple paths at the minimum available rate to guarantee successful reception of the base layer. Enhancement layer packets are sent only on paths with extra bandwidth, to avoid congestion on low-bandwidth paths.

The paper is organized as follows. Section 2 surveys related work and shows that the proposed scheme is new and original. Section 3 describes the proposed multi-path smooth handoff scheme in detail. In Section 4, the simulation models are described, and in Section 5, numerical results are presented to investigate the performance of the proposed scheme. Concluding remarks are given in Section 6, and possible future work is described in Section 7.
2. RELATED WORKS
In this section, existing mobility management schemes are surveyed and compared with the proposed multi-path handoff scheme. As mentioned earlier, the existing mobility management techniques can be classified into two categories: network layer and transport layer mobility management techniques.

The major objective of most of the network layer techniques [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] is to reduce the re-routing packet loss. Three techniques have been proposed for this purpose: 1) fast re-direction of packets to the new location of the mobile node; 2) multicasting packets to multiple possible locations around the mobile node; and 3) regional registration in the foreign domain. In the first technique, when a handoff occurs, the old base station caches packets and forwards them to the new base station upon request. Fast Handovers for Mobile IP [8], HAWAII [7], and other temporary tunneling techniques use this approach. In the second technique, packets are routed to multiple base stations near the mobile node to ensure their delivery. The recently proposed multicast mobility support [9] and the bi-casting used in Cellular IP [6] use this approach. In the third technique, re-routing packet loss is reduced by shortening the location update time for a mobile node moving within a domain. A foreign agent typically assigns a unique regional COA (Care of Address) to the mobile node and maps the mobile node's current location-specific address to that regional COA. Only the regional COA is exposed to nodes outside the domain, and the mobile node updates only its location-specific address with the foreign agents in the local domain. The foreign agent that keeps the regional registration for the mobile node is responsible for buffering and forwarding packets to the new location of the mobile node. In this way, the location update delay is shortened and re-routing becomes faster. Cellular IP [6], Mobile IP Regional Registration proposed for IPv4 [4], and Hierarchical Mobile IP proposed for IPv6 [5] use this technique. Note that regional registration is often used in conjunction with the other two techniques, as in multicast mobility support [9] and HAWAII [7].

These techniques can reduce the location update time and re-routing packet loss. None of them, however, addresses the bandwidth disparity problem during the handoff. Since the entire volume of traffic is simply moved from the previous cell to the new cell without regard to the available bandwidth in the new cell, congestion may occur in the new cell.

Transport layer mobility management techniques are mainly proposed for reliable data transmission [21]. The major objective of most transport layer techniques is to avoid unnecessary timeouts and shrinkage of the congestion window during handoffs. Some of these techniques cause intermittent disruptions in data transmission. For example, I-TCP [11] incurs a long handoff delay to transfer the session context and uses slow start to resume transmission in the new wireless cell; both cause disruptions in stream transmission. Mobile TCP [14] and the TCP timeout/window freeze scheme [13] temporarily cease data transmission during each handoff and resume it in the new cell, also resulting in disruptions. Transaction TCP [15, 21] starts a new session whenever a mobile node moves into a new cell, and it also incurs transmission disruption because the slow start phase of the new session forces the transmission rate to drop to the minimum. Some of the transport layer techniques, such as Snoop-TCP [12] and TCP timeout/window freeze [13], may cause congestion since the stream transmission continues in the new cell without considering the disparity in available bandwidths. For example, the sender in Snoop-TCP is unaware of the handoff and keeps the old congestion window in the new cell, which may result in congestion. Mobile TCP [14] does not provide congestion avoidance by itself but relies on a bandwidth broker to allocate bandwidth in the new location. The TCP timeout/freeze scheme [13] also resumes the previous congestion window size at the sender, and thus has the same problem as Snoop-TCP.
As mentioned earlier, since the time-sensitive nature of stream media prevents congestion loss from being recovered through retransmission, the quality of stream media is greatly degraded by congestion. In addition, the window-based congestion control in TCP causes high rate fluctuation, which is not preferred by stream media applications. In fact, none of the existing TCP mobility management techniques can provide smooth handoffs for stream media applications, which tolerate neither disruption in the data stream nor drastic quality degradation.

Recently, two new transport layer protocols that maintain multiple connections between the sender and the receiver have been proposed: SCTP (Stream Control Transmission Protocol) [18] and multi-path TCP [19]. SCTP is proposed to deal with multi-homed connections in the future Internet. Although SCTP maintains multiple IP addresses for a multi-homed destination, its current specification [18] does not use multiple paths simultaneously; it uses the alternative path(s) only to retransmit lost packets. We note that our proposed multi-path handoff scheme could be implemented as a stream media extension within the framework of SCTP. Multi-path TCP is proposed to increase the throughput of reliable data transmission in ad hoc networks by exploiting the aggregated bandwidth of multiple paths. The main effort in [19] is to evaluate the timeout and fast-retransmission misbehavior triggered by bandwidth and delay disparity among multiple paths; a smoothed round-trip time and retransmission timeout algorithm and an out-of-order transmission algorithm are proposed to deal with this problem. However, the problem of improper congestion window shrinkage caused by packet loss is not addressed.
3. SCHEME DESCRIPTION
In this section, the proposed multi-path handoff scheme for stream media is described in detail. In Section 3.1, the overall system architecture of the proposed scheme is presented. It consists of four major components: the path management module, the multi-path distributor module, the multi-path collector module, and the rate control module. A detailed description of each of these four modules is given in Sections 3.2 through 3.5.
3.1 System Architecture
The system architecture of the proposed multi-path handoff scheme consists of four major components: the path management module, the multi-path distributor module, the multi-path collector module, and the rate control module. Figure 1 illustrates the overall system architecture of the proposed scheme.
Figure 1. System architecture of the multi-path handoff scheme.

The path management module at each end of the transport protocol dynamically installs and deletes rate control modules on data paths between a sender and a receiver, based on path indication messages received from the mobile IP protocol [1, 2]. The rate control module on each path performs a rate control algorithm to monitor the available bandwidth and to calculate the proper transmission rate on the corresponding path. After installing or deleting a rate control module, the path management module at the sender (or the receiver) informs the multi-path distributor module (or the multi-path collector module) of the existence of multiple paths. Based on the path existence information received from the path management module and the rate information provided by the rate control modules, the multi-path distributor module at the sender calculates and reports the multi-layer encoding rates to the multi-layer encoder. The multi-path distributor module is also responsible for assigning the multi-layer encoded video streams to appropriate paths. The multi-path collector module at the receiver accepts incoming packets from the rate control modules, and filters and reorders them before sending them to the decoder.
3.2 Path Management Module
The path management module is present at both the sender and the receiver. During handoffs, the path management module performs the following operations on reception of a COA update indication from the mobile IP protocol: when mobile IP reports a new COA with a PATH_ADD message, the path management module installs a rate control module for the new path and sends a RATE_MOD_READY message to the local distributor/collector to indicate the existence of a new path; when mobile IP reports the loss of a COA with a PATH_LOSS message, the path management module deletes the corresponding rate control module and sends a RATE_MOD_DELE message to the local distributor/collector to indicate that an existing path is lost. Both types of path indication messages contain a unique PATH_ID identifying the path to a mobile node. To allow a source node to maintain multiple paths simultaneously, the mobile IP simultaneous binding [1, 2] and route optimization [3] options are used. The simultaneous binding option allows a mobile node to register multiple COAs simultaneously, and the route optimization option ensures that the sender is always informed of the COA registrations directly by the receiver.
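The PATH_ADD/PATH_LOSS handling above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class names, the callback, and the placeholder fields of the rate control module are our assumptions.

```python
# Illustrative sketch of the path management logic described above.
# Class names and the notify callback are assumptions, not the paper's code.

class RateControlModule:
    """Per-path rate controller (placeholder for the TFRC-based module)."""
    def __init__(self, path_id):
        self.path_id = path_id
        self.rate = 0.0            # estimated transmission rate on this path
        self.state = "PROBING"     # PROBING or congestion avoidance

class PathManager:
    def __init__(self, notify):
        self.paths = {}            # PATH_ID -> RateControlModule
        self.notify = notify       # callback into the local distributor/collector

    def on_mobile_ip(self, msg_type, path_id):
        if msg_type == "PATH_ADD":
            # install a rate control module for the new path
            self.paths[path_id] = RateControlModule(path_id)
            self.notify("RATE_MOD_READY", path_id)
        elif msg_type == "PATH_LOSS":
            # delete the rate control module for the lost path
            self.paths.pop(path_id, None)
            self.notify("RATE_MOD_DELE", path_id)

events = []
pm = PathManager(lambda kind, pid: events.append((kind, pid)))
pm.on_mobile_ip("PATH_ADD", 1)
pm.on_mobile_ip("PATH_ADD", 2)
pm.on_mobile_ip("PATH_LOSS", 1)
# events is now [("RATE_MOD_READY", 1), ("RATE_MOD_READY", 2), ("RATE_MOD_DELE", 1)]
```

The same structure serves at both endpoints; only the notified module (distributor at the sender, collector at the receiver) differs.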
3.3 Multi-path Distributor Module
The multi-path distributor module is present only at the sender. Based on the path existence information received from the path management module and the rate information provided by the rate control modules, the multi-path distributor module calculates and reports the multi-layer encoding rates to the multi-layer encoder so that the encoder can adapt its encoding rates accordingly. The multi-path distributor module is also responsible for assigning the multi-layer encoded video streams to appropriate paths. In the proposed protocol, the source adaptive multi-layer encoder architecture presented in [20] is assumed. This architecture adjusts its encoding rates for the different layers based on feedback from the lower layer protocol. The actual encoding rate for layer i is calculated using the following function F_i:

F_i(r_i^layer, b_i, ε_i) = r_i^layer − α·ε_i + β·(b_i − T_d·r_i^layer) / τ    (1)

where r_i^layer is the feedback rate for layer i from the lower layer protocol; b_i is the buffer occupation; ε_i is the encoding error ratio; T_d is the expected time that a packet stays in the buffer; τ is the rate adjustment interval; and α and β are parameters that define the response to encoding errors and buffer occupation. The multi-path distributor is responsible for providing the feedback rate r_i^layer to the source adaptive multi-layer encoder. For simplicity, T_d and τ are assumed to be equal to one video frame time; ε_i and α are set to 0, and β is set to 1 in our experiments.

3.3.1 Functions of Multi-path Distributor
The multi-path distributor module receives RATE_MOD_READY and RATE_MOD_DELE messages from the path management module. With these path indication messages and the rate information provided by the rate control modules, the multi-path distributor performs the following three functions: 1) video encoding rate calculation, 2) path assignment, and 3) fetching video packets from the multi-layer encoder. Functions 1) and 2) are tightly coupled, and thus they are described in a single section.

3.3.1.1 Video encoding rate calculation and path assignment
Based on the proper transmission rates calculated on the multiple paths, the distributor calculates the target encoding rates. A tightly coupled additional task is to assign each video layer to appropriate paths based on the calculated target encoding rates and the rate estimation on each path. These two closely related tasks are implemented in a single algorithm, called the Video Layer-Path Adaptation (VLPA) algorithm.

The notation used in the VLPA algorithm is defined as follows:
♦ r_i is the cumulative video stream rate from layer i down to layer 0 (the base layer);
♦ r_i^layer is the encoding rate of layer i;
♦ G_i is the group of paths assigned to transmit layer i;
♦ p_j represents path j, with p_j.rate as the estimated rate and p_j.state as the congestion control state on path j.

Note that we consider two congestion control states on a path: the probing state and the congestion avoidance state. p_j.state = PROBING means that the slow start procedure is in use on path j; otherwise, the congestion avoidance procedure is in use. The congestion control states are decided by the rate control algorithm, which is explained in detail in Section 3.4.

In the VLPA algorithm, the target encoding rates of the multiple layers are calculated in the following way:
1. To avoid frequent changes in the target encoding rates, the number of video layers and the cumulative rates are first determined by the number of available paths in congestion avoidance state and their transmission rates. Therefore, for each r_i, 0 ≤ i < n, there exists a path p_j with p_j.rate = r_i and p_j.state ≠ PROBING. New paths that are in probing state are excluded from this calculation, since the transmission rates on those paths change very rapidly and would cause undesired fluctuations in the target encoding rates.
2. If the transmission rate on a path that is in probing state is higher than the highest cumulative rate calculated in step 1, one additional layer is added and the highest cumulative rate is set to the transmission rate of that path, in order to provide enough probing packets for that path.
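The two-step target-rate determination above can be sketched as follows. This is a simplified illustration under our own naming (the Path record and the function name are not from the paper); rates are in arbitrary units.

```python
# Simplified sketch of VLPA target-rate determination (steps 1 and 2 above).
# The Path record and function name are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Path:
    rate: float        # estimated transmission rate (p_j.rate)
    state: str         # "PROBING" or "AVOIDANCE" (p_j.state)

def vlpa_cumulative_rates(paths):
    # Step 1: cumulative rates come only from paths in congestion avoidance,
    # sorted ascending so that r_0 <= r_1 <= ... <= r_{n-1}.
    rates = sorted(p.rate for p in paths if p.state != "PROBING")
    # Step 2: if a probing path exceeds the highest cumulative rate,
    # add one more layer at that path's rate to feed it probing packets.
    probing = [p.rate for p in paths if p.state == "PROBING"]
    if probing and (not rates or max(probing) > rates[-1]):
        rates.append(max(probing))
    return rates

paths = [Path(300.0, "AVOIDANCE"), Path(800.0, "AVOIDANCE"), Path(1000.0, "PROBING")]
print(vlpa_cumulative_rates(paths))   # [300.0, 800.0, 1000.0]
```

A probing path slower than the highest avoidance-state rate adds no layer, which is exactly how the algorithm keeps fast-changing probing rates out of the target encoding rates.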
The path assignment task in the VLPA algorithm is carried out in the following way:
1. Video layer i is transmitted redundantly through all the paths whose transmission rates are greater than or equal to the cumulative rate r_i. That is, path p_j is included in path group G_i if the cumulative rate r_i is less than or equal to p_j.rate.
2. If a path is in probing state (i.e., a new path) and there is no cumulative rate equal to the transmission rate of this path, an additional layer is assigned to this path to assure that this new path will get enough probing packets. In other words, if path p_j is in probing state and r_i < p_j.rate < r_{i+1}, path p_j is also included in path group G_{i+1}.

Note that with the above path assignment policy, the base layer (i.e., layer 0) is transmitted on all the paths, and the more important a video layer is, the more paths are assigned to it. Figure 2 presents the detailed procedure of the VLPA algorithm in pseudo code.
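The two path-assignment rules can be sketched as follows, continuing the illustrative Path record from before (names are ours, not the paper's):

```python
# Sketch of the VLPA path-assignment rules above. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Path:
    rate: float
    state: str   # "PROBING" or "AVOIDANCE"

def assign_groups(paths, cum_rates):
    # cum_rates are the cumulative layer rates r_0 <= r_1 <= ...
    groups = {i: [] for i in range(len(cum_rates))}
    for p in paths:
        # Rule 1: layer i is carried on every path with p.rate >= r_i.
        for i, r in enumerate(cum_rates):
            if r <= p.rate:
                groups[i].append(p)
        # Rule 2: a probing path whose rate falls strictly between r_i and
        # r_{i+1} is also added to group i+1 to receive probing packets.
        if p.state == "PROBING":
            for i in range(len(cum_rates) - 1):
                if cum_rates[i] < p.rate < cum_rates[i + 1]:
                    groups[i + 1].append(p)
    return groups

paths = [Path(300.0, "AVOIDANCE"), Path(800.0, "AVOIDANCE"), Path(500.0, "PROBING")]
groups = assign_groups(paths, [300.0, 800.0])
print(len(groups[0]), len(groups[1]))  # 3 2
```

With cumulative rates [300, 800] and a probing path at 500, the base layer lands on all three paths, while layer 1 goes to the 800 path plus the probing path, matching the note that more important layers get more paths.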
3.3.2 Fetching video packets from the multi-layer encoder
Since stream media prefers a smooth flow of packets to bursts of packets, the multi-path distributor periodically fetches and sends packets from the application buffer. Specifically, the video stream is transmitted at the maximum rate among all paths, with an evenly spaced interval between adjacent packets. The time interval T used to fetch and send packets is set using the following equation:

T = video_packet_size / max over all paths (p_j.rate)    (2)
After a packet is fetched from the application buffer, the multi-path distributor checks the layer identifier of the packet and assigns a packet of layer i to the rate control modules of the paths in path group G_i. A PKT_DELIVER message is sent to the rate control module of path j if the fetched packet is assigned to path j.
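The fetch interval of equation (2) and the layer-to-path dispatch can be sketched as follows; the function names and the send callback are our assumptions, and the rates in the example are illustrative, not the paper's settings.

```python
# Sketch of periodic packet fetching (equation (2)) and PKT_DELIVER dispatch.
# Function names, the send callback, and example values are assumptions.

def fetch_interval(video_packet_size_bits, path_rates_bps):
    # T = video_packet_size / max over all paths (p_j.rate)
    return video_packet_size_bits / max(path_rates_bps)

def dispatch(packet_layer, groups, send):
    # A packet of layer i goes to the rate control module of every path
    # in group G_i, via a PKT_DELIVER message.
    for path in groups.get(packet_layer, []):
        send("PKT_DELIVER", path, packet_layer)

# With 1024-byte packets (8192 bits) and path rates of 0.3 and 0.8 Mb/s,
# packets are spaced 8192 / 800000 = 0.01024 s apart.
print(fetch_interval(8192, [300000.0, 800000.0]))  # 0.01024
```

Pacing at the maximum per-path rate guarantees that no path group ever receives packets faster than its own rate control module expects.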
3.4 Rate Control Module For pages other than the first page, start at the top of the page, and continue in double-column format. The two columns on the last page should be as close to equal length as possible. Rate control module is present at both the sender and the receiver on each path. It is installed by the path management module, and it performs a rate control algorithm to calculate the proper transmission rate and to monitor packet flows on the corresponding path. Although any rate control algorithm can be used with the proposed architecture, TFRC (TCP Friendly Rate Control) rate control algorithm [16, 17] is chosen as the rate control algorithm for the rate control modules. TFRC not only satisfies the general requirement imposed on all Internet applications, which is to react to congestion fairly with TCP flows, but also is preferred by stream media applications since it can maintain a sustainable rate against intermittent packet loss.
Figure 2. VLPA algorithm. VLPA algorithm first sorts the existing paths according to the transmission rates calculated by the rate control module on each path. It then sets the number of video layers to the number of available paths in congestion avoidance state, and sets the cumulative rates to the transmission rates of the paths in congestion avoidance state. If the path with the highest transmission rate is in probing state, an additional layer is added, and the corresponding cumulative rate is set to the transmission rate of that path. Based on the calculated cumulative rates, it then calculates the encoding rate of each layer and assigns paths to each layer. The above algorithm is executed periodically, and the updated encoding rates are reported to the multi-layer video encoder in a RATE_REPORT message. The time interval between two consecutive reports is set to one frame time.
As defined in [16], each rate control module at the sender periodically exchanges TFRC rate control information with the corresponding rate control module at the receiver. The TFRC rate control information exchanged between the sender and the receiver includes sender reports and receiver reports. Sender report is issued by the sender, and it is used to update the rate control information, such as RTT (Round Trip Time), at the receiver for the controlled path(s). Receiver report originates from the receiver, and it reports the observed congestion status on the path(s) to the sender. The rate control information included in the sender/receiver report is inherited from the TFRC definition, with addition of one more piece of information, the path ID. Since the proposed multi-path handoff scheme controls multiple paths simultaneously, the sender/receiver report includes path ID so that it can be directed to the corresponding rate control module. TFRC uses slow start algorithm to carefully probe the available bandwidth on a path at the beginning of a session or after a timeout, and for the rest of the period in the session it uses equation-based rate control algorithm. Based on this, we define two congestion control states: probing state in which slow start algorithm is used and congestion avoidance state in which equation-based rate control algorithm is used.
The rate control modules at the sender also monitor the packet flows on each path to make sure the calculated rate on each path is not exceeded. Even though the multi-layer encoder is aware of the calculated transmission rate for each video layer, this monitoring function is necessary since the multi-layer encoder may not keep up with the precise encoding rate for each video layer due to the encoding rate adjustment granularity and/or the delays in rate adaptation.
keeping the buffer size and delivery delay small. After conducting numerous experiments, we recommend to set this interval around 100ms. To inform the application of available packets in the buffer, the multi-path collector sends out a BUF_READY message after it flushes the buffered packets to the shared buffer between the transport protocol and the application.
On receipt of the PKT_DELIVER message from the multi-path distributor, the rate control module monitors the packet flow using
The performance of the proposed multi-path handoff scheme is evaluated through extensive simulation using OPNET [22]. The purpose of the simulation is two-fold: first to compare the proposed scheme with other existing schemes, and second to investigate the impact of various network parameters on the performance of the proposed multi-path handoff scheme.
a token bucket algorithm. For the token bucket for path the
token_rate
to
p j .rate
and
p j , we set
token_buck_size
to
p j .rate× p j .RTT . By setting the token bucket size to p j .rate× p j .RTT , we regulate the packet burst size to be at most the window size of a TCP connection with the same transmission rate. The following rules are used by the rate control module in monitoring packet flows on each path: 1.
On a path in probing state, packets are allowed to be transmitted only if the token bucket of that path has enough tokens. Since an additional video layer may be assigned to a path in probing state to provide enough probing packets, the rate of packet flow assigned to that path may exceed the transmission rate on the new path. Therefore, strict regulation needs to be applied on paths in probing state to avoid congestion.
2.
On all paths not in probing state, base layer packets are always accepted to guarantee the successful reception of base layer.
3.
Enhancement layer packets are always checked against the number of tokens in the token bucket to avoid congestion.
Packets that are not accepted to any path are simply discarded. Since base layer packets are given higher acceptance priority and are transmitted on multiple paths, dropping of packets will mostly happen to enhancement layer packets, and thus, it will not have significant impact on the quality of video. The rate control modules at the receiver simply accept the arriving packets and send PKT_ARRIVAL messages to the multi-path collector.
3.5 Multi-path Collector Module Multi-path collector module is present only at the receiver. Since the proposed multi-path handoff scheme transmits duplicate packets through multiple paths simultaneously, out-of-order and/or duplicate arrivals may occur. The multi-path collector module is responsible for buffering and reordering all the packets received from multiple paths. It is also responsible for filtering out the redundant packets before delivering them to the application. For stream media application, receiving the video stream on time is more critical than receiving the stream without loss. Therefore, when a lost packet is identified, the multi-path collector will wait for the packet to arrive for a certain time interval. If the packet does not arrive within the time interval, it delivers its buffer to the application with the unfilled holes. The time interval should be set with a proper value to accommodate the delay difference among the multiple paths and the resulting out-of-order arrivals, while
4. SIMULATION MODEL
In this paper, the proposed multi-path handoff scheme is compared with two types of single-path handoff schemes: singlepath handoff scheme with forwarding and single-path handoff scheme without forwarding. Both types of single-path handoff schemes maintain only a single path from source to destination during handoffs. In single-path handoff scheme with forwarding, the fast forwarding extension of mobile IP is used to forward packets from the old base station to the new base station. In single-path handoff scheme without forwarding, no network layer mobility support is provided to forward packets from the old base station to the new base station. Note that the simulated singlepath handoff scheme with forwarding can collectively represent various network layer mobility management techniques discussed in Section 2 since it removes the re-routing losses, achieving the common goal of all network layer mobility management techniques. As the transport protocol of the single path handoff schemes, TFRC rate control algorithm is used to have a fair comparison with the proposed multi-path handoff scheme. Note that there is no mobility enhancement in TFRC specification [16]. One possible solution would be to apply some of the ideas employed in the transport layer mobility management techniques discussed in Section 2 to TFRC. This seemingly straightforward solution, however, cannot be applied due to the problems that those existing transport layer techniques may cause for stream media application, as mentioned in Section 2. Therefore, plain TFRC is used as the transport protocol of the single path handoff schemes. As mentioned earlier, the targeted network environment is an infrastructure-based wireless network with cell overlapping areas. In the proposed multi-path handoff scheme, a mobile node registers multiple COAs and keeps connections to two neighboring base stations simultaneously within the overlapping area between two wireless cells. 
In both single-path handoff schemes, it is assumed that a mobile node switches its single COA registration when it passes the middle line of the overlapping area between two wireless cells. Our simulation experiments concentrate on analyzing a single handoff instance with various parameter settings. Since a more general scenario with a sequence of handoffs actually consists of individual handoffs with different parameter settings, a thorough investigation of single handoff instances is sufficient to provide a complete understanding of how the different schemes behave under various handoff situations. In addition, it is assumed that a handoff occurs between two neighboring wireless cells, since this is the most common case.
4.1 Network Configuration
Figure 3 shows the network configuration used in our simulations.
Figure 3. Simulation network model.
The wireless domain is modeled by a simple infrastructure-based wireless network. It includes base stations for two neighboring wireless cells, physical links connecting the base stations with the wireless gateway, and a wireless gateway connecting the wireless domain to the wired backbone network. Wireless links are 802.11b WLAN 11Mbps links with a 1e-03 to 1e-05 bit error rate. The coverage radius of each wireless cell is 300 meters, and the distance between the two neighboring cells is 500 meters. Therefore, the longest distance across the overlapping area is 100 meters. In each wireless cell, a noise node is placed to simulate the background traffic. The wired backbone network includes the physical links and routers connecting the wireless gateway to the home agent and the corresponding node. All wired links in the simulation have 155Mbps link capacity, a 1e-12 bit error rate, and 10µs of propagation delay.
4.2 Traffic Model
The video source traffic is generated using the source adaptive multi-layer encoder model presented in Section 3.3, which adjusts its encoding rates for the different layers based on feedback from the transport layer. In the single-path handoff schemes, only the base layer is generated since only one rate is available. It is assumed that the size of a video packet is fixed at 1024 bytes. To simulate wireless cells with different amounts of available bandwidth, background traffic is introduced in the wireless cells. Background traffic is generated by a Poisson process, and it is directed from the base stations to the noise nodes. Since the proposed scheme does not rely on any specific MAC layer, background traffic is generated only in the direction from the base stations to the noise nodes, to avoid the irrelevant impact of the MAC layer competition mechanism on the bandwidth share. In reality, the bottleneck link in wireless networks is typically the downlink from the base station to the mobile nodes.
4.3 Simulation Parameters
To investigate the impact of various network parameters on the performance of the proposed multi-path handoff scheme, the following network parameters are varied: 1. the available bandwidth in the neighboring wireless cells, 2. the round trip time from the source to the destination, and 3. the moving speed of a mobile node. In our simulation, the available bandwidth in a wireless cell is calculated as the link speed minus the volume of background traffic generated in the wireless cell. The actual throughput, however, is much lower than the available bandwidth due to packet encapsulation overhead and the control overhead of the MAC layer. In our experiments, the maximum throughput is around 0.7Mbps with the available bandwidth of 4.6Mbps, and 7Mbps with the available bandwidth of 11Mbps. In the simulation, the available bandwidth in a wireless cell is varied from 4.6Mbps to 11Mbps. The round trip time from the video source (corresponding node) to the destination (mobile node) is varied from 20ms to 200ms by tuning the propagation delay of the first link, from the corresponding node to Router01, in the simulation network model (Figure 3). The moving speed of a mobile node is varied from 4.5mph to 140mph. In each set of simulation experiments, only one of these variables is varied while the others are set to their default values. The default values of round trip time and node moving speed are 60ms and 30mph, respectively. For the available bandwidth, two scenarios are simulated: one where a mobile node moves from a low bandwidth area to a high bandwidth area, and one where a mobile node moves from a high bandwidth area to a low bandwidth area. In the first scenario, the default values of available bandwidth in the old and new wireless cells are 7.8Mbps and 9.4Mbps, respectively. In the second scenario, they are 7.8Mbps and 6.2Mbps, respectively.
4.4 Performance Metrics
The most significant difference between the proposed multi-path handoff scheme and the two compared single-path handoff schemes is the way they behave in the cell overlapping area. While a mobile node traverses the cell overlapping area, the proposed multi-path handoff scheme maintains multiple data paths, whereas the single-path handoff schemes switch from one link to another. After exiting the cell overlapping area, the proposed multi-path handoff scheme and the two compared single-path handoff schemes behave in the same way, except that they may start with different transmission rates. Therefore, we focus on the performance during the handoff period, and the performance metrics are obtained during this period. The handoff period is defined as the time during which a mobile node stays in the cell overlapping area: it starts when a mobile node enters the cell overlapping area and ends when it exits. The performance of the handoff schemes is compared with respect to two different aspects: the quality of service, and the efficiency of the scheme. To measure the quality of service of the video stream, the throughput, the loss ratio, and the goodput frame rate (defined as the average number of successfully received video frames per second) are measured.
The efficiency of each handoff scheme is also investigated in terms of the ratio of the number of unique application packets received to the total number of packets transmitted during the handoff period.
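The metrics above could be computed from per-packet send/receive logs along these lines (a sketch with an assumed log format of sequence-number lists; the simulator's actual bookkeeping may differ):

```python
def handoff_metrics(sent_seqs, recv_seqs, pkt_bytes, handoff_secs, pkts_per_frame):
    """Quality and efficiency metrics over the handoff period (illustrative).

    sent_seqs -- sequence numbers of every packet transmitted during the handoff
                 (redundantly transmitted packets appear multiple times)
    recv_seqs -- sequence numbers of every packet that arrived (with duplicates)
    """
    unique_sent = set(sent_seqs)
    unique_recv = set(recv_seqs)
    # Loss ratio: fraction of unique packets that never arrived.
    loss_ratio = 1.0 - len(unique_recv & unique_sent) / len(unique_sent)
    # Throughput in bits per second over the handoff period.
    throughput_bps = len(unique_recv) * pkt_bytes * 8 / handoff_secs
    # A frame counts as successfully received only if all of its packets arrived.
    frames = {}
    for s in unique_sent:
        frames.setdefault(s // pkts_per_frame, set()).add(s)
    good_frames = sum(1 for pkts in frames.values() if pkts <= unique_recv)
    goodput_fps = good_frames / handoff_secs
    # Efficiency: unique application packets received / total packets transmitted.
    efficiency = len(unique_recv) / len(sent_seqs)
    return loss_ratio, throughput_bps, goodput_fps, efficiency
```

Note that redundant transmissions lower the efficiency denominator without increasing the numerator, which is exactly the cost examined in Section 5.4.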
5. NUMERICAL RESULTS
In this section, simulation results are presented to evaluate the performance of the proposed multi-path handoff scheme. Specifically, the proposed multi-path handoff scheme is compared with the two single-path handoff schemes described in Section 4. In all the figures presented in this section, SPATH_NF denotes the single-path handoff scheme with no forwarding; SPATH_FF denotes the single-path handoff scheme with fast forwarding; MPATH denotes the proposed multi-path handoff scheme; and MPATH_Base denotes the base layer performance of the proposed multi-path handoff scheme. Each result is an average of 10 simulation runs.
5.1 Throughput
Figures 4 through 6 show the throughput of the different handoff schemes as a function of the available bandwidth in the new cell, the round trip time, and the node moving speed, respectively. As mentioned in Section 4.3, the available bandwidth in the new cell is varied from 4.6Mbps to 11Mbps; the round trip time from the video source to the destination is varied from 20ms to 200ms; and the node moving speed is varied from 4.5mph to 140mph. In each set of simulation experiments, only one parameter is varied while the others are set to their default values. The default values of round trip time and node moving speed are 60ms and 30mph, respectively. The available bandwidth of the old cell is assumed to be 7.8Mbps.
Modifying the TFRC used in the single-path schemes so that a location change triggers a slow start procedure could speed up rate adaptation. It would, however, impair the smoothness of handoffs, since a slow start on the single path forces the transmission rate to drop to the minimum, resulting in stream disruption. Note that the slow start used in the proposed scheme does not cause stream disruption, since the video stream keeps arriving at a steady rate through the old cell. In summary, the proposed multi-path handoff scheme always has a higher throughput than the single-path schemes. For the base layer video stream, the proposed multi-path scheme uses the minimum available bandwidth of the two neighboring cells. As a result, the base layer throughput is lower than the throughput of the single-path handoff schemes when the available bandwidth of the new cell is lower. As the available bandwidth in the new cell decreases, the base layer throughput decreases. This decrease is necessary to adapt the future transmission rate to the lower bandwidth in the new cell, and it is compensated by the transmission of enhancement layer packets in the old cell. When the new cell has higher available bandwidth than the old cell, the base layer throughput of the proposed multi-path handoff scheme is very close to the throughput of the single-path schemes.
Table 2. Parameter Setting-2
Table 1. Parameter Setting-1
Figure 5. Throughput vs. Round trip time.
Figure 4. Throughput vs. Available bandwidth in new cell.
When the available bandwidth in the new cell is higher than the old cell's, the throughput of the proposed multi-path handoff scheme increases as the available bandwidth in the new cell increases. On the other hand, the throughput of both single-path handoff schemes remains the same. The proposed scheme increases its rate more promptly than the single-path handoff schemes because, unlike the single-path handoff schemes that use the congestion avoidance mechanism, it uses the slow start mechanism and starts probing the available bandwidth earlier.
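The faster ramp-up of slow start over additive congestion avoidance can be illustrated with an idealized per-RTT round count (a simplification that ignores TFRC's equation-based smoothing; the function name is ours):

```python
def rtts_to_reach(target_rate, start_rate, mode):
    """RTT rounds needed to ramp from start_rate to target_rate (idealized).

    Slow start doubles the rate every RTT; the additive mode assumed here
    adds one start_rate-sized step per RTT, like congestion avoidance.
    """
    rate, rounds = start_rate, 0
    while rate < target_rate:
        rate = rate * 2 if mode == "slow_start" else rate + start_rate
        rounds += 1
    return rounds
```

For example, reaching 8x the starting rate takes 3 RTTs under slow start but 7 RTTs under additive increase, which is why the proposed scheme claims the new cell's bandwidth sooner.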
Figure 5 shows the throughput of the different handoff schemes as a function of the round trip time. The parameter setting used in this figure is summarized in Table 2. The figure shows that the throughput of all three simulated schemes decreases as RTT increases. This is mainly due to the following equation, which defines the relationship between the steady state throughput r and RTT in the TFRC algorithm [17]:

r = 1.22 × MSS / (RTT × √p)    (3),

where MSS and p represent the packet size and the loss rate, respectively. Note that the throughput during the rate adaptation period may be lower or higher than the steady state throughput given in Equation (3). In Case I, where a mobile node moves from a low bandwidth area to a high bandwidth area, the throughput during the rate adaptation period is lower than the steady state throughput, whereas in Case II, where a mobile node moves from a high bandwidth area to a low bandwidth area, it is higher.
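Equation (3) can be checked numerically with a small helper (illustrative; TFRC's full throughput equation also models retransmission timeouts, which this simplified form omits):

```python
from math import sqrt

def tfrc_rate(mss_bytes, rtt_secs, loss_rate):
    """Simplified steady-state TFRC throughput in bytes/s, Equation (3):
    r = 1.22 * MSS / (RTT * sqrt(p))."""
    return 1.22 * mss_bytes / (rtt_secs * sqrt(loss_rate))
```

With the paper's 1024-byte packets, a 60ms RTT, and a 1% loss rate, this gives roughly 208 KB/s; doubling the RTT halves the rate, which is the downward trend seen in Figure 5.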
Therefore, as RTT becomes larger, the rate adaptation period becomes longer, and this results in a throughput reduction in Case I and a throughput increase in Case II. As a result, as RTT increases, the throughput reduction defined by Equation (3) is amplified in Case I and offset in Case II. This is why Case I shows a more significant throughput reduction than Case II as RTT increases. Figure 5 again shows that the proposed multi-path handoff scheme achieves higher throughput than the single-path handoff schemes. The base layer throughput is robust against RTT variation. Recall that the base layer packets are always transmitted over the old path at a steady state rate; therefore, the impact of rate adaptation on the base layer throughput is kept to a minimum. As a result, a much slower decrease in base layer throughput is observed in Case I, as defined by Equation (3). In Case II, where a mobile node moves from a high bandwidth area to a low bandwidth area, the base layer throughput is kept at the higher rate in the old cell during the slow start probing phase in the new cell. Since the length of the probing phase increases as RTT increases, the decrease in the base layer throughput defined by Equation (3) is compensated by the higher base layer throughput during the probing phase in Case II.
Table 3. Parameter Setting-3
Figure 6. Throughput vs. Node moving speed.
Figure 6 shows the throughput of the different handoff schemes as a function of the node moving speed. The parameter setting used in this figure is summarized in Table 3. As the node moving speed increases, the length of the handoff period becomes shorter. On the other hand, the length of the rate adaptation period in the single-path handoff schemes is determined by the RTT, and thus it remains constant as the node moving speed varies. Therefore, as the node moving speed increases, the ratio of the rate adaptation period to the entire handoff period increases, and the impact of the rate adaptation period on the performance becomes greater. In Case I, since the transmission rate during the rate adaptation period is lower than the actual available rate in the new cell, the throughput decreases as the node moving speed increases. In Case II, since the transmission rate during the rate adaptation period is higher than the actual available rate in the new cell, the throughput increases as the node moving speed increases. In Case II, though, the impact of the rate adaptation period on the performance is less significant than in Case I, since the rate reduction takes much less time than the rate increase and the ratio of the rate adaptation period to the whole handoff period is relatively small. Figure 6 also shows that the proposed multi-path handoff scheme achieves the highest throughput over the entire range of node moving speed. The base layer throughput in the proposed multi-path handoff scheme remains constant in Case I, since the impact of the rate adaptation period is kept to a minimum, as discussed for Figure 5. The base layer throughput increases slightly in Case II. In Case II, the base layer throughput is kept high in the old cell during the new path's probing period. Since the length of the probing period does not change with the node moving speed, the ratio of the probing period to the whole handoff period becomes larger as the node moving speed increases. Therefore, the average base layer throughput increases as the node moving speed increases.
5.2 Packet Loss Ratio
There are three types of packet loss during the handoff period: re-routing loss, congestion loss, and probing loss. Re-routing loss occurs when a mobile node moves to a new location; its amount is determined by the bandwidth-delay product, and it increases as the location update delay becomes longer. Congestion loss occurs when a mobile node moves from a high bandwidth cell to a low bandwidth cell and the rate control algorithm proceeds regardless of handoffs; it increases as the bandwidth difference and the round trip time become larger. Probing loss occurs when the slow start mechanism is used to estimate the rate on a new path; packets are lost during the last RTT of the slow start probing phase, and the probing loss increases as the RTT and the available bandwidth increase.
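The three types of packet loss described in Section 5.2 can be roughly estimated as follows; these helpers are back-of-the-envelope sketches under our own simplifying assumptions, not the simulation's exact accounting:

```python
def rerouting_loss(rate_pps, update_delay_secs):
    """Packets still in flight toward the old location during the
    location update: the bandwidth-delay product."""
    return rate_pps * update_delay_secs

def congestion_loss(send_rate_pps, avail_rate_pps, adapt_secs):
    """Excess of the sending rate over the new cell's available rate,
    accumulated until the rate control adapts (zero if no excess)."""
    return max(0.0, (send_rate_pps - avail_rate_pps) * adapt_secs)

def probing_loss(avail_rate_pps, rtt_secs):
    """Slow start overshoots in its last RTT; a rough upper bound is one
    RTT's worth of the available rate."""
    return avail_rate_pps * rtt_secs
```

These estimates reproduce the qualitative trends discussed in this section: re-routing loss grows with the update delay, congestion loss grows with the bandwidth gap, and probing loss grows with RTT and available bandwidth.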
Figure 7. Packet loss ratio vs. Available bandwidth in new cell.
Figure 7 shows the packet loss ratio of the different handoff schemes as a function of the available bandwidth in the new cell. The parameter setting shown in Table 1 is used in this figure. In the single-path handoff schemes, as the available bandwidth in the new cell increases from 4.6Mbps to 7.8Mbps, the packet loss ratio decreases, since the congestion loss decreases as the bandwidth difference between the neighboring cells decreases. When the available bandwidth in the new cell is higher than that in the old cell (i.e., 7.8Mbps), the packet loss ratios of the single-path schemes remain constant. This is because packet loss in this case consists of only re-routing loss, which remains constant for varying available bandwidth in the new cell. There is no probing loss in the single-path handoff schemes, and there is virtually no congestion loss when the available bandwidth in the new cell is higher than that in the old cell. Between the two single-path handoff schemes, the single-path handoff scheme with forwarding always gives the lower packet loss ratio since it reduces the re-routing loss. In the proposed multi-path handoff scheme, the packet loss ratio remains almost constant as the available bandwidth in the new cell is varied. This is because there is no severe congestion in the new cell, since the proposed multi-path scheme uses the slow start mechanism to adjust the transmission rate. Although the probing loss increases as the available bandwidth in the new cell increases, the overall packet loss ratio does not change much, since the probing loss occurs only for the duration of one RTT. Note that the packet loss ratio of the proposed multi-path handoff scheme is kept very close to the minimum of the three simulated schemes over the entire range of available bandwidth in the new cell. This indicates that the proposed multi-path handoff scheme effectively avoids the re-routing loss as well as the congestion loss. As for the base layer packet loss, it is always kept close to zero, since the proposed multi-path scheme protects base layer packets by redundant transmissions.
Figure 8. Packet loss ratio vs. Round trip time.
Figure 8 shows the packet loss ratio of the different handoff schemes as a function of the round trip time. The parameter setting shown in Table 2 is used in this figure. In the single-path handoff scheme without forwarding, the packet loss ratio first decreases and then increases as RTT increases, in both Case I and Case II. In this scheme, there are mainly two sources of packet loss that are affected by RTT. First, from the TFRC rate control equation given in Equation (3), the following equation can be derived:

p = (1.22 × MSS / (r × RTT))²    (4).

According to Equation (4), for a given rate r, the packet loss ratio is inversely proportional to the square of RTT; that is, the packet loss decreases as RTT increases. The second source of packet loss is the re-routing loss, which is proportional to the product of the stream rate and RTT; that is, the re-routing loss increases as RTT increases. When RTT is small (i.e., smaller than 40ms), the packet loss ratio is mostly governed by Equation (4), and thus the packet loss ratio decreases as RTT increases. When RTT increases beyond 40ms, however, the re-routing loss becomes the dominating factor of packet loss, and thus the packet loss ratio increases as RTT increases. In the single-path handoff scheme with forwarding, the packet loss ratio decreases as RTT increases, and there is no increasing trend at larger RTT values. This is because the re-routing loss is recovered by the fast forwarding technique used in this scheme, and thus the packet loss ratio is governed only by Equation (4).
In the proposed multi-path handoff scheme, the packet loss ratio in Case I shows a trend similar to that of the single-path scheme without forwarding: as RTT increases, the packet loss ratio decreases for a while and then increases. The reason for the decrease is the same as in the single-path handoff scheme without forwarding. The reason for the increase, however, is not the re-routing loss but the probing loss. In Case II, where the new cell has lower available bandwidth than the old cell, this increasing trend in the packet loss ratio disappears. This is because, with the VLPA algorithm used in the proposed scheme, nearly all packets sent on the new path during the slow start probing phase are duplicates of the packets sent on the old path when the new path has lower available bandwidth. Therefore, most of the probing loss is recovered in this case. Among the three schemes simulated, the proposed multi-path handoff scheme has the lowest packet loss ratio in most cases. Only when RTT is larger than 40ms in Case I does the proposed scheme give a slightly higher packet loss ratio than the single-path handoff scheme with forwarding. This implies that the slow start mechanism used in the proposed scheme may generate more losses than the congestion avoidance used in the single-path scheme. However, this is a small cost paid to achieve the higher throughput shown in Figure 4. The loss ratio of the base layer packets in the proposed multi-path handoff scheme is again kept close to zero over the entire range of RTT due to redundant transmissions.
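Equations (3) and (4) are algebraic inverses of one another, which a small numeric round trip can confirm (function names are ours):

```python
from math import sqrt

def tfrc_rate(mss, rtt, p):
    """Equation (3): r = 1.22 * MSS / (RTT * sqrt(p))."""
    return 1.22 * mss / (rtt * sqrt(p))

def loss_from_rate(mss, rtt, r):
    """Equation (4): Equation (3) solved for p."""
    return (1.22 * mss / (r * rtt)) ** 2
```

Solving Equation (3) for p and substituting back recovers the original loss rate, and at a fixed rate r, doubling RTT reduces p by a factor of four, matching the inverse-square dependence noted above.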
Figure 9. Packet loss ratio vs. Node moving speed.
Figure 9 shows the packet loss ratio of the different handoff schemes as a function of the node moving speed. The parameter setting shown in Table 3 is used in this figure. In all the schemes, the packet loss ratio increases as the node moving speed increases. Since the length of the handoff period becomes shorter as the node moving speed increases, the total number of packets transmitted during the handoff period decreases. Recall that the node moving speed has no impact on any of the three causes of packet loss during the handoff period. As a result, the number of lost packets during the handoff period remains the same regardless of the node moving speed. Therefore, the packet loss ratio, which is defined as the ratio of the number of lost packets to the total number of packets transmitted during the handoff period, increases as the node moving speed increases. As in Figure 8, in Case I the proposed multi-path handoff scheme has a slightly higher packet loss ratio than the single-path handoff scheme with forwarding. In Case II, both single-path handoff schemes experience congestion loss, which is more significant than the probing loss incurred in the proposed multi-path scheme. Therefore, the proposed multi-path handoff scheme has the lowest packet loss ratio in Case II. Again, the base layer packet loss ratio is kept close to zero over the entire range of node moving speed.
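The speed dependence of the loss ratio follows directly from the reasoning above: the lost-packet count is fixed while the handoff period, and hence the transmitted-packet count, shrinks with speed. A sketch under those assumptions (hypothetical helper, with speed in meters per second):

```python
def loss_ratio_vs_speed(lost_pkts, rate_pps, overlap_m, speed_mps):
    """Loss ratio during the handoff period: a fixed number of lost packets
    divided by the packets sent while crossing the overlapping area."""
    handoff_secs = overlap_m / speed_mps  # handoff period shrinks with speed
    return lost_pkts / (rate_pps * handoff_secs)
```

Doubling the node speed halves the handoff period and therefore doubles the loss ratio, the upward trend seen in Figure 9.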
5.3 Goodput Frame Rate
The goodput frame rate is defined as the average number of successfully received video frames per second, and it is measured to illustrate the smoothness of the video stream perceived by users. It is assumed that the target frame rate is 25 frames per second.
Table 4. Parameter Setting-4
Figure 11. Data transmission efficiency.
Figure 10. Goodput frame rate.
Figure 10 shows the transient behavior of the goodput frame rate during the handoff period. Two cases are simulated, and the parameter setting for these two cases is summarized in Table 4. In Case I, all three schemes show relatively small fluctuations in the goodput frame rate. This is because no congestion occurs in the new cell, since it has higher available bandwidth than the old cell, and thus there is no drastic drop in the frame rate due to packet loss. In Case II, on the other hand, the two single-path handoff schemes show a drastic drop in the goodput frame rate. This is because the single-path handoff schemes continuously apply the TFRC rate control during the handoffs, which causes congestion. The drastic drop in the goodput frame rate reflects the burst of packet loss during the congestion. The proposed multi-path handoff scheme shows a smooth frame rate even in Case II. This figure shows that the proposed multi-path handoff scheme provides smooth handoffs without any transmission disruption or drastic quality degradation. The case of an extremely high node moving speed (110mph) was also simulated, and a similar result was found in that case, too. Due to space limitations, the result for that case is omitted.
5.4 Data Transmission Efficiency
The main cost paid to achieve higher performance in the proposed multi-path handoff scheme is the bandwidth wasted on redundant packet transmissions during handoffs. To evaluate this cost, the data transmission efficiency is measured. Data transmission efficiency is defined as the ratio of the number of unique application packets received to the total number of packets transmitted during the handoff period. Since the amount of redundant transmission is mainly determined by the available bandwidth in the neighboring cells, only the available bandwidth in the new cell is varied in this investigation. Figure 11 shows the data transmission efficiency of the different handoff schemes as a function of the available bandwidth in the new cell. Note that the transmission efficiency of the single-path handoff schemes is defined as (1 - packet loss ratio), since there is no redundant transmission. Between the two single-path handoff schemes, the single-path handoff scheme with forwarding has slightly higher transmission efficiency since it removes the re-routing loss. Due to the redundant transmissions during the handoff period, the proposed multi-path handoff scheme has lower transmission efficiency than the single-path handoff schemes, as expected. Since the proposed multi-path handoff scheme redundantly transmits only base layer packets, at the minimum available rate of the two neighboring cells, the lowest transmission efficiency is measured when both cells have the same amount of available bandwidth (i.e., 7.8Mbps). In this case, only base layer packets are redundantly transmitted over the two paths. At this cost, the proposed multi-path handoff scheme achieves higher overall throughput by exploiting the available bandwidth on multiple paths, and a lower packet loss ratio by effectively reducing the re-routing loss and the congestion loss, as seen in Figures 4 and 7. In addition, since the proposed multi-path handoff scheme protects base layer packets by redundant transmissions, the packet loss ratio of the base layer is always kept close to zero.
6. CONCLUSION
In this paper, a multi-path smooth handoff scheme for stream media applications is proposed. Through extensive simulations and careful analysis of the numerical results, it is shown that the proposed multi-path handoff scheme has the following features: a) it protects important application data (i.e., base layer video packets) through redundant transmissions on multiple paths during handoffs; b) it avoids drastic quality degradation and stream disruptions; c) it avoids congestion and provides a smooth stream rate regardless of bandwidth disparity; d) it provides smooth video quality adaptation to the new available bandwidth by applying a source adaptive multi-layer encoder to the video stream; e) it provides end-to-end mobility support for stream media applications without requiring any additional network layer support other than obtaining a new COA. The cost of the proposed scheme is also carefully evaluated in terms of transmission efficiency.
7. FUTURE WORK
We are currently implementing a prototype system that integrates the proposed multi-path handoff scheme into a real multi-layer encoder. We have successfully integrated a wavelet multi-layer encoder with our simulation model and have obtained some preliminary test results for real video traces. The test results show that our proposed multi-path handoff scheme provides better user-perceived video quality than the other schemes. We are also cooperating with other research groups to implement the proposed scheme in a real video communication system. By doing this, we plan to investigate more user-subjective performance aspects of stream media transmission in the targeted wireless environment. We are also considering extending multi-path transmission to reliable data applications. Such an extension can increase throughput and, at the same time, reduce retransmissions and timeouts in reliable data transmission by exploiting the aggregated bandwidth on multiple paths. Our work will differ from [19] in that we will concentrate more on the bandwidth management of multiple paths than on timeout and retransmission algorithms.
8. REFERENCES
[1] W. Fritsche and F. Heissenhuber, "Mobile IPv6: Mobility Support for Next Generation Internet", IPv6 Forum white paper, 2000.
[2] C. Perkins, Ed., "IP Mobility Support for IPv4", RFC 3344, Aug. 2002.
[3] D. B. Johnson and C. Perkins, "Route Optimization in Mobile IP", Internet draft, draft-ietf-mobileip-optim-09, work in progress, Feb. 2000.
[4] E. Gustafsson, A. Jonsson, and C. Perkins, "Mobile IP Regional Registration", Internet draft, draft-ietf-mobileip-reg-tunnel-03, work in progress, July 2000.
[5] H. Soliman, C. Castelluccia, K. El-Malki, and L. Bellier, "Hierarchical Mobile IPv6 Mobility Management", Internet draft, draft-ietf-mobileip-hmipv6-07.txt, work in progress, Oct. 2002.
[6] A. T. Campbell, J. Gomez, S. Kim, Z. Turanyi, C-Y. Wan, and A. Valko, "Design, Implementation and Evaluation of Cellular IP", IEEE Personal Communications, Special Issue on IP-based Mobile Telecommunications Networks, June/July 2000.
[7] R. Ramjee, K. Varadhan, L. Salgarelli, S. Thuel, S. Y. Wang, and T. La Porta, "HAWAII: A Domain-based Approach for Supporting Mobility in Wide-area Wireless Networks", IEEE/ACM Transactions on Networking, June 2002.
[8] K. El-Malki and H. Soliman, "Fast Handovers for Mobile IPv6", Internet draft, draft-ietf-mobileip-fastmipv6-00.txt, work in progress, Feb. 2001.
[9] A. Helmy, "A Multicast-based Protocol for IP Mobility Support", ACM SIGCOMM Second International Workshop on Networked Group Communication (NGC 2000), Palo Alto, Nov. 2000.
[10] A. T. Campbell, J. Gomez, S. Kim, Z. Turanyi, C-Y. Wan, and A. Valko, "Comparison of IP Micro-Mobility Protocols", IEEE Wireless Communications Magazine, Vol. 9, No. 1, Feb. 2002.
[11] A. Bakre and B. R. Badrinath, "I-TCP: Indirect TCP for Mobile Hosts", in Proc. 15th Int'l Conf. on Distributed Computing Systems, May 1995.
[12] H. Balakrishnan, S. Seshan, and R. H. Katz, "Improving Reliable Transport and Handoff Performance in Cellular Wireless Networks", ACM Wireless Networks, Vol. 1, No. 4, Dec. 1995.
[13] T. Goff, J. Moronski, and D. Phatak, "Freeze-TCP: A True End-to-End Enhancement Mechanism for Mobile Environments", in Proc. of INFOCOM, 2000.
[14] K. Brown and S. Singh, "M-TCP: TCP for Mobile Cellular Networks", ACM SIGCOMM Computer Communication Review, pages 19-43, 1997.
[15] R. Braden, "T/TCP -- TCP Extensions for Transactions: Functional Specification", RFC 1644, July 1994.
[16] M. Handley, J. Padhye, S. Floyd, and J. Widmer, "TCP Friendly Rate Control (TFRC): Protocol Specification", Internet draft, draft-ietf-tsvwg-tfrc-05.txt, work in progress, Oct. 2002.
[17] S. Floyd, M. Handley, J. Padhye, and J. Widmer, "Equation-Based Congestion Control for Unicast Applications", in Proc. of SIGCOMM '00, Aug. 2000.
[18] R. Stewart, Q. Xie, K. Morneault, C. Sharp, et al., "Stream Control Transmission Protocol", RFC 2960, Oct. 2000.
[19] D. S. Phatak and T. Goff, "A Novel Mechanism for Data Streaming Across Multiple IP Links for Improving Throughput and Reliability in Mobile Environments", in Proc. of INFOCOM '02, Vol. 2, June 2002.
[20] B. Vickers, C. Albuquerque, and T. Suda, "Source-Adaptive Multi-Layered Multicast Algorithms for Real-Time Video Distribution", IEEE/ACM Transactions on Networking, Dec. 2000.
[21] J. H. Schiller, "Mobile Communications", Addison-Wesley, Jan. 2000.
[22] OPNET simulator, http://www.opnet.com