Evaluation of Linux Bonding Features
Stefan Aust, Jong-Ok Kim, Peter Davis, Akira Yamaguchi, Sadao Obana
ATR Adaptive Communications Research Laboratories
2-2-2 Hikaridai, Keihanna Science City, Kyoto 619-0288, Japan

Abstract
The paper contains an evaluation of the current Linux bonding implementation for wired interfaces. The Linux bonding provides methods to aggregate multiple wired interfaces to support load balancing, fault-tolerance and throughput improvement. The intention is to identify the performance of the current Linux bonding implementation and to use the results and assumptions for future discussions about bonding of wireless interfaces. The paper presents details about interface bonding in a RedHat Linux system and discusses measurement results of two and three bonded LAN interfaces using the round-robin bonding mode.
I. INTRODUCTION
Interface bonding allows the aggregation of multiple wired interfaces into a single bonded interface and provides features such as increased throughput, load balancing and fault-tolerance. Bonding is also known as link aggregation, port trunking or striping. In this paper we focus on the Linux bonding implementation. In [1] the Linux implementation is described in detail, including the different bonding algorithms and settings. We are particularly interested in using the Linux bonding to bond multiple wireless interfaces in order to improve the performance of wireless communications. However, the current Linux bonding implementation was designed for wired interfaces, such as Ethernet or Fast Ethernet. Moreover, the literature does not contain a quantitative analysis of the performance of Linux bonding even for wired links. Hence it is not clear what performance improvements can be expected when using Linux bonding for wireless interfaces. This paper presents a quantitative evaluation of Linux bonding for Ethernet links. Specifically, this paper presents an in-depth study of the bonding features with measurement results of two and three bonded LAN interfaces using the round-robin bonding mode.
II. RELATED WORK
There is a lack of published information about the performance of Linux bonding for wired networks. Several documents are available, [2], [3], [4], which
show how to set up the bonding for wired interfaces, but they do not analyze how much throughput improvement can be achieved or how much packet loss and how many out-of-order packets occur when the Linux bonding is used. Bonding technologies for wired interfaces are used by several companies and vendors of LAN switches. These companies provide so-called port trunking on their switches and allow multiple switch ports to be used in parallel for a concurrent use of wired connections. Bandwidth improvement as well as fault tolerance can be offered in combination with load-balancing features. Bonding is also referred to as striping [5] in communication networks and is based on layer 2 implementations allowing sending and receiving datagrams via multiple interfaces. Reordering features are combined with the striping methods, reducing the number of unwanted out-of-order packets and improving the striping performance [6]. Out-of-order packets occur when multiple interfaces are used and packets which belong to the same data flow are distributed via multiple interfaces. Due to jitter and latency the receiver will get the incoming packets in a non-sequential order. This is a typical phenomenon for sending datagrams via multiple interfaces. However, striping technologies are not widely used, are proprietary and are not standardized. In contrast, the Linux bonding is shipped with each Linux kernel and any Linux user can use this feature for wired bonding, e.g., to improve the throughput between two PCs with multiple Ethernet connections. This concurrent use of wired interfaces is also named link aggregation. An interface refers to the local network interface on a PC, whereas a link refers to the connection between interfaces (local and remote interface) on separate PCs. There have been standardization activities which also led to an implementation of a mode for the Linux bonding, named IEEE 802.3ad link aggregation [7], [8].
The latest version of the Linux bonding can be downloaded at the Sourceforge homepage [9].
III. LINUX BONDING FEATURES
A. Interface Bonding
With the Linux bonding implementation, multiple wired interfaces can be aggregated into a logical bonded interface. The bonding interface takes the description bond0 and can be configured with ifconfig as any other Linux interface, having an IP address and a MAC address. The bonding interface usually gets the MAC address from the first bonded interface. During the bonding of the wired interfaces, each bonded interface changes its own MAC address to the MAC address of the bonding interface. This avoids confusing attached switches or routers by having one IP address related to one MAC address of the bonding interface bond0. Multiple wired interfaces can be bonded and multiple bonding interfaces can be defined (bond0, bond1, bond2, ...). It is possible to bond multiple wired interfaces on a single PC that is attached to one or multiple switches or routers. The bonding also allows two PCs to set up bonded interfaces to communicate with each other via bonded communication links. This allows throughput improvement and provides fault-tolerance. The throughput can be improved by the concurrent use of wired interfaces. The fault-tolerance can be used to switch to another wired connection when the default connection fails.
B. Bonding Modes
The current bonding implementation provides 7 different bonding modes [1]. These modes can be selected by the user to bond wired interfaces and links in a specific combination. In Linux, wired interfaces can be bonded to provide a backup interface when one of the interfaces fails (removed, broken, etc.). The bonding of wired links allows load balancing between two or more wired connections and depends on the bonding mode, i.e., the bonding algorithm. Table 1 shows the Linux bonding modes in detail. The first mode in Table 1 is the round-robin mode (mode=0). This is the default mode for the interface bonding and provides fault-tolerance and load balancing.
Packets are transmitted in a sequential order between the bonded interfaces. The function of the active-backup mode (mode=1) is to provide an always-on connectivity. If the first interface (primary interface) fails, it will switch the traffic to the second available interface. Load balancing is not provided in this mode. The balance-XOR mode (mode=2) provides load balancing and fault-tolerance based on a hash combination of the interfaces that is defined in [1]. This allows a load balancing that can be adjusted to the related interfaces. The broadcast mode (mode=3) allows sending the same data to each bonded interface. This is to provide a fault tolerance where all attached switches may receive the same data.
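The mode is selected when the bonding module is loaded; a brief sketch, assuming root privileges (the mode parameter is from the bonding HOWTO [1]):

```shell
# Select the balance-xor policy (mode=2) when loading the module
modprobe bonding mode=2

# The active mode can be verified afterwards:
grep "Bonding Mode" /proc/net/bonding/bond0
```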
Mode  Bonding Algorithm  Description
0     Balance-RR         Round robin policy that transmits packets in sequential order between the wired interfaces.
1     Active-Backup      Active-backup policy. Second interface for fault-tolerance (failover solution).
2     Balance-XOR        Provides load balancing and fault tolerance.
3     Broadcast          Broadcast of datagrams on each interface.
4     802.3ad            IEEE 802.3ad dynamic link aggregation standard.
5     Balance-tlb        Adaptive transmit load balancing.
6     Balance-alb        Adaptive load balancing including balance-tlb plus receive load balancing.
Table 1: Overview of Linux bonding modes

The 802.3ad mode (mode=4) provides a standardized link aggregation mode, named IEEE 802.3ad. This standard defines a link aggregation control protocol (LACP) that is used to negotiate the link aggregation between wired interfaces [8]. This mode requires that both ends of the bonded link support the 802.3ad mode and have the same link speed. The last two modes in Table 1, balance-tlb (mode=5) and balance-alb (mode=6), provide extended load balancing functions. The load balancing for both modes is related to the current load of each bonded interface. Both modes provide load balancing for outgoing traffic, whereas balance-alb also provides load balancing for incoming traffic, which is not supported by the balance-tlb mode. It is not required that both ends support the mode, but it is essential that the bonding is able to read out the link speed of each bonded interface.
C. Bonding Libraries
Currently there is a lack of an overview of the implemented bonding libraries and classes. For an extension of the bonding features, such as for bonding of wireless interfaces and wireless status detection, a detailed view of the implementation is needed. Fig. 1 shows the libraries and classes of the Linux bonding implementation. Two levels of integration can be identified. The first level is the user level that contains the ifenslave.c and if_bonding.h modules. The ifenslave.c is the user-level tool for enslaving wired interfaces and can be recompiled when a new kernel has been installed. The ifenslave.c can be found in /kernel/Documentation/net/, where the bonding HOWTO [1] is located. This class is related to if_bonding.h, which provides the required bonding features for Ethernet.
Figure 1: Linux bonding libraries (user level: ifenslave.c, if_bonding.h; kernel level: bonding functions and algorithms, ethtool.h, enslaving of Ethernet interfaces; bonded interfaces eth0 ... ethn)

The second level of the bonding implementation is located at the kernel level. The main class for the bonding is the bond_main.c module that contains the bonding implementation and the main bonding algorithms or modes. The other libraries contain additional bonding features such as the adaptive load balancing and the 802.3ad standard implementation. To receive the status information of wired interfaces, the bonding calls the functions of ethtool.c to obtain the speed and duplex mode of the wired interface. The mii.c provides the link status that is used by the bonding to send data packets only to interfaces which are available (UP/DOWN status).
D. Status Detection
The status detection is one of the most essential parts of the interface bonding. The bonding has to identify which interface is active and able to send or receive data. The bonding can aggregate active interfaces for the distribution of data packets, e.g., in round-robin mode. If an interface is down, the bonding has to identify the status to avoid sending packets to this interface. The status detection of the Linux bonding uses the Media Independent Interface (MII) status detection that is implemented in mii.c and defines the two states UP/DOWN. The required information for the link speed and the duplex mode is received from the ethtool.c implementation. The ethtool is a Linux tool that also provides commands to display the configuration of LAN interfaces and to change the settings (see Linux man-pages of ethtool).

Figure 2: Wired bonding scenario with three interfaces (two PCs connected by wired links A, B, C; PC 1: eth0 IP=10.0.0.1, eth1 IP=10.0.1.1, eth2 IP=10.0.2.1, bond0 10.0.10.1 MAC=A:A:A:A:A:A; PC 2: eth0 IP=10.0.0.2, eth1 IP=10.0.1.2, eth2 IP=10.0.2.2, bond0 10.0.10.2 MAC=D:D:D:D:D:D)

IV. BONDING SETUP
For Linux bonding setup, the kernel has to be configured to support the required bonding feature. The bonding has to be selected as a module before compiling the kernel, e.g., with make xconfig. The bonding module can be found under /Device Drivers/Network device support/Bonding driver support. Compiling the bonding as a module is mandatory for the bonding to receive status information of the wired
interfaces. Fig. 2 shows the bonding setup of two PCs which use three wired interfaces (eth0, eth1, eth2) for wired bonding. Each wired interface is configured with an IP address and has a unique MAC address. For the setup, three Gigabit Ethernet cards were selected for each PC to provide three LAN ports. It was found during the setup that there is a limitation when multiple Gigabit LAN cards are installed in one PC (see experimental results). At first the bonding module has to be loaded using the Linux modprobe command in combination with the selected bonding mode. Typing modprobe bonding mode=0 will set up the bonding module using the round-robin algorithm. After defining the bonding mode, the bond0 interface can be configured as any other interface using the ifconfig command. The bond0 interface gets an IP address and is ready to enslave multiple wired interfaces.

[root@rpc618 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v2.6.3 (June 8, 2005)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:90:cc:c2:da:c5
Slave Interface: eth1
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:90:cc:c2:da:c4
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:90:cc:c3:07:af

Figure 3: Linux bonding messages in /proc/net/bonding

Using the
ifenslave command, several wired interfaces can be bonded. Note that during the bonding the system messages should be observed to detect bonding errors. This can be done by typing tail -f /var/log/messages as root. After the bonding, the bond0 interface and the bonded interfaces should be inspected with ifconfig. It will show which interfaces changed their MAC address and the IP configuration related to the bonding. The bonding also establishes a new entry in /proc/net/bonding that shows the bonding interfaces as well as the wired interfaces which are bonded. It also contains the MII status and counts the link failures of each link disruption. Fig. 3 shows the kernel bonding messages of /proc/net/bonding in detail. The bonding setup in this paper is based on two PCs (3.8 GHz CPU, 4 GB RAM) which are installed with Red Hat 4 ES with an upgraded 2.6.13 kernel. Both PCs are equipped with three Gigabit LAN cards (1000Base-T, 32-bit PCI). The wired interfaces are connected in pairs using CAT6 UTP cross-over LAN cables. The Linux PCs run the current Linux bonding implementation, version 2.6.3 from June 2005 (see Fig. 3).
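The complete setup and measurement sequence described above can be sketched as a shell session (IP addresses and interface names follow Fig. 2; the Iperf parameters are illustrative):

```shell
# Load the bonding module with the round-robin algorithm (mode=0)
modprobe bonding mode=0

# Configure the logical interface bond0 like any other interface
ifconfig bond0 10.0.10.1 netmask 255.255.255.0 up

# Enslave the three wired interfaces to bond0
ifenslave bond0 eth0 eth1 eth2

# Watch the system messages for bonding errors (as root)
tail -f /var/log/messages &

# Inspect the bonding status: mode, MII status, link failure counters
cat /proc/net/bonding/bond0

# UDP throughput test with Iperf: 600 Mbps sending rate, 1.4 KB packets,
# 250 s duration ("iperf -s -u" must be running on the remote PC first)
iperf -c 10.0.10.2 -u -b 600M -l 1400 -t 250
```

These commands require root privileges and the actual hardware setup of Fig. 2; they are a sketch of the procedure, not a portable script.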
V. EXPERIMENTAL RESULTS
To evaluate the bonding performance the Iperf tool [10] was used, sending UDP traffic via the bonding interface. The throughput was measured by sending UDP traffic with different sending rates from 100 Mbps up to 600 Mbps. It was found that the wired Gigabit interfaces only provide up to 320 Mbps per interface (for 1.4 KB packet size) when the PC was equipped with two or more Gigabit cards, even though a single installed Gigabit card can provide up to 980 Mbps. This is due to a limitation of the PC system. Fig. 4 shows the switching performance of the Linux bonding when only two wired interfaces were used. During the measurement the interfaces were switched (UP/DOWN) to identify the bonding performance. The duration of the measurement was 250 s and UDP traffic with 1 KB packet size was sent. The measurement starts with eth0 and eth1 in UP status. After 50 s eth0 was set to DOWN, while eth1 was UP. After 100 s eth0 was reset to the UP status to show whether the bonding can reach the former throughput. After 150 s the configuration was changed by setting eth1 into the DOWN status. Fig. 4 contains the result of the measurement with a sending rate of 600 Mbps. The max. throughput of the bonding interface can be observed from the upper curve that shows the throughput result of the Iperf measurement. The throughput for two bonded interfaces is 440 Mbps, using 1.0 KB packet size. Fig. 4 clearly shows the round-robin packet distribution when two interfaces are bonded and in UP status. If one of the wired interfaces is DOWN, the other interface offers the throughput of a single interface. Fig. 4 also shows that after setting both interfaces back to UP status, the former throughput of up to 440 Mbps can be reached again.

Figure 4: Throughput bonding performance of 2 bonded interfaces (throughput [Mbps] over time [s]; curves: bond0, eth0, eth1)

To provide a complete view of the bonding performance, a second measurement was taken to identify the throughput for a single (non-bonded) interface, 2 bonded interfaces and 3 bonded interfaces. Two different UDP packet sizes were measured (1.0 KB and 1.4 KB) to show the increased performance for packets that are closer to the MTU (Maximum Transmission Unit) packet size. The throughput performance of a single link was measured to evaluate the bonding offset (see Analysis of Bonding Performance). The throughput of the single and bonded interfaces can be observed in Fig. 5, which contains the curves for single, 2 bonded and 3 bonded interfaces with different packet sizes. The measurement started with 100 Mbps sending rate and all three scenarios show the same throughput, 100 Mbps. Below the max. throughput of a single interface, which was identified as 320 Mbps for 1.4 KB packet size (313 Mbps for 1.0 KB packet size), the throughput for multiple bonded interfaces is the same as for a single interface. For two bonded interfaces the max. throughput is 451 Mbps for 1.4 KB packet size and decreases down to 432 Mbps for 1.0 KB packet size. The bonding of three wired interfaces reaches a max. throughput of 513 Mbps for 1.4 KB packet size and 457 Mbps for 1.0 KB packet size, respectively. A packet size that is close to the MTU reaches the highest throughput. Moreover, a reduction in packet loss as well as a reduction of out-of-order packets has been observed.

Figure 5: Throughput for single, 2 bonded, and 3 bonded wired interfaces (throughput over sending rate [Mbps]; UDP with 1.0 KB and 1.4 KB packet sizes)

Figure 6: Packet loss ratio for single, 2 bonded and 3 bonded wired interfaces (packet loss ratio [%] over sending rate [Mbps]; UDP with 1.0 KB and 1.4 KB packet sizes)

In Fig. 6 the packet loss ratio is shown for each scenario. In the case where only a single interface is measured, the packet loss is below 1.5% for all sending rates. For 2 bonded interfaces the packet loss increases significantly for high sending rates. At a sending rate of 500 Mbps, the packet loss increases up to 1.4% for 1.4 KB packet size and up to 2.5% for 1.0 KB packet size. For the same sending rate and 3 bonded interfaces the packet loss increases up to 2.0% for 1.4 KB packet size and
up to 4.2% for 1.0 KB packet size. It can be shown that a large UDP packet size (1.4 KB) improves the throughput performance of the wired bonding. A large packet size also reduces the packet loss in comparison to a small packet size. The number of out-of-order packets due to the bonding is shown in Fig. 7. The number of out-of-order packets increases with an increased sending rate. For a sending rate that is near the max. throughput of a bonding, the number of out-of-order packets increases significantly. For 3 bonded interfaces the out-of-order ratio increases up to 18% at a max. of 500 Mbps sending rate and 1.4 KB packet size. From the measurement of out-of-order packets, the assumption can be verified that a large UDP packet size reduces the number of out-of-order packets for wired bonding.

Figure 7: Out-of-order ratio for single, 2 bonded, and 3 bonded wired interfaces (out-of-order ratio [%] over sending rate [Mbps]; UDP with 1.0 KB and 1.4 KB packet sizes)

VI. ANALYSIS OF BONDING PERFORMANCE
The bonding tests have shown that there is a limit in the throughput of bonded interfaces. It has been identified that this limitation is mainly due to the bonding offset. With the following description for the round-robin algorithm the throughput bm can be estimated for two (or more) bonded interfaces (see Fig. 8):

Figure 8: Packet timing for two bonded wired interfaces (transmission times td1, td2 and bonding offsets to1, to2 on eth0 and eth1)

bm(td1, td2, to1, to2, p) = p / (td1 + to1) + p / (td2 + to2)    (1)

The packet size p is the same for each wired link and there is no difference in the throughput of both wired links. In this case the transmission time td for each packet is the same and can be set to

td1 = td2    (2)

It can also be assumed that the bonding offsets for both wired links are the same, since there is no difference observed in the throughput of the links. The bonding offset to can be set to

to1 = to2    (3)

The assumptions for the packet size, the transmission time and the bonding offset lead to the following equation to calculate the throughput for n bonded interfaces (n > 1):

bm(p, n, td, to) = n · p / (td + (n − 1) · to)    (4)

The parameter n is the number of interfaces and td is the transmission time for one Ethernet frame. The bonding offset to can be calculated as follows:

to(p, n, bm, td) = (n · p / bm − td) / (n − 1)    (5)
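Eq. (5) can be evaluated directly against the measured throughputs; a small awk sketch (packet size 1.4 KB, td = 35.0 µs from the 320 Mbps single-link measurement):

```shell
# Bonding offset per Eq. (5): to = (n*p/bm - td) / (n - 1)
# p in bits, bm in Mbps, td in microseconds -> to in microseconds
awk 'BEGIN {
    p  = 1400 * 8      # packet size in bits (1.4 KB)
    td = 35.0          # frame transmission time in us (320 Mbps single link)
    printf "n=2 (451 Mbps): to = %.1f us\n", (2*p/451 - td) / (2 - 1)
    printf "n=3 (513 Mbps): to = %.1f us\n", (3*p/513 - td) / (3 - 1)
}'
```

The two results (14.7 µs and 15.2 µs) match the offsets reported below.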
For two bonded interfaces which provide a max. throughput of 451 Mbps for 1.4 KB packet size (td = 35.0 µs, bs = 320 Mbps for a single interface at 1.4 KB), the bonding offset to is estimated to be 14.7 µs. For three bonded interfaces with a max. throughput of 513 Mbps and 1.4 KB packet size, the bonding offset to is estimated to be 15.2 µs. The result that these values are similar confirms that Eqn. (4) is correct.

VII. REQUIREMENTS FOR BONDING OF WIRELESS INTERFACES
When using Linux bonding for wireless interfaces, the bonding modes require information about the status as well as the speed of the wireless interface. Some of the bonding modes need to change the MAC address of the interfaces. This is provided by most Ethernet drivers, but it is not supported by wireless drivers. In the case of the extended load balancing algorithms (modes 5 and 6), the bonding is not able to identify the speed of the wireless interfaces. In cases when the bonding is not able to identify the speed, it will set the speed to a default value (100 Mbps). This is also the case when the bonding is not able to detect the MII status of the interface. The bonding will use the default status and assume that the interface is always up (MII status = UP). This results in packets being sent to interfaces even when they are not reachable, because the status is always UP. High packet loss will occur in this case when the interface is down but the status cannot be detected by the bonding. To detect the status of wireless interfaces, the ARP monitoring mode can be considered. The ARP mode requires the ARP interval and the target address of the remote host. Multiple ARP targets can be specified and the ARP link monitoring frequency can be set in milliseconds [1]. However, the ARP mode was developed to provide a monitoring tool detecting the link status for LAN interfaces which do not provide an MII implementation. For high-speed LAN connections, ARP requests and replies do not play a significant role and the overhead is negligible.
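The ARP monitoring described above is enabled when the module is loaded; a sketch (the parameter names arp_interval and arp_ip_target are from the bonding HOWTO [1]; interval and target address are illustrative):

```shell
# ARP link monitoring instead of MII monitoring: probe the remote
# host at 10.0.10.2 every 100 ms to derive the UP/DOWN status
modprobe bonding mode=0 arp_interval=100 arp_ip_target=10.0.10.2
```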
Considering the data speed in WLANs and upcoming wireless systems such as WiMAX and wireless mesh networks, ARP requests and replies will result in large overheads, especially for wireless terminals which use wireless ad hoc communication. Moreover, for wireless interfaces the ARP mode might not be a reliable indication of the usability or quality of a link. Other measures such as signal strength may be necessary for effective and reliable bonding of wireless interfaces.

VIII. CONCLUSION AND FUTURE WORK
The interface bonding in Linux systems allows the aggregation of multiple wired interfaces. Throughput improvement as well as load balancing is supported. We showed how the throughput can be increased by concurrently using multiple links compared to using single links. It was pointed out that the interface status detection is an essential part of the Linux bonding which is not currently available for wireless interfaces. Extensions of the Linux bonding will be required to allow it to be used for bonding wireless interfaces, and to achieve features such as throughput improvement, load balancing and fault-tolerance in the concurrent use of multiple wireless links.

ACKNOWLEDGMENT
This research was performed under a research contract of Cognitive Wireless Technology for the Ministry of Internal Affairs and Communications.
REFERENCES
[1] M. Williams, "Linux ethernet bonding driver HOWTO", April 24, 2006.
[2] Interface bonding documentation, http://mikrotik.com/docs/ros/2.9/interface/bonding.pdf
[3] Linux bonding HOWTO (in Japanese language), http://www.linux.or.jp/JF/JFdocs/kernel-docs-2.4/networking/bonding.txt.html
[4] Advanced networking for fli4l (in German language), fli4l project, http://fli4l.de/fileadmin/doc/fli4l-3.0.1/node30.html
[5] H. Adiseshu, G. Parulkar, G. Varghese, "A reliable and scalable striping protocol", Computer Communications Review, Vol. 26, No. 4, pp. 131-141, October 1996.
[6] F. Jacquet, M. Misson, "A method for increasing throughput based on packet striping", 1st European Conference, pp. 375-379, October 2000.
[7] IEEE P802.3ad link aggregation task force, http://grouper.ieee.org/groups/802/3/ad/index.html
[8] M. Seaman, "Link aggregation control protocol scenarios", Rev. 01, August 1998.
[9] Linux channel bonding, sourceforge project, http://sourceforge.net/project/bonding
[10] Iperf version 2.02, http://dast.nlanr.net/Projects/Iperf/
[11] Ethernet card and driver test, test of combinations of ethernet cards and drivers under Linux (RedHat 6.2, kernel 2.2.16), http://www.hpc.sfu.ca/bugaboo/nic-test.html