Preprint 0 (1999) ?-?


Mobile and Multicast IP Services in PACS: System Architecture, Prototype, and Performance



Yongguang Zhang and Bo Ryu, HRL Laboratories, Malibu, CA 90265. E-mail: {ygz,[email protected]}

Traditionally, wireless cellular communication systems have been engineered for voice. With the explosive growth of Internet applications and users, there is an increasing demand for providing Internet services to mobile users based on the voice-oriented cellular networks. However, Internet services place a set of radically different requirements on the cellular wireless networks, because the nature of the communication is very different from voice. It is a challenge to develop an adequate network architecture and the necessary system components to meet those requirements. This paper describes our experience in developing Internet services, in particular mobile and multicast IP services, in PACS (Personal Access Communication Systems). Our major contributions are five-fold: (i) a PACS system architecture that provides wireless Internet and Intranet access by augmenting the voice network with IP routers and backbone links to connect to the Internet; (ii) a simplified design of the RPCU (Radio Port Controller Unit) for easy service maintenance and migration to future IP standards such as IPv6; (iii) native PACS multicast to efficiently support dynamic IP multicast and MBone connectivity; (iv) optimization and incorporation of Mobile IP into the PACS hand-off mechanism to efficiently support roaming within a PACS network as well as global mobility between PACS networks and the Internet; (v) a successful prototype design of the new architecture and services, verified by extensive performance measurements of IP applications. Our design experience and measurement results demonstrate that it is highly feasible to seamlessly integrate PACS networks into the Internet with global IP mobility and IP multicast services.

Keywords: Cellular network, Internet service, Multicast, PACS, Mobile-IP

* This research was funded in part by the Defense Advanced Research Projects Agency (DARPA) under the High Speed Digital Wireless Battlefield Network Technology Reinvestment Project (DWBN TRP). The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the official policies, either expressed or implied, of DARPA or the U.S. Government.


Y Zhang & B Ryu / Mobile and Multicast IP Services in PACS

1. Introduction

Traditionally, cellular mobile and wireless communication systems have been designed and built for voice service. With the explosive growth of Internet applications and users, there is an increasing demand for providing Internet service to mobile users based on the existing cellular systems. In contrast to voice communication, which is connection-oriented, circuit-switched, constant-bit-rate, and intolerant of loss and jitter, Internet service is connectionless, packet-switched, bursty, and often best-effort and loss-tolerant. In addition, some Internet applications, such as videoconferencing using variable-bit-rate coding, demand much higher and often on-demand bandwidth. As we have learned from the CDPD (Cellular Digital Packet Data) system, it is a real challenge to develop a cost-effective network architecture and the necessary system components to meet these different requirements of Internet service on top of the existing infrastructure of voice-oriented cellular networks.

The goal of this research is to define a network architecture and a set of design guidelines for achieving seamless integration of cellular networks with the global Internet by supporting mobile and multicast IP services in cellular networks. We also seek to gain system experience by implementing the proposed system architecture and functions as a prototype, and to verify them via extensive performance measurements. We choose PACS (Personal Access Communication System [1,2]) to achieve this goal since PACS supports a packet-mode data service suitable for the bursty nature of Internet traffic (see Section 2). PACS is an emerging low-tier, low-cost PCS standard for cellular wireless services in densely populated areas. In a PACS network (Figure 1), users obtain services through SU (Subscriber Unit) devices. An SU communicates with an RP (Radio Port) through a TDMA uplink and a TDM downlink. Nearby RPs are controlled by an RPCU (Radio Port Control Unit), which concentrates all traffic from the RPs and connects it to the backbone voice or data networks. User authorization and other related functions are provided by the AM (Access Manager) and the signaling network.

By and large, PACS has so far been developed only as a voice network. Although the standard text does define two data communication modes (circuit-mode and packet-mode), supporting Internet service in PACS has not been defined in detail. In a recent publication [3], the authors suggested that Internet access could be provided through the circuit-mode data service, where users would


Figure 1. PACS Architecture. (SUs, such as cellular phones and mobile computers in cells 1-3, communicate over the air with RPs; the RPCU controls the RPs and connects, via the AM, to the signaling network (e.g., AIN), to the voice network (e.g., PSTN), and to the data network (e.g., the Internet).)

establish a PPP connection to an ISP over a dedicated PACS channel. Because of the fixed bandwidth, this type of access is unscalable and inefficient for Internet applications. Packet-mode data, on the contrary, is more appropriate because the communication is asynchronous and bandwidth allocation is variable, on-demand, and asymmetric. Furthermore, multiple packet-mode data users can share one channel, and multiple slots can be aggregated to provide higher bandwidth.

As part of a recent DARPA program, HRL Laboratories and Hughes Network Systems (HNS) have designed a wireless Internet/Intranet system using the PACS packet-mode data service. It has been implemented and field-tested in an HNS experimental PACS network. The implemented system allows a PACS user to gain wireless Internet access using a prototype packet-mode SU connected to a mobile PC. Most IP applications run as if the PC were a fixed Internet host. The user and the mobile PC can roam within the PACS wireless network or move between PACS networks and the outside Internet using Mobile IP. IP multicast and MBone applications are also seamlessly and efficiently supported using the native PACS multicast. To our knowledge, this is the first operational implementation of wireless Internet access built on top of the PACS packet-mode data service ever reported in the literature.

This paper is organized as follows. Section 2 briefly describes the packet-mode data service as defined in the PACS standard. Section 3 describes the new system architecture we developed for PACS Internet services. Mobility, multicast, and QoS support are discussed in Sections 4, 5, and 6. The prototype implementation is in Section 7, and performance measurements are in Section 8.


2. PACS Packet-Mode Data Service

This section briefly describes the packet-mode data service as defined in the PACS standard [1]. It serves as the fundamental building block for implementing and managing IP services in our PACS Internet service model.

The packet-mode data service of PACS, or PACS Packet Channel (PPC), provides the user with a variable-bandwidth, asynchronous, bandwidth-on-demand, and asymmetric data service at data rates up to 256 kbps. It is based on the frequency-division-duplex, TDMA-uplink and TDM-downlink PACS physical interface, which is common to both circuit-mode and packet-mode services. (Uplink refers to the direction from SU to RPCU, and downlink from RPCU to SU.) The high data rate and variable bandwidth of PPC are well suited to multimedia and the bursty nature of Internet traffic. PPC supports dynamic sharing of bandwidth with the PACS circuit-mode services (voice, circuit-mode data, etc.), allowing PPC to utilize bandwidth that would otherwise sit idle.

Figure 2. PPC Layers and Structures. ((a) layering point of view: network layer over PACS security layer (SL), datalink layer (DL), and physical layer; (b) encapsulation and framing: network-layer packet, SL packet, DL packet with header and checksum, DL fragments, DL segments; (c) airlink structure: 2.5 ms TDM/TDMA frames of 8 slots, 10 bytes per slot.)

From the layering point of view (Figure 2 (a)), PPC consists of three layers: PACS physical layer, datalink layer (DL) and security layer (SL). The PACS physical layer performs coding of TDMA uplink and TDM downlink. Both uplink TDMA and downlink TDM frames are 2.5 msec long. Each frame consists of 8 slots and each slot is 10 bytes long. The task of PPC DL is to provide a reliable


and connectionless communication service to the SL, which includes medium access control (MAC), fragmentation and segmentation, and error detection and correction. The major functions of the SL include handset registration, user authentication, and data encryption. (Detailed descriptions of each layer can be found in references [1,4].)

The PACS standard defines the following encapsulation and framing procedure (see Figure 2 (b)). First, PPC copies each network-layer packet into an "SL packet," with optional payload encryption to prevent eavesdropping over the air. It then encapsulates each SL packet in a "DL packet" with a proper header and checksum. Each DL packet is divided into one or more "DL fragments," and finally each DL fragment is subdivided into "DL segments." Fragmentation serves the high-level medium access function: PPC must assign a slot number (out of the 8 slots) to each DL fragment, and all segments of a fragment must be transmitted in the same slot. Segmentation is to fit the TDM/TDMA airlink structure (Figure 2 (c)). For downlink fragmentation, the maximum fragment size is 576 bytes of data. A larger packet must be fragmented, but each fragment can be transmitted in a different slot in parallel. Uplink fragments may be 256 segments long; therefore all uplink DL packets are sent in a single fragment.

The functional architecture of PPC is shown in Figure 3. The contention function (CF) performs the small subset of DL medium access and acknowledgment procedures that are highly time-critical. The packet data controller unit (PDCU) handles the rest of the DL and SL functions. CF resides in the RP, and PDCU is part of the RPCU.

Figure 3. PPC Functional Architecture. (Subscriber Unit (SU), air interface, Contention Function (CF), and Packet Data Controller Unit (PDCU).)

Each packet-mode SU has a unique Subscriber Identity (SubID), which is used only to authenticate a user during registration. In addition, each active SU also has a transient identifier called the LPTID (Local Packet Terminal ID). It is a one-byte integer specifying the source/destination SU in every uplink/downlink slot over the wireless link. In any cell, an SU will have a unique LPTID. An LPTID is only valid in the current cell, and an SU can have a different LPTID


value in a different cell. LPTIDs are assigned by the PACS network after successful registration and re-assigned after each hand-off. Table 1 shows the current allocation scheme for LPTIDs as defined in the standard [1].

Table 1. The Current LPTID Allocation Scheme

  LPTID value    Use
  0x00           Null
  0x01           Registration message (used before an SU is assigned an LPTID)
  0x02 - 0xEF    Assigned to SUs upon registration or hand-off; this allows up to 238 SUs in each cell
  0xF0 - 0xFD    Reserved for future use
  0xFE           System information (used to broadcast datalink-layer, network-layer, and "system information channel" parameters)
  0xFF           Everybody (used for messages that must be broadcast to all SUs)
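Table 1's partitioning can be captured in a toy per-cell allocator. This is a sketch of the bookkeeping only; in the standard, assignment actually involves the AM (authentication) and the PDCU, as described next.

```python
# Toy per-cell LPTID allocator following Table 1: unicast IDs come from
# 0x02-0xEF, while 0x00, 0x01, 0xFE, 0xFF are fixed and 0xF0-0xFD reserved.
# Class and method names are ours, for illustration.

ASSIGNABLE = range(0x02, 0xF0)   # the 238 assignable per-cell unicast LPTIDs

class Cell:
    def __init__(self):
        self.free = list(ASSIGNABLE)   # LPTIDs not currently in use
        self.assigned = {}             # SubID -> LPTID

    def register(self, sub_id):
        """Assign (or return the existing) LPTID for this SU in this cell."""
        if sub_id in self.assigned:
            return self.assigned[sub_id]
        if not self.free:
            raise RuntimeError("cell full: 238 active SUs")
        lptid = self.free.pop(0)
        self.assigned[sub_id] = lptid
        return lptid

    def deregister(self, sub_id):
        """Release the LPTID when the SU signs off or leaves the cell."""
        self.free.append(self.assigned.pop(sub_id))

cell = Cell()
print(hex(cell.register("SU-1")))   # 0x2, the first assignable value
```

Note that because an LPTID is cell-local, an SU that moves to another cell simply registers with that cell's allocator and may receive a different value.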

Whenever an SU enters the network, it performs a PPC registration. The two major tasks of PPC registration are authentication and LPTID assignment. At the beginning of the registration, the SU sends a registration request message (PACKET_REG_REQ) which includes its SubID (assuming no user anonymity). The AM then authenticates the SU using this SubID. Once the authentication is successful, the PDCU module assigns a new LPTID and sends the registration acknowledgment message (PACKET_REG_ACK) with this LPTID back to the SU. From then on, the SU identifies data destined for it by the LPTID, until it de-registers from the network or moves to a different cell.

The PACS terminology for "hand-off" is ALT (Automatic Link Transfer). ALT takes place when an SU crosses a wireless cell boundary. It begins when the SU detects the degradation of the present physical channel and finds another physical channel with sufficiently high quality. The SU then sends an ALT request message to the new RP. Once the request is accepted, the SU gets an ALT execution message back and a new LPTID for the new cell. Depending on whether the two channels are associated with the same RPCU or not, ALT falls into two categories: intra-RPCU ALT, when the SU moves to an adjacent cell under the same RPCU, and inter-RPCU ALT, when the SU moves to a different RPCU.
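The two ALT categories can be told apart by comparing RPCU addresses, anticipating the classification rule that Section 3.3 attributes to the new PDCU (it examines the Complete Port ID field of PACKET_REG_REQ). The message layout below is a plain dictionary of our own, not the real wire format.

```python
# Illustrative ALT classification: intra-RPCU if the old channel's RPCU is
# the one handling the request, inter-RPCU otherwise. The dict-based message
# representation is an assumption for illustration only.

def classify_alt(reg_req, my_rpcu):
    old_rp, old_rpcu = reg_req["complete_port_id"]   # (old RP, old RPCU)
    return "intra-RPCU" if old_rpcu == my_rpcu else "inter-RPCU"

req = {"complete_port_id": ("RP-3", "RPCU-A")}
print(classify_alt(req, "RPCU-A"))   # intra-RPCU
print(classify_alt(req, "RPCU-B"))   # inter-RPCU
```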

Figure 4. System Architecture for Providing IP Services in PACS Network. (Each RPCU subnet, comprising a group of cells and their RPCU, attaches through an IP router (R) to the PACS Packet Network (Intranet) backbone; border gateways (GW) connect to the Internet backbone. The RPCUs also connect to the PSTN voice network via telco switches (S) and to the signaling network via the AM.)

3. A New System Architecture for PACS Packet Data Network

3.1. PACS Packet Network

The PACS standard defines neither how the cellular network interfaces with the Internet nor how it forwards IP datagrams to and from PACS users. We must first construct a system architecture for PACS Internet services, which is described in Figure 4. On top of an existing PACS voice network we add a new data network, called the PPN (PACS Packet Network), using Internet/Intranet technology. All cells under an RPCU constitute an IP subnet. Each RPCU connects to an IP router. The PPN is an internetwork connecting all IP subnets by the IP routers and backbone links. Border gateways (GW) connect different PPNs (from different PACS network operators) and the global Internet. Each GW also includes firewall and other security functions to protect the PACS network premises and PACS users.

In this architecture, a mobile PC with a packet-mode SU constitutes a legitimate host in the Internet/Intranet with a unique IP address. The SU is a network device that provides the mobile host with a wireless network interface to the Internet through PACS. The PPN becomes a large IP network.

When a user subscribes to PACS IP service from a network operator, the SU is assigned a permanent IP address from a "home" network. When the user connects a PC to the SU, the PC will use this IP address as its host address in accessing the Internet. The "home" network is the RPCU subnet where the user is likely to spend the most time. The service provider records the permanent IP address in its database, where it can later be retrieved by AMs. For each IP datagram sent

Figure 5. System Architecture of SU, RP, RPCU. (Acronyms: PFM, Packet Forwarding Module; PDCU, Packet Data Controller Unit; PPC, PACS Packet Channel; PLF, PACS Physical Layer Function; CF, Contention Function; AM, Access Manager; ART, Address Resolution Table.)

to this IP address, the PPN is responsible for forwarding the packet to the "home" subnet. The corresponding RPCU then delivers it to the target SU. (This assumes the user is currently within the home subnet; we discuss in the next section the case where an SU moves outside the home subnet.) Outgoing IP datagrams from the SU are forwarded by the RPCU to an IP router. The standard unicast routing in the PPN ensures their correct delivery across the PPN and the Internet.

3.2. IP Datagram Forwarding in RPCU

The main function of the RPCU in the PACS Internet service model is to deliver IP datagrams to and from SUs. The RPCU serves the basic Layer-3 to Layer-2 interface functions: address resolution, framing, and medium access. Figure 5 describes the functional architecture of the RPCU.

A key component of the RPCU is the Packet Forwarding Module (PFM). The PFM implements the Layer-3 to Layer-2 address translation function. The PACS Internet service uses the LPTID as the datalink-layer address. To deliver an IP datagram properly, the PFM coordinates with the PDCUs in managing the LPTIDs, since the PFM must know which cell (RP) has the receiver and what LPTID to use. Hence, it must maintain a mapping between the IP address and the tuple (RP, LPTID) for each SU. We call this table the (unicast) address resolution table (ART). (The multicast case will be discussed later.) The ART is


updated during user registration, ALT, and de-registration. Once the entry is found, the PFM passes the RP and LPTID information, along with the IP datagram, to the corresponding PDCU for the PPC functions (Section 2).

IP forwarding in the uplink direction (SU to RPCU) is straightforward. The PDCU receives segments from the RP and reassembles the datalink payload. When it receives a complete IP datagram, it passes it up to the PFM. The PFM first checks whether the datagram is targeted at another SU in the same subnet. If so, the same procedure as described above is used. Otherwise, the PFM forwards it to the PPN router.

The PPC module in the SU serves the same basic Layer-3 to Layer-2 interface functions, including framing and medium access. Address resolution is unnecessary because communication with any other host is always through the RPCU.

3.3. PACS Registration Procedures

Whenever an SU enters the PACS network, it performs a packet data service registration. It does so by sending a registration message to the RPCU right after it obtains a physical channel. The RPCU then passes the message to the AM for user authentication and authorization. At the end of the registration, the AM retrieves the SU's permanent IP address, recorded during service commission, and returns it to the PFM. Afterwards, the PDCU assigns an LPTID for the cell, and the PFM enters the IP-address-to-(RP, LPTID) mapping in the ART.

When an SU crosses a cell boundary, it performs ALT. The new PDCU determines whether it is intra-RPCU or inter-RPCU by examining the Complete Port ID (old RP) field in the PACKET_REG_REQ, which contains the RP and RPCU addresses. If it is inter-RPCU ALT, the AM of the new RPCU notifies that of the previous RPCU to release the LPTID assigned in the previous cell. Finally, when an SU signs off, it performs a de-registration procedure to release the current LPTID. The detailed procedures for registration, de-registration, and ALT are given in Figure 6.
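The registration and forwarding steps of Sections 3.2 and 3.3 fit together as follows; the class and callback names are ours, and the real PFM naturally does far more (authentication via the AM, PPC framing via the PDCU).

```python
# Sketch of PFM bookkeeping and forwarding: registration installs an ART
# entry mapping IP -> (RP, LPTID); forwarding resolves destinations against
# it, delivering over the air locally or handing off to the PPN router.
# All names here are illustrative, not from the PACS standard.

class PFM:
    def __init__(self):
        self.art = {}                     # IP address -> (RP, LPTID)

    def register(self, ip, rp, lptid):    # after AM auth and LPTID assignment
        self.art[ip] = (rp, lptid)

    def deregister(self, ip):
        self.art.pop(ip, None)            # the PDCU releases the LPTID itself

    def forward(self, dst_ip, datagram, to_air, to_router):
        entry = self.art.get(dst_ip)
        if entry:                         # destination SU is in this subnet
            rp, lptid = entry
            to_air(rp, lptid, datagram)   # PDCU handles PPC framing from here
        else:
            to_router(datagram)           # leave the subnet via the PPN router

pfm = PFM()
pfm.register("10.0.1.7", "RP-2", 0x15)    # hypothetical SU address and LPTID
air, wan = [], []
pfm.forward("10.0.1.7", b"hello", lambda *a: air.append(a), wan.append)
pfm.forward("10.9.9.9", b"bye", lambda *a: air.append(a), wan.append)
print(len(air), len(wan))                 # 1 1
```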

4. Mobility Support through Mobile-IP

When an SU performs ALT, in addition to the physical channel transfer and the ALT procedure described in the previous section, the PPN must ensure proper routing in the backbone for subsequent IP datagrams destined for the SU. During intra-RPCU ALT, since the SU remains with the same RPCU and in the

Figure 6. Procedures for PACS Registration, ALT, and De-registration. ((a) registration; (b) intra-RPCU ALT (ALT1); (c) inter-RPCU ALT (ALT2); (d) de-registration. Key information within each message is shown in parentheses.)


same IP subnet, there is no effect on routing in the PPN. Inside the RPCU, the PFM updates the ART, replacing the corresponding entry with a new one containing the new RP number and the new LPTID. For inter-RPCU ALT, however, the process is more complicated: not only must the ART tables in both the old and new RPCU be updated, but routing in the PPN must also be changed so that subsequent IP datagrams arrive at the new RPCU instead. We accomplished this by incorporating Mobile-IP into our system architecture. Mobile-IP is a standard Internet mechanism that allows proper delivery of IP datagrams to a mobile host regardless of the mobile host's current point of attachment to the Internet [5]. To use Mobile-IP, we place an HA (home agent) and an FA (foreign agent) at the IP router associated with each RPCU, and run Mobile-IP client software on the mobile PC.

We also improve the airlink efficiency when using Mobile-IP in PACS. Normally, Mobile-IP client software relies on "agent advertisement," a periodic broadcast message by each FA, to detect the change of IP subnet during hand-off. However, as shown in Figure 7(A), using the same mechanism "as is" in PACS has two problems. First, advertisement messages waste precious airlink bandwidth when there is no hand-off or registration activity. Second, it forces the SU to wait until the next advertisement message arrives, yielding unnecessarily long registration time or hand-off latency. To remedy this, and at the same time preserve the Mobile-IP standard, we place a Mobile-IP Assist Agent (MIAA) in the RPCU. After the RPCU completes an inter-RPCU ALT or a fresh registration procedure, the MIAA immediately sends an agent advertisement message to the new SU (on-demand advertisement). The PDCU may "piggyback" this message on the registration reply message (PACKET_REG_ACK).^1 The result is a saving of one round trip between SU and RPCU, as shown in Figure 7(B).
Further, since every Mobile-IP hand-off activity in PACS is preceded by a PACS registration or inter-RPCU ALT, the periodic agent advertisement becomes unnecessary. We can now safely disable the periodic Mobile-IP agent advertisement at each FA.
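A back-of-the-envelope model makes the round-trip saving concrete. The timing values below are illustrative assumptions of our own, not measurements from the paper.

```python
# Toy latency model for Figure 7: time from PACKET_REG_REQ until the SU can
# start its Mobile-IP registration. In the standard scheme the SU must also
# solicit (or wait for) an agent advertisement; in the optimized scheme the
# advertisement is piggybacked on PACKET_REG_ACK. Timings are assumptions.

def mobileip_handoff_latency(rtt_air, adv_wait, optimized):
    pacs_registration = rtt_air            # REG_REQ / REG_ACK exchange
    if optimized:
        return pacs_registration           # advertisement rides on the ACK
    # Standard behaviour: one more airlink round trip for the solicitation,
    # plus any residual wait for the advertisement itself.
    return pacs_registration + rtt_air + adv_wait

print(mobileip_handoff_latency(0.02, 0.0, optimized=False))  # 0.04
print(mobileip_handoff_latency(0.02, 0.0, optimized=True))   # 0.02
```

Even with an immediate solicited advertisement (zero residual wait), the optimization halves the pre-registration delay; with purely periodic advertisements the saving grows with the advertisement interval.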

5. IP Multicast Support

IP multicast [6] support in PACS is divided into two parts: multicast routing in the PPN, and local multicast forwarding within each RPCU subnet. Multicast routing in the PPN can be achieved efficiently by adopting the same multicast routing

^1 Such "piggybacking" would require an extension to the PACS standard. Currently, only PACKET_REG_REQ messages can piggyback network-layer packets [1].

Figure 7. Improving Airlink Efficiency for Mobile-IP Registration in PACS. ((A) message exchanges during normal Mobile-IP registration; (B) message exchanges during optimized Mobile-IP registration, with the agent advertisement piggybacked on PACKET_REG_ACK.)

protocols used in the Internet/MBone (e.g., DVMRP or PIM). Local multicast forwarding, however, requires additional functions in the RPCU, as described in the following subsections.

5.1. PACS Multicast Scheme

A fundamental requirement for PACS multicast is a link-level multicast addressing scheme. The traditional subnet-wide link-layer addressing scheme (cf. Ethernet) is not applicable in PACS because: (i) PACS has a limited link-layer address space (LPTID) compared to the class D IP addresses; (ii) a PACS subnet is partitioned into many cells, each managing LPTIDs independently. When multicast packets reach an RPCU in whose subnet there exist group members, a PACS-specific multicast mechanism must deliver them only to the members interested in the multicast group. This requires the ability for local mobile hosts to join cell-wide multicast groups and to receive using the cell-wide group addresses assigned to those groups. We thus need a cell-wide scheme: each cell manages cell-specific groups independently. The link-layer address (LPTID) for an IP multicast group will be a PACS "group" address with respect to each cell of the subnet. Furthermore, multicast in PACS must be selective, in the sense that the RPCU forwards only one copy to each cell that has members, not to all cells indiscriminately.

In each cell, there are various ways to deliver multicast over the air interface. Two naive approaches are "multi-unicast," where packets are duplicated and delivered separately to each individual SU, and "PACS broadcast," where multicast data is carried in the broadcast slots (with LPTID 0xFF) and each SU must process them and filter out packets from uninteresting groups. Obviously,


multi-unicast wastes precious airlink resources, and broadcast wastes CPU and battery power on SUs that are not members of the particular group.

A better approach for PACS multicast, which we have adopted, is to extend PPC (PACS Packet Channel, see Section 2) to allow multicast capability in airlink slot allocation. Normally, each downlink slot (except for control messages) is associated with an LPTID specifying the unique target SU. We modify PPC so that certain airlink slots can be marked for a multicast group. We also enhance the SU with the capability to receive not only those slots that are assigned to the SU, but also other slots that are marked for certain groups. This way, all members of the group, and only the members, can process the slots and receive multicast data without the need for duplication or broadcast.

To accomplish this, we have extended the notion of LPTID to include PACS cell-wide multicast groups. In this addressing scheme, an LPTID is called a "multicast" LPTID (m-LPTID) if it is assigned to a PACS multicast group instead of a particular SU. When the RPCU delivers a multicast datagram over the air, it uses the corresponding m-LPTID in the downlink. The SU can set its receive interface (or PLF, see Figure 5) with a list of LPTIDs: the unique LPTID assigned when the SU enters a cell, and optionally one or more m-LPTIDs.

The m-LPTID allocation must be dynamic, because m-LPTIDs share the same address space with the normal, or unicast, LPTIDs. The allocation is, however, different from (unicast) LPTID allocation in two ways. First, an m-LPTID is shared by many SUs in the same group, so it is allocated only when the first group member in a cell requests to join the group. Subsequent requests from other SUs are assigned the same m-LPTID. Likewise, an m-LPTID is released only after all members have left the group in that cell. Second, an m-LPTID can be re-used for more than one multicast group at the same time. This is because the number of available m-LPTIDs is much smaller than the number of possible IP multicast addresses. Each cell can have at most 238 LPTIDs for both unicast and multicast, but the class D IP multicast address space contains a total of 2^28 addresses. While it is unlikely to have more than a few dozen active PACS users in a PACS micro-cell, each user can join as many multicast groups as desired. This could force PPC to deal with more than 238 different multicast groups in each cell. We may then have to reuse m-LPTIDs and map several multicast addresses to one m-LPTID. In this case, the SU is required to reconstruct the datagram received over this m-LPTID, and discard it if it does not belong to a group that the SU subscribes to.
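The per-cell m-LPTID rules above (first join allocates, later joins share, last leave frees, and reuse under exhaustion) can be sketched as follows. The data structures and the particular reuse policy are our own; the text only requires that several groups may share one m-LPTID when the pool runs out.

```python
# Sketch of per-cell m-LPTID management. Names and the round-robin reuse
# policy are illustrative assumptions, not from the PACS standard.

class CellMulticast:
    def __init__(self, pool):
        self.pool = list(pool)       # spare m-LPTIDs (shared with unicast space)
        self.groups = {}             # group address -> (m_lptid, member IP set)
        self.next_reuse = 0

    def join(self, group, member_ip):
        if group in self.groups:                 # subsequent join: share it
            m_lptid, members = self.groups[group]
            members.add(member_ip)
        elif self.pool:                          # first join in cell: allocate
            m_lptid = self.pool.pop(0)
            self.groups[group] = (m_lptid, {member_ip})
        else:                                    # pool exhausted: reuse one;
            used = sorted(m for m, _ in self.groups.values())
            m_lptid = used[self.next_reuse % len(used)]
            self.next_reuse += 1                 # receivers must then filter
            self.groups[group] = (m_lptid, {member_ip})
        return m_lptid

    def leave(self, group, member_ip):
        m_lptid, members = self.groups[group]
        members.discard(member_ip)
        if not members:                          # last member left the cell
            del self.groups[group]
            if all(m != m_lptid for m, _ in self.groups.values()):
                self.pool.append(m_lptid)        # no other group shares it

cell = CellMulticast(pool=[0x20, 0x21])          # hypothetical spare m-LPTIDs
print(hex(cell.join("224.1.1.1", "10.0.1.7")))   # 0x20 (first join allocates)
print(hex(cell.join("224.1.1.1", "10.0.1.8")))   # 0x20 (shared)
print(hex(cell.join("224.5.5.5", "10.0.1.7")))   # 0x21
```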


5.2. Multicast Forwarding in RPCU

The mapping of an IP multicast address to one or more PACS cell-wide group addresses, one per cell, is stored in the PFM's ART along with the unicast address mappings. A multicast ART entry contains a tuple (G, RP, m-LPTID, M). Each multicast entry means that the IP multicast address G has a corresponding PACS group with the given m-LPTID assigned to it in the cell RP. M contains the IP addresses of all the members of this group. For each incoming multicast datagram from the PPN router (downlink direction) or from an SU (uplink direction), the PFM looks up the ART to find all the RPs that have members and the corresponding m-LPTIDs, duplicates the multicast datagram, and transmits it with the respective datalink-layer address (m-LPTID).

5.3. Management of ART for Multicast

Multicast Registration. We amend the PACS standard by adding a new type of PACS registration message (PACKET_REG_REQ) called multicast registration. When a mobile PC requests to join an IP multicast group G, either for the first time or because of ALT, the SU transmits a PACKET_REG_REQ which includes the requested IP multicast address. There are two separate cases: the requested group G is a new group in this cell, or it has already been joined by another member. In the first case, a new m-LPTID must be allocated and mapped to G, and the corresponding entry must be created in the ART. In the second case, G already has an m-LPTID assigned and the corresponding entry exists. In either case, the RPCU returns an LPTID number, but this time it is interpreted as an m-LPTID. During multicast registration, the RPCU invokes AM functions for multicast service authorization. This is illustrated in Fig. 8.

Multicast De-registration. In order to use the airlink bandwidth efficiently, we adopt an explicit multicast de-registration. Since the current PACS standard defines only a single type of de-registration, we add a new type of de-registration for multicast (multicast de-registration). A PACS user performs multicast de-registration only if it leaves a multicast group it has joined while remaining in the same cell (i.e., not as a result of ALT) but is still attached to the network. If the user is leaving the network permanently, the SU performs the regular de-registration (Fig. 6-(d)), during which the RPCU removes the user from all the groups in the ART.

Figure 8. Multicast Registration Procedure. ((a) First join within a cell: the AM authorizes the request, the PFM finds no entry for (G, RP), assigns a new m-LPTID, and creates the ART entry (G, RP, m-LPTID, IP). (b) Subsequent join: the entry exists, so the existing m-LPTID is returned and the member's IP address is added to the member list.)

ALT. PACS multicast hand-off involves two processes during ALT. First, after an SU performs ALT, it must re-join all the IP multicast groups it had joined in the previous cell, because PACS multicast is cell-specific. Second, if it is inter-RPCU ALT, the old RPCU updates its ART by removing this user from all the groups it had joined.

5.4. Group Membership

IP multicast uses a group membership protocol (IGMP) to determine whether there is a member of a particular group in the subnet. Internet routers use this information to determine whether or not traffic for a multicast group should be delivered to the subnet. In our case, the IP router sends a periodic IGMP query message on the link that connects to the RPCU, and expects at least one member to reply with an IGMP report message. Normally, IGMP query messages are multicast to all multicast-capable hosts in the subnet. When one member replies, the reply message is also multicast to the group to suppress the other members' replies (since one reply per group is sufficient). However, using the same scheme in PACS would cause unnecessary overhead, because the RPCU already keeps the multicast mapping information in its ART. For each multicast address that has an entry in the ART, there must be at least one member in the RPCU subnet. Therefore, the RPCU implements an IGMP support module to intercept all IGMP queries from the IP router and respond with IGMP reports generated from the ART. This PACS group membership scheme seamlessly supports IGMP version 2 [7]. When a new


Y Zhang & B Ryu / Mobile and Multicast IP Services in PACS

multicast group is added to the ART, the RPCU sends an unsolicited membership report to the IP router, and when a multicast group is removed from the ART, the RPCU sends out an explicit leave message. Likewise, the PPC module in the SU filters out unsolicited IGMP messages generated by the IP-layer software of the mobile PC, such as the first IGMP join message. In this case, the SU simply invokes the PACS multicast registration procedure and discards the IGMP message.
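The RPCU's IGMP support module can be sketched as a small proxy that answers router queries from its ART, so queries never cross the airlink. The class and method names below are our own illustration, not part of the PACS specification.

```python
class IgmpProxy:
    """Illustrative sketch of the RPCU's IGMP support module.

    The RPCU answers IGMP queries on behalf of all SUs from its ART
    (airlink resolution table), keeping IGMP traffic off the airlink.
    """

    def __init__(self):
        self.art = {}  # multicast group address -> m-LPTID

    def join(self, group, m_lptid):
        """A group gained its first member in this cell: record it and
        emit an unsolicited membership report toward the IP router."""
        if group not in self.art:
            self.art[group] = m_lptid
            return ("report", group)
        return None  # group already known; nothing to send

    def leave(self, group):
        """Last member left the group: drop the ART entry and emit an
        explicit IGMP leave message."""
        if group in self.art:
            del self.art[group]
            return ("leave", group)
        return None

    def on_query(self):
        """Intercept a general IGMP query from the router and answer
        with one report per group currently in the ART."""
        return [("report", g) for g in self.art]


proxy = IgmpProxy()
print(proxy.join("224.1.2.3", m_lptid=7))  # ('report', '224.1.2.3')
print(proxy.on_query())                    # [('report', '224.1.2.3')]
```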

6. Quality of Service Support

Quality of Service (QoS) support in wireless networks is attracting growing interest, but it is difficult to achieve, due in a major part to the unpredictable nature of wireless link quality. In this paper, we briefly outline how different levels of service can be achieved in PACS by employing different fragmentation schemes, packet scheduling (Class-Based Queueing or Weighted Fair Queueing), and ARQ. The goal is to support multiple levels of service, and fairness within each service class, by implementing several packet drop and delay preferences over the downlink, similar to the Differentiated Services effort in the IETF [8].

The first choice is downlink fragmentation. While a downlink DL packet must be divided into DL fragments, there are several strategies. The normal case is called "minimum fragmentation", which produces the smallest number of fragments; fragment boundaries always fall at multiples of 576 bytes (the maximum fragment size). It yields maximum throughput because the overhead (fragmentation headers, etc.) is lowest. Another strategy is called "maximum fragmentation": since each DL fragment can be sent in a separate slot, a DL packet may be divided into 8 smaller fragments for parallel delivery. The entire packet can then arrive sooner and the delay is minimized.

Figure 9 illustrates the performance difference between fragmentation strategies. The data are derived through a numerical analysis under ideal conditions: all slots are cleared from previous transmissions, and we assume no errors, no retransmissions, and no medium-access delay. Other PACS protocol overhead is also ignored, such as control messages, system information, acknowledgments, and MAC and superframe headers. The upper chart shows the airlink propagation delay as a function of the IP packet size. In the maximum fragmentation cases, packets are divided nearly equally (subject to the PACS fragmentation rules [1]) among 4 or 8 slots. The lower

Figure 9. Comparison of Different Fragmentation Strategies. The upper chart plots the minimum airlink latency (0-500 ms) vs. IP datagram size (0-1400 bytes) for minimum fragmentation (single-slot and multi-slot) and maximum fragmentation (up to 4 and up to 8 slots); the lower chart plots the normalized throughput (as a fraction of the raw bandwidth) vs. IP datagram size for the same strategies.

chart shows the normalized throughput, i.e., the size of the IP datagram divided by the total raw bandwidth used to deliver it. The overhead here is framing overhead: the fragment and segment headers, DL and SL checksums, and padding to meet the minimum fragment size (the dips around 576 bytes and 1152 bytes). These charts indicate that the delay can be significantly reduced with less than a 10% increase in framing overhead. Therefore, it is feasible to achieve different levels of service by cleverly manipulating the number of fragments for each service class and transmitting them over multiple slots in parallel. Nevertheless, the actual delay and packet loss will be affected by load fluctuation, i.e., some slots may have more segments queued than others. The fragmentation algorithm must therefore consider the queue lengths so that queueing delay and packet loss do not degrade the end-to-end quality of service.

Another scheme we can employ to achieve different levels of service over the downlink is ARQ. The PACS standard allows the PDCU to selectively enable or disable ACK for each DL packet. For example, for IP datagrams with low drop priority, the PDCU sets the "ACK required" bit in the DL packet header. With this, the SU will acknowledge all properly received segments, allowing the PDCU to selectively retransmit missing or errored segments.
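The two fragmentation strategies can be illustrated with a small model. The 576-byte maximum fragment size comes from the text above; the even-split rule for maximum fragmentation is a simplification of the actual PACS fragmentation rules.

```python
import math

MAX_FRAG = 576  # maximum DL fragment size in bytes (from the PACS airlink)

def min_fragmentation(size):
    """Minimum fragmentation: cut at multiples of 576 bytes, producing
    the fewest fragments (lowest overhead, but serial delivery)."""
    n = math.ceil(size / MAX_FRAG)
    return [MAX_FRAG] * (n - 1) + [size - MAX_FRAG * (n - 1)]

def max_fragmentation(size, slots=8):
    """Maximum fragmentation: divide the packet nearly equally among up
    to `slots` fragments so they can be sent over slots in parallel."""
    n = min(slots, max(1, size))      # at most one fragment per slot
    base, extra = divmod(size, n)
    return [base + 1] * extra + [base] * (n - extra)

# A 1400-byte IP datagram:
print(min_fragmentation(1400))   # [576, 576, 248] -- three serial fragments
print(max_fragmentation(1400))   # eight 175-byte fragments in parallel
```

With maximum fragmentation the airlink delay is governed by the largest fragment rather than the whole packet, which is why the delay curves in Figure 9 drop as more slots are used.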

7. Prototype Implementation

A prototype of the PACS Internet service architecture described in this paper has been implemented at Hughes Network Systems (HNS) and field-tested in an experimental PACS network. It supports simultaneous circuit-voice and packet-data services. The testbed is described in Figure 10. It operates in an FCC-licensed PCS band. The network consists of several cells, base stations (RPs), and one base station controller unit (RPCU). We use the HNS GMH2000 RPCU, originally designed for PCS cellular networks. On the voice service side, the RPCU connects to a standard telephone switch by an E1 link; the telephone switch connects to the PSTN through a central office. The RPCU also relies on its SS7 system to provide AM functions. On the Internet service side, the RPCU connects to an IP router (a Cisco LAN router) by 10 Mbps Ethernet. The IP router connects to an intranet in HNS and to the global Internet through an Internet gateway. Since the router we have does not yet support Mobile-IP, we include a Linux PC on the same Ethernet

Figure 10. Testbed for PACS Internet Services (the RPs connect via T1 links to the HNS GMH2000 RPCU; the RPCU connects via an E1 link to a telephone switch and the PSTN, and via 10base2 Ethernet to the IP router, a Linux PC, the HNS intranet, and the Internet gateway; the SUs, operating in a licensed PCS band, include voice handsets and packet-data SUs attached to laptop PCs)

to run the Mobile-IP protocol. The base stations use the PACS TDM/TDMA radio port (RP). The connection between an RP and the RPCU is through a T1 line. The testbed also uses several prototype subscriber units (SUs), including voice handsets and packet-mode data SUs.

This prototype supports two types of slot configuration. In the single-slot configuration, each SU can use only one slot for downlink and one slot for uplink at any time; the maximum bandwidth is limited to 32 kbps. In the two-slot aggregation configuration, each SU can use up to two slots for downlink and two slots for uplink; the maximum bandwidth is 64 kbps. Multi-slot aggregation using three or more slots is still under development, but even the two-slot configuration has given us good insight into how multi-slot aggregation works, how it affects the PPC implementation, and how it affects the overall performance.

In our architecture, the packet-mode data SU is a network interface for the mobile PC. The PC uses the same IP address as the one assigned to the SU during Internet service commissioning. To simplify the PC-SU interface in this prototype implementation, we use a high-speed serial connection (RS232) and run the PPP protocol between the PC and the SU. The reason for this configuration is


its versatility: virtually all computers (desktop or laptop) have an RS232 interface, and almost all operating systems have adequate software support, saving significant code development. The disadvantage, however, is that the current serial interface has a limited data rate of 115200 bps. This is less than one-half of the full capacity of the PACS airlink (256 kbps). Nevertheless, using RS232 serves our prototyping purpose and still gives us enough data rate to study up to 4-slot aggregation. In the future, we plan to develop a better PC-SU interface using USB or PCMCIA.

The interaction between the PC and the SU is illustrated in Figure 11. The forwarding function in the SU is straightforward: both the PPP and PPC modules are watched for incoming packets. In the uplink direction, the SU extracts IP datagrams from the PPP module and passes them to the PPC module; in the downlink direction, vice versa.

Figure 11. IP Datagram Forwarding in SU (the laptop PC, running Linux, carries applications over TCP/IP and PPP on a serial device; the SU bridges its own PPP and PPC modules between the serial cable and the A-interface)
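The SU forwarding function described above can be sketched with two mock modules; the `Module` class and its `rx`/`tx` fields are our own stand-ins for the prototype's PPP and PPC modules, not actual prototype code.

```python
from collections import deque

class Module:
    """Minimal stand-in for the SU's PPP or PPC module: a receive queue
    of pending datagrams plus a record of datagrams sent out through it."""
    def __init__(self):
        self.rx = deque()   # datagrams waiting to be forwarded
        self.tx = []        # datagrams forwarded out through this module

def forward(ppp, ppc):
    """One pass of the SU forwarding function: drain whichever module
    has datagrams pending and hand each datagram to the other side."""
    while ppp.rx:                   # uplink: PC (PPP) -> airlink (PPC)
        ppc.tx.append(ppp.rx.popleft())
    while ppc.rx:                   # downlink: airlink (PPC) -> PC (PPP)
        ppp.tx.append(ppc.rx.popleft())

ppp, ppc = Module(), Module()
ppp.rx.append(b"uplink-datagram")
ppc.rx.append(b"downlink-datagram")
forward(ppp, ppc)
print(ppc.tx)  # [b'uplink-datagram']
print(ppp.tx)  # [b'downlink-datagram']
```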

We have implemented the maximum fragmentation strategy. In the two-slot downlink case, an IP datagram is divided into two near-equal fragments and sent over two slots in parallel. By default, the downlink ACK scheme is turned off; that is, if a packet arrives at the SU with missing segments, there is no retransmission and the packet is dropped.

Due to hardware limitations and race conditions in the prototype SU, we only support an MTU (Maximum Transmission Unit) of 700 bytes. That is, any IP datagram passing between the RPCU and the SU must be 700 bytes or less. Larger IP datagrams must first be fragmented at the IP layer. This is supported through the standard IP fragmentation mechanism in the laptop PC and in the IP router. Although this MTU is smaller than Ethernet's MTU (1500 bytes), it still meets the IP standard requirement (a minimum MTU of 512 bytes). To enable IP fragmentation, we set the MTU to 700 bytes at both the PPP interface of the laptop PC and the Ethernet interface of the IP router.
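As a quick check of the effect of the 700-byte MTU, standard IPv4 fragmentation carries the 20-byte IP header in every fragment and aligns all but the last fragment's payload on 8-byte boundaries. The sketch below counts the fragments a larger datagram would need (a simplified model ignoring IP options).

```python
def ip_fragments(datagram_len, mtu=700, header_len=20):
    """Number of IPv4 fragments needed to carry a datagram over a link
    with the given MTU (simplified: fixed 20-byte header, no options).

    Every fragment except the last must carry a payload that is a
    multiple of 8 bytes, so the usable payload per fragment is the
    largest multiple of 8 that fits after the header.
    """
    payload = datagram_len - header_len
    per_frag = (mtu - header_len) // 8 * 8   # usable payload per fragment
    return -(-payload // per_frag)           # ceiling division

# An Ethernet-sized 1500-byte datagram over our 700-byte MTU link:
print(ip_fragments(1500))  # 3
```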


8. Performance Study

We have conducted a series of experiments to measure the performance of IP forwarding and TCP applications in our prototype implementation. The experimentation helps us evaluate our design and identify the bottlenecks in the architecture. The measurements reveal the performance characteristics of Internet services in PACS, such as overhead, delay, and throughput. The data can also help capacity planning for future service deployment.

Figure 12 explains the experiment setup. To conduct the experiments, we add a "Test PC" to the testbed. The Test PC runs TCP applications and other test programs that generate or receive instrumented TCP and UDP datagrams. We also add a new Ethernet interface to one of the laptop PCs and attach it to the same Ethernet segment. This interface is set to promiscuous mode and can sniff any instrumented datagrams between the RPCU and the IP router. By comparing the datagrams seen on the Ethernet with the datagrams sent or received at the PPP interface, the laptop PC can evaluate the IP delivery performance in PACS.

Figure 12. Experiment Setup (the laptop PC, with a sniffer Ethernet interface, sits behind the RP downlink/uplink; the GMH2000 RPCU, the IP router, and the Test PC share the 10base2 Ethernet)

8.1. IP Datagram Latency

We first measure the latency of IP delivery in PACS, for both downlink and uplink, unicast and multicast, single-slot and multi-slot. To measure the downlink latency, the Test PC sends unicast or multicast UDP packets to the laptop PC. The laptop PC should receive two copies of each UDP packet: one sniffed from the Ethernet and the other received through PACS and PPP. The difference in arrival time constitutes the downlink latency. To measure the uplink latency, the laptop PC sends unicast or multicast UDP packets to the Test PC through PPP and PACS. It also sniffs the Ethernet for all the packets it just sent. The elapsed time from a packet being sent out the PPP interface to the time it is sniffed on the Ethernet constitutes the uplink latency.
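The downlink-latency computation amounts to pairing each UDP packet's Ethernet-sniff timestamp with its PPP-arrival timestamp. A minimal sketch, assuming (as we did in our tests, though not stated in the protocol) that each packet carries a sequence number in its payload:

```python
def downlink_latencies(ethernet_log, ppp_log):
    """Pair the sniffed and delivered copies of each packet by sequence
    number and return the per-packet latency.

    Each log is a list of (seq, timestamp_seconds) tuples: the Ethernet
    log comes from the sniffer interface, the PPP log from the receiver.
    Packets lost over the airlink simply never appear in the PPP log.
    """
    sniffed = dict(ethernet_log)
    return {seq: t_ppp - sniffed[seq]
            for seq, t_ppp in ppp_log
            if seq in sniffed}

eth = [(1, 10.000), (2, 10.500)]
ppp = [(1, 10.180), (2, 10.695)]
lat = downlink_latencies(eth, ppp)
print(lat)  # per-packet downlink latency in seconds
```

The uplink direction uses the same pairing with the roles of the two logs reversed.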


We vary the IP datagram size from 64 bytes to 700 bytes. (Given the MTU of 700 bytes, sending a bigger datagram would result in IP fragmentation; the latency would then simply be the sum over the two or more fragmented datagrams.) We also vary the sending rate among one packet per second, 8 kbps, 16 kbps, and 32 kbps (two-slot configuration only). We choose sending rates lower than the raw channel bandwidth because we do not want the data skewed by queueing delay caused by congestion. In fact, when we tried sending rates higher than the channel capacity, the results were inconclusive because the delay becomes a function of the buffer size at the RPCU or SU. Further, the packets are spaced evenly to avoid being queued up at the RPCU or SU. Note that, except for the one-packet-per-second case, the packet-sending frequency is inversely proportional to the IP datagram size. Each case is repeated 100 times.

The results are plotted in Figures 13 and 14. Each data point represents one measured latency for one UDP packet. Each chart also plots a component breakdown of the latency. The bottom box in each bar represents the theoretical minimum latency, i.e., the airlink propagation delay; these theoretical numbers are derived from a numerical analysis of the PACS specification. The upper box is the measured latency contributed by PPP. To obtain the PPP numbers, we conducted a similar measurement on a standalone 115200-baud PPP link (c.f. PPP plus PACS in our testbed). These two latency numbers can be added together because a packet must be received entirely from PACS before being forwarded to PPP, and vice versa.

The data show that the performance is close to the theoretical predictions. Since the sending rate is lower than the PACS capacity and the packets are spaced evenly, we can assume there is no queueing delay. The latency therefore consists of airlink propagation, PPP delay, and the processing delay at both the RPCU and the SU. The processing delay is moderate compared to the delay contributed by PPC and PPP. The delay variation is consistent across the 100 data points for all configurations: for most cases it is approximately 50 ms (from minimum to maximum) for downlink and 100 ms for uplink. The results also show that the latency depends neither on the sending rate nor on whether the traffic is unicast or multicast. Finally, the data show that we can improve performance further by eliminating the PPP portion; this can be achieved in the future when we move to USB or PCMCIA interfaces.

Figure 13. IP Forwarding Latency (1 slot): downlink and uplink IP datagram delay (0-500 ms) vs. IP datagram size (64-700 bytes), for unicast and multicast at 1 packet per second, 8 kbps, and 16 kbps, with the PPP and PPC components broken out.

8.2. IP Datagram Throughput

Figure 14. IP Forwarding Latency (2 slots): downlink and uplink IP datagram delay (0-500 ms) vs. IP datagram size (64-700 bytes), for unicast and multicast at 1 packet per second, 8 kbps, 16 kbps, and 32 kbps, with the PPP and PPC components broken out.

We then measure the throughput of IP datagrams in PACS. Again we measure both downlink and uplink, unicast and multicast, single-slot and multi-slot. To measure the downlink throughput, the Test PC sends unicast or multicast UDP packets to the laptop PC at various rates. The laptop PC collects the packets from the PPP interface and measures the receiving rate. It also sniffs the


Ethernet to determine the sending rate. The measurable IP datagram throughput is the maximum receiving rate that can be achieved. The measurement for uplink throughput is similar, except that the UDP packets are sent from the laptop PC to the Test PC. The IP datagram size is again varied from 64 bytes to 700 bytes. We also vary the sending rate by varying the inter-packet gap (how long the sender waits between sending two packets); the sending rate is roughly the datagram size divided by the inter-packet gap. We vary the inter-packet gap from 10 ms to 200 ms. Each run consists of 100 UDP datagrams, and each run is repeated 10 times.

We also measure the downlink throughput under bursty traffic. Instead of evenly pacing UDP packets, we burst out UDP packets as fast as possible from the Test PC and measure the receiving rate seen by the laptop. Since the sending rate exceeds the channel capacity, the queues at the RPCU grow and eventually the RPCU begins dropping packets.

The results are plotted in Figures 15 and 16. We plot only the unicast case because the multicast data are almost identical. Each dot in the charts represents one measured throughput number. The data show the following maximum throughput that our testbed can achieve: two-slot downlink, 52 kbps; two-slot uplink, 50 kbps; one-slot downlink, 27 kbps; and one-slot uplink, 25 kbps. These are very close to the theoretical maximum throughput calculated from framing analysis. The graphs also show that when the sending rate is significantly lower than the channel capacity, the receiving rate equals the sending rate; but when the sending rate exceeds some threshold, the receiving rate begins to fall off, approaching the maximum measurable receiving rate, i.e., the throughput of IP forwarding.
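The relation between inter-packet gap and offered rate used above is simple to compute; a quick sketch, ignoring the sender's own serialization time, as in the text's approximation:

```python
def sending_rate_kbps(datagram_bytes, gap_ms):
    """Approximate offered rate when fixed-size packets are paced with a
    fixed inter-packet gap: packet size divided by the gap.

    Bits divided by milliseconds gives kilobits per second directly.
    """
    return datagram_bytes * 8 / gap_ms

# 700-byte datagrams every 200 ms offer 28 kbps, slightly above the
# measured one-slot downlink maximum of 27 kbps:
print(sending_rate_kbps(700, 200))  # 28.0
```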
As we have explained in an earlier section, the dips in the downlink throughput around 576 bytes are caused by fragmentation and the minimum fragment size requirement: the second fragment must be at least 8 segments long, carrying an overhead as high as 79 bytes.

We also attempt to measure the uplink throughput under bursty traffic, but the result is inconclusive. This is caused by the limited buffer space and processing power in the prototype SU. The prototype SU was originally designed for voice service; we converted it into a packet-mode data SU by adding the PPC module. Apparently, the interrupt handler and memory management designed for constant-bit-rate voice are not suitable for bursty data. When the sending rate exceeds the channel capacity, the SU's interrupt handler and buffer management do not handle it well, resulting in significant packet drops. We believe this can be remedied by better flow control and traffic-shaping functions, as well as a faster processor and additional memory.

Figure 15. IP Datagram Throughput (1 slot): downlink and uplink receiving rate (0-60 kbps) vs. IP datagram size (64-700 bytes), for bursty traffic (downlink) and inter-packet gaps from 10 ms to 200 ms, with the theoretical maximum and the sending rate shown for comparison.

Figure 16. IP Datagram Throughput (2 slots): downlink and uplink receiving rate (0-60 kbps) vs. IP datagram size (64-700 bytes), for bursty traffic (downlink) and inter-packet gaps from 10 ms to 100 ms, with the theoretical maximum and the sending rate shown for comparison.

8.3. TCP Throughput

We measure the throughput of TCP connections over PACS, for the two-slot case only. The laptop PC opens an FTP connection to the Test PC and fetches a 64 Kbyte file (downlink) or uploads a 64 Kbyte file (uplink).


We record the transmission time and calculate the throughput of the TCP connections. We vary only TCP's MSS (Maximum Segment Size) parameter. TCP slices the data stream into TCP segments and transports each segment as an individual IP datagram; the MSS thus determines the IP datagram size (IP datagram size equals TCP segment size plus 40 bytes of TCP and IP headers). Each data transfer is repeated 25 times.

The results are plotted in Figure 17. Each data point is the result of one file transfer; the line connects the averages of the 25 file transfers for each MSS value. The theoretical limit is calculated by subtracting the 40-byte header from the theoretical maximum throughput for IP datagrams. The maximum measurable downlink TCP throughput is around 49 kbps. It is lower than the theoretical number because the theoretical number does not account for TCP's flow control, including the initial three-way handshake, slow start, and the congestion avoidance mechanisms that periodically halve the sender's TCP window. The maximum uplink TCP throughput is around 39 kbps; in addition to the same factors, the buffer management problem mentioned before also contributes to the lower uplink throughput. In summary, our testbed can deliver TCP performance close to optimal when large MSS values are used.
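The theoretical TCP limit above can be written as the IP-level throughput scaled by the payload fraction of each datagram. A small sketch, using the measured two-slot downlink IP throughput of 52 kbps from the previous subsection as the input:

```python
def tcp_limit_kbps(ip_throughput_kbps, mss, header_bytes=40):
    """Upper bound on TCP goodput: the IP-level throughput scaled by
    the fraction of each datagram that is payload, i.e. MSS out of
    MSS + 40 bytes of TCP and IP headers."""
    return ip_throughput_kbps * mss / (mss + header_bytes)

# Two-slot downlink (52 kbps IP throughput) at two MSS settings:
print(round(tcp_limit_kbps(52, 660), 1))  # 49.0 -- large MSS, low overhead
print(round(tcp_limit_kbps(52, 216), 1))  # 43.9 -- small MSS, more overhead
```

The large-MSS bound of roughly 49 kbps matches the maximum measured downlink TCP throughput, which is consistent with our observation that performance approaches optimal at large MSS values.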

Figure 17. TCP Throughput (2 slots): downlink and uplink TCP throughput (0-60 kbps) vs. TCP MSS (216-660 bytes), showing individual data points, their averages, and the theoretical limit.

9. Related Work

Two notable related systems are CDPD [10] for AMPS and GPRS [11,12] for GSM. CDPD works by sharing unused voice channels for data service, but the CDPD network is organized by various proprietary protocols. Compared with our Mobile-IP approach to handling mobility, CDPD has the advantage of higher airlink efficiency, but it limits itself to a particular physical system. Besides, multicast delivery in a CDPD network is inefficient, and groups must be statically configured. GPRS (General Packet Radio Service) is a newly proposed standard for packet data services in GSM digital cellular networks. The system architecture of GPRS is similar to our PACS Internet service model, although it is more complicated. Nevertheless, it has not been implemented, and we have yet to see empirical results on its performance.

We have not completed the performance study for the medium access protocols in PACS, because we have only implemented two-slot aggregation in our prototype. A measurement on this prototype may not be meaningful for full eight-slot aggregation, because the effectiveness and fairness of medium access protocols depend on the number of SUs and the number of slots available. However, a simulation study on PACS has been reported elsewhere [9]; it included an analysis of medium access protocols and showed that the medium access protocol in PACS could be effective and fair to all SUs.


10. Conclusion

The major contributions of this paper can be summarized as follows. First, we have developed a system architecture for Internet/intranet access in PACS with full support for mobile users and multicast applications. Second, the PFM design provides a clean layering solution and hides all PACS datalink details from the Internet software in the PPN, resulting in easy integration. Third, we have incorporated Mobile-IP into inter-RPCU ALT and improved the airlink efficiency of Mobile-IP. Fourth, we have extended the PACS datalink layer to support efficient multicast through the m-LPTID. And finally, we have demonstrated the architecture with a prototype implementation and extensive performance measurements. All of this design experience and these measurement results indicate that it is highly feasible to seamlessly integrate PACS networks into the Internet with global IP mobility and IP multicast services.

There are significant advantages to adopting generic internetworking mechanisms and using standard IP routers in constructing a cellular wireless data network. It leverages existing IP techniques and commercially available products. It requires no changes to the mobile computer software at the network or upper layers. New IP applications or service models can be provisioned rapidly without significant changes to the existing network infrastructure. Upgrades to future IP standards like IPv6 will be smooth. The same concepts and architecture can apply to the 3rd-generation mobile wireless networks (e.g., UMTS/IMT-2000) that are on the horizon.

Acknowledgment

Credit belongs to the PACS team at Hughes Network Systems for a high-quality implementation of the PACS Internet service. We are also grateful to Tayyab Khan, Stan Kay, Victor Liau, and Sivakumar Kailas of Hughes Network Systems, and to Roy Axford of U.S. Navy SPAWAR, for valuable comments during the course of this work.

References

[1] ANSI J-STD-014. Personal Access Communications Systems, 1995.
[2] A. Noerpel, Y-B. Lin, and H. Sherry. PACS: Personal Access Communication System, A Tutorial. IEEE Personal Communications, 3(3):32-43, 1996.


[3] V. Varma, P. Roder, M. Ulema, and D. Harasty. Architecture for Interworking Data over PCS. IEEE Communications Magazine, pages 124-130, September 1996.
[4] J. Smolinske et al. A 512 Kbps High Capacity Packet Mode Data Protocol for the Broadband PCS Spectrum. In Proceedings of the IEEE Vehicular Technology Conference, 1996.
[5] C. Perkins. Mobile IP: Design Principles and Practices. Addison-Wesley, 1998.
[6] S. Deering and D. Cheriton. Multicast Routing in Datagram Internetworks and Extended LANs. ACM Transactions on Computer Systems, 8(2):85-110, May 1990.
[7] W. Fenner. Internet Group Management Protocol, Version 2. IETF RFC 2236, November 1997.
[8] IETF Differentiated Services Working Group. http://www.ietf.org/html.charters/diffserv-charter.html
[9] Y. Hasimoto, B. Sarikaya, and U. Mehmet. Multimedia Communication in Cellular PACS Network. In Proceedings of the ACM MOBICOM'97 Conference, Budapest, Hungary, 1997.
[10] CDPD System Specifications, Release 1.1, 1995.
[11] G. Brasche and B. Walke. Concepts, Services, and Protocols of the New GSM Phase 2+ General Packet Radio Service. IEEE Communications Magazine, pages 94-104, August 1997.
[12] J. Cai and D. Goodman. General Packet Radio Service in GSM. IEEE Communications Magazine, pages 112-131, October 1997.