Evaluating Wireless Architectures for GDS Applications

Proceedings of the 37th Hawaii International Conference on System Sciences - 2004

Hesham Ali 1, Gert-Jan de Vreede 1,2, Karthik Ramachandra 1, Elwalid Sidahmed 1, Hiranmayi Sreenivas 1

1 University of Nebraska at Omaha, College of Information Science & Technology, phone: +1.402.554-4901, fax: +1.402.554-3400, [email protected], [email protected]
2 Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands

Abstract
While wireless data communication technology promises to become a primary communication medium in the 21st century, skepticism about its reliability and performance has so far prevented wireless networks from replacing traditional networks in critical or demanding applications. In this paper, we explore the use of wireless environments in various applications related to Group Decision Support (GDS) Systems. In particular, we examine the impact of different network architectures on the performance measures of GDS applications. Our experiments show that while the Quality of Service parameters of wireless networks still do not match those of wired networks, the performance of wireless environments is acceptable for several GDS applications. We show that selecting the proper wireless architecture has a significant impact on the performance of the applications. In particular, the peer-to-peer architecture provided far better performance than the other wireless architectures.

1. Introduction
In labs around the world, researchers have been working tirelessly to create technologies that will change the way we conduct business and live our lives. Some of these technologies could soon transform computing, medicine, manufacturing, transportation, and other major industries. Mobile computing and wireless networking is clearly one of them. Due to advances in technology, portable computing devices have become more widely available while offering greater computing power. From smart phones and personal digital assistants running embedded operating systems to portable computers running full-blown desktop operating systems, these devices increasingly provide communication capabilities that utilize wireless connections. With those communication capabilities firmly established, the next logical step is toward greater interaction between users equipped with such devices while achieving higher degrees of stability and performance.

Group Decision Support (GDS) applications provide an excellent example of such a problem. GDS applications represent a partnership between traditional group process methodologies and computer technology. Technology support for traditional group process methodologies has given rise to a particular class of Group Decision Support Systems (GDSS) that concentrates on face-to-face interactions of individuals in groups. Research has shown that under certain circumstances, GDSS increase the effectiveness and performance of group decision making in face-to-face settings [Fjermestad and Hiltz 1999, 2001]. GDSS are most productive when effective structured group process methods are appropriately supported by computer technology. Since such technology is making big strides in the wireless world, our aim is to learn how wireless technology can impact GDSS. This paper reports on an early study in that direction. Our goal is to design experiments that show how GDS applications can make full use of emerging technologies.

GDSS use technology to support problem solving in group decision situations, thereby improving the performance and effectiveness of the group. These systems are aimed at removing common communication barriers and at facilitating meetings, vote solicitation and compilation, anonymous input of ideas and preferences, and electronic message exchange between members of a group gathering [Nunamaker et al. 1997]. They provide decision modeling and group decision techniques aimed at reducing the uncertainty and 'noise' that occur in the group's decision process. These interactive, synchronous group applications are very demanding, particularly on bandwidth and data transfer speed, and they are also very sensitive to latency. Hence, the technology used to support these applications needs to satisfy these requirements. As groupware/GDS applications become more mobile and more ubiquitous, the demands on high-quality communication support systems increase. Technologies characterized by high delays, low reliability, and long waiting times are not suited to these applications. It is our endeavor to find the best technology match for these applications, while keeping the focus on wireless technologies.

While the applications could benefit significantly from the flexibility and agility of wireless systems, Quality of Service (QoS) parameters should not be sacrificed, given the interactive nature of such applications [Chien et al. 1999; Ramachandra and Ali 2004; Seih 2001]. Emerging wireless technologies such as 802.11a/b/g promise to provide a robust and fast communication medium [Cali et al. 2000; Chhaya and Gupta 1997]. This research aims at using these technologies in a manner that provides significant improvements in the effectiveness of GDS. The attempt actually stemmed from a requirement at one of our GDS labs. All the group meetings held at the lab were facilitated by GroupSystems software running on 15 to 20 laptops. The group moderator was confined to using a wired network to connect these laptops, and this network architecture had an initial setup time of 60 minutes. We propose to use 802.11b to solve this problem.

This paper investigates and compares different network architectures in terms of quality of service for synchronous electronic meeting support with GDSS. It investigates the performance under different usage scenarios in different network configurations. The main contribution of this work is that it helps organizations and professional GDS meeting facilitators make an informed decision about the network configuration for GDS applications based on the required quality of service.

The remainder of this paper is structured as follows. In Section 2, we present the various network configurations used in our experiments. Section 3 covers the setup details of the experiments. The results of the experiments and the associated analysis are covered in Section 4. The overall conclusions are summarized in Section 5.

2. Network configurations
In our study we investigated and compared three different network configurations: a wired architecture, an ad-hoc wireless architecture, and a wireless access point network. Each configuration is introduced below.

2.1 Wired architecture
The first of the three architectures that we evaluated was the wired architecture, representing the typical LAN environment. Using a hub, all 16 computers (15 participant stations and 1 chauffeur station from which the GroupSystems software is operated) were connected to the server with Ethernet cables. It was a typical client-server architecture, in which the clients connected to the server via "wired" links. The configuration is visualized in figure 1.

Figure 1. Clients in wired architecture along with lead and server machines.

2.2 Wireless Ad Hoc architecture
The second architecture was ad hoc, or "peer to peer". In ad-hoc networks, computers are brought together to form a network "on the fly". There is no structure to the network and there are no fixed points; usually every node is able to communicate with every other node. This topology is shown in figure 2. A Basic Service Set (BSS) consists of two or more wireless nodes, or stations (STAs), which have recognized each other and have established communications. In the most basic form, stations communicate directly with each other on a peer-to-peer level, sharing a given cell coverage area. This type of network is often formed on a temporary basis and is commonly referred to as an ad-hoc network, or Independent Basic Service Set (IBSS).

Figure 2. Clients in a wireless ad-hoc architecture along with lead and server machines.

2.3 Wireless access point network
The third architecture used another way to configure a network: infrastructure mode. This architecture uses fixed network access points with which mobile nodes can communicate. An 802.11b LAN is based on a cellular architecture in which the system is subdivided into cells, where each cell (called a Basic Service Set, or BSS, in the 802.11b nomenclature) is controlled by a base station called an Access Point (AP), see figure 3. The AP is analogous to a base station in a cellular phone network. When an AP is present, stations do not communicate on a peer-to-peer basis; all communications between stations, or between a station and a wired network client, go through the AP. APs are not mobile and form part of the wired network infrastructure. A BSS in this configuration is said to be operating in infrastructure mode.

Figure 3. Clients in a wireless infrastructure architecture along with lead and server machines.
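To make the difference between the two wireless modes concrete, the short Python sketch below counts the wireless transmissions needed to carry a single submission from one station to the server and on to every other station under each mode. This is our own back-of-envelope illustration, not a model from the study; the station count and the assumption that every frame costs the same airtime are ours.

def frames_ad_hoc(n_clients: int) -> int:
    """Ad-hoc (IBSS) mode: stations transmit to each other directly, so the
    sender reaches the server in one frame and the server reaches each of the
    remaining clients in one frame."""
    return 1 + (n_clients - 1)

def frames_infrastructure(n_clients: int) -> int:
    """Infrastructure mode: every frame between stations is relayed by the
    access point, so each logical hop costs two transmissions on the shared
    channel."""
    return 2 * (1 + (n_clients - 1))

if __name__ == "__main__":
    clients = 16  # 15 participant stations plus the chauffeur station
    print("ad hoc:        ", frames_ad_hoc(clients), "transmissions per submission")
    print("infrastructure:", frames_infrastructure(clients), "transmissions per submission")

Under these assumptions, infrastructure mode puts roughly twice as many frames on the shared channel per submission, which is consistent with the access point saturation observed in the results below.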

3. Set up of the experiments

Below we describe the design of the experiments in terms of the hardware and software used, the test data, and the procedures.

3.1 Hardware
All tests were performed in the GDS lab at the University of Nebraska at Omaha. The GDS facility consisted of 17 IBM laptops positioned on a U-shaped table, with the chauffeur station in front and the server in the middle. Each laptop had the following specification:
• PIII processor, 1.0 GHz
• 261,616 KB RAM
• Windows 2000 Professional
• Lucent PC24E-H-FC wireless card (for the wireless tests)
Three different network architectures were used: wired, wireless peer-to-peer, and wireless access point (see section 2 for more details).

3.2 Software
The GDS application used was GroupSystems Workgroup Edition 3.4 by GroupSystems.com. This software consists of seven modules that support groups' divergence, convergence, organization, evaluation, and consensus-building tasks. We used two modules in the experiments:
• GroupOutliner. With this module, groups can build and modify tree structures of concepts (ideas) and comment on them. Of the four modules that support brainstorming in GroupSystems (Electronic Brainstorm, Categorizer, Topic Commenter, and GroupOutliner), it allows for the most complex and elaborate structures.

• Vote. With this module, groups can express their opinion on a list of issues using a variety of voting methods, such as a 10-point scale, rank order, Yes/No, or multiple selection. This tool is often used in GDSS meetings to enable participants to make estimates, explore levels of consensus, or make decisions.
The chauffeur station in the GroupSystems setup controls all the participant stations. Modules such as GroupOutliner and Vote are started from the chauffeur station; in other words, the chauffeur station leads the whole set. When participants submit ideas to the system, the data is sent from their station to the server. From the server it is distributed to all other stations, including the chauffeur station. Votes, in contrast, are sent only to the server and the chauffeur station, which can display the aggregated results on a central screen. An impression of the system in action is given in figure 4.

Figure 4. A (wired) GroupSystems session in progress.
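The two routing rules described above are simple enough to capture in a few lines. The Python sketch below is our own illustration of those flows (it is not GroupSystems code, and the class and field names are hypothetical): idea submissions are redistributed to every station, while votes are retained by the server and surfaced only at the chauffeur station.

from dataclasses import dataclass, field

@dataclass
class Station:
    name: str
    inbox: list = field(default_factory=list)

@dataclass
class Server:
    chauffeur: Station
    participants: list
    votes: list = field(default_factory=list)

    def submit_idea(self, sender: Station, idea: str) -> None:
        # Ideas are redistributed to all other stations, including the chauffeur.
        for station in self.participants + [self.chauffeur]:
            if station is not sender:
                station.inbox.append(("idea", idea))

    def cast_vote(self, sender: Station, ballot: dict) -> None:
        # Votes stay on the server; only the chauffeur sees the aggregated results.
        self.votes.append((sender.name, ballot))
        self.chauffeur.inbox.append(("vote tally", len(self.votes)))

chauffeur = Station("chauffeur")
participants = [Station(f"participant-{i}") for i in range(1, 16)]
server = Server(chauffeur, participants)
server.submit_idea(participants[0], "add a wireless option to the lab")
server.cast_vote(participants[1], {"item 1": 8})
print(len(participants[4].inbox), len(chauffeur.inbox))  # 1 idea; 1 idea + 1 tally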

3.3 Test data
To test the performance of the different network architectures, we prepared different data loads for GroupSystems to work with. These data loads represent increasing amounts of data in the structure that the system has to manage and maintain. A data load can, for example, be a tree structure consisting of all graduate courses as main topics and strengths, weaknesses, and action items as leaves for each course. GroupSystems has to send that data load to the participant stations when the chauffeur starts them up and update it when additions or changes to the structure are made by the participants.

Based on informal discussions with a number of experienced GroupSystems facilitators, we decided to investigate the following data loads:
• For the GroupOutliner module:
• Load 1: 0 concepts in place (an empty tree).
• Load 2: 50 concepts in place.
• Load 3: 100 concepts in place.
• Load 4: 250 concepts in place.
• Load 5: 500 concepts in place.
• For the Vote module:
• Load 1: 50 vote items in place.
• Load 2: 100 vote items in place.
• Load 3: 150 vote items in place.
• Load 4: 250 vote items in place.
• Load 5: 500 vote items in place.
GroupSystems facilitators regularly face situations in which loads 1-3 for both the GroupOutliner and Vote modules are very common. Loads as high as 4 and 5 are rarer, yet they do occur every now and then, so we decided to test them as well.
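Data loads of these sizes are easy to reproduce. The sketch below is a hypothetical helper of our own (not part of the study's tooling) that builds a GroupOutliner-style tree with a given number of concepts, following the courses/strengths/weaknesses example above, and a flat Vote list of a given length.

def build_outline_load(n_concepts: int) -> dict:
    """Build a {topic: [leaf, ...]} tree containing n_concepts nodes in total,
    e.g. a course as the topic with strengths, weaknesses, and action items
    as its leaves."""
    tree = {}
    count = 0
    topic_id = 0
    leaf_labels = ("strength", "weakness", "action item")
    while count < n_concepts:
        topic_id += 1
        topic = f"Course {topic_id}"
        tree[topic] = []
        count += 1
        for label in leaf_labels:
            if count >= n_concepts:
                break
            tree[topic].append(f"{label} for {topic}")
            count += 1
    return tree

def build_vote_load(n_items: int) -> list:
    """Build a flat list of n_items ballot items for the Vote module."""
    return [f"Vote item {i}" for i in range(1, n_items + 1)]

outliner_loads = {size: build_outline_load(size) for size in (0, 50, 100, 250, 500)}
vote_loads = {size: build_vote_load(size) for size in (50, 100, 150, 250, 500)}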

3.4 Procedures and Measurement
With each load and each module, two types of measurements were made:
• GroupOutliner:
• Start-up time. This is the time that elapses between the chauffeur station giving the command to start up all participant stations and the last of the participant stations showing the start-up load. This is an important time to measure, as very long start-up times in GSS sessions are often perceived as unacceptable by the participants.
• Submission refresh. This is the time that elapses between one participant station submitting new information to the structure and all the other stations displaying the updated structure. This is an important time to measure, as long submission refresh times mean that the participants cannot have a fluent discussion on the system; it is comparable to talking over satellite phones with long delays.
• Vote:
• Start-up time. This is similar to the start-up time in GroupOutliner. It is the time that elapses between the chauffeur station giving the command to start up all participant stations and the last of the participant stations showing the start-up load (the voting list).
• Vote casting rate. This is the time that elapses from when all participant stations simultaneously send their votes to the server station until the aggregated results are displayed on the chauffeur station / central screen. This is an important time to measure, as it basically states how long it takes for the group to see their voting results.

Each load was run 10 times for each module and each type of measurement. Each of the four measurements was made with a hand-clocked computer timing utility running on a separate laptop that was not part of the network configuration. Unfortunately, it was not possible to get automatic time readings from a computer log, as GroupSystems does not collect such data. To gain fluency with this method, we piloted a number of timings until we felt comfortable that we achieved consistent readings in terms of reaction times. In other words, we made sure that the reaction time for operating the clocking device was similar across all experiments.
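The timing utility itself is not described here, so the sketch below is our own reconstruction of a minimal hand-operated timer of this kind: the observer presses Enter when the chauffeur issues the command and again when the last station has finished, and the elapsed time is appended to a CSV file for later averaging. The file name and run labels are placeholders.

import csv
import time

def time_one_run(label: str, logfile: str = "timings.csv") -> float:
    """Hand-operated stopwatch: two Enter presses bracket the measured event."""
    input(f"[{label}] press Enter when the chauffeur issues the command... ")
    start = time.perf_counter()
    input(f"[{label}] press Enter when the last station has finished... ")
    elapsed = time.perf_counter() - start
    with open(logfile, "a", newline="") as f:
        csv.writer(f).writerow([label, f"{elapsed:.2f}"])
    return elapsed

if __name__ == "__main__":
    # Example: ten repetitions of the GroupOutliner start-up measurement at load 3.
    for run in range(1, 11):
        print(f"run {run}: {time_one_run('GroupOutliner start-up, load 3'):.2f} s")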

4. Results
This section presents the results of the experiment for each of the three architectures.

4.1 Wired Architecture
The average startup times are quick, ranging from about 4 seconds for the minimum load (0) to about 18 seconds for the maximum load (10) for the GroupOutliner tool, and from about 5 seconds for the minimum load (0) to about 11 seconds for the maximum load (10) for the Vote tool (see Figure 5; GO stands for GroupOutliner). The average submission refresh rate for GroupOutliner does not vary much when the network load is changed, as shown in Figure 6; it stays close to about 2.5 seconds across all loads. On the other hand, the average cast vote rate for the Vote tool increases when the network load is increased, as shown in Figure 7, ranging from about 4 seconds for the minimum load to about 40 seconds for the maximum load.

4.2 Wireless Access Point Architecture
Here, the startup times are the slowest amongst the three architectures (wired, wireless access point, and wireless peer-to-peer), as shown in Figure 5. The average startup time ranges from about 42 seconds for the minimum load to about 15 minutes for the maximum load for the GroupOutliner tool. This could be attributed to saturation of the access point when subjected to increased traffic from 15 client machines simultaneously. For the Vote tool, the average startup time ranges from about 1 minute for the minimum load to about 6 minutes for the maximum load. The Vote tool therefore exhibits better performance with respect to startup time for large network loads.

The average submission refresh rate for GroupOutliner is fairly constant, as in the wired architecture, and stays close to about 6 seconds across all loads, as shown in Figure 6. As observed in the wired architecture, the average cast vote rate for the Vote tool increases when the network load is increased, ranging from about 2 minutes for the minimum load to about 13 minutes for the maximum load.

4.3 Wireless Peer-to-peer Architecture
In this architecture, the startup times are quicker on average than in the wireless access point architecture but slower than those measured in the wired architecture, as shown in Figure 5. The average startup time ranges from about 23 seconds for the minimum load to about 6 minutes for the maximum load for the GroupOutliner tool, and from about 31 seconds for the minimum load to about 2 minutes for the maximum load for the Vote tool. As in the wireless access point architecture, the Vote tool exhibits better performance with respect to startup times, particularly when the network load is increased. The average submission refresh rate for GroupOutliner follows a trend similar to the submission rates for the wired and wireless access point architectures; it stays fairly constant, between 3 and 4 seconds on average. For the Vote tool, the average cast vote rate increases with an increase in network load, as shown in Figure 7. This is similar to the trends observed in the wired and wireless access point architectures. The average cast vote rate ranges from about 51 seconds for the minimum load to about 7 minutes for the maximum load.

Figure 5 – Average Startup Time (seconds)


Figure 6 – Average Submission Refresh Rates (seconds) for Group Outliner tool


Figure 7 – Average Cast Vote Rates (seconds) for Vote tool

5. Conclusion
While wireless technology is moving rapidly to replace wired networks in various applications, wired environments remain the networks of choice for time-critical applications. With new advances in wireless technology, however, it is essential to test the performance of various wireless architectures in domain-specific applications and to study the impact of choosing the right wireless architecture on the performance of these applications. In this paper, we tested the performance of several GDS applications (GroupSystems' GroupOutliner and Vote modules) in various wireless environments and showed that, by choosing the right wireless architecture, these applications can run effectively in wireless environments with acceptable performance parameters. Our experiments show that while the Quality of Service parameters of wireless networks still do not match those of wired networks, the performance of wireless environments is acceptable for the GDS applications used. We show that selecting the proper wireless architecture has a significant impact on the performance of the applications. In all the tested scenarios, the users of the GDS applications were not disrupted. The peer-to-peer architecture provided the best performance among the wireless architectures and is therefore recommended for GDS applications. The results help meeting facilitators make informed decisions about network delays, which will make them more effective. The results obtained for the different scenarios will help them make the adaptations required for the smooth execution of GDS applications.

The conducted study has two limitations. First, the results are only valid for the specific GDS applications used in the study. Second, they have not been tested for a large number of users. However, the results are encouraging, and further experiments are currently under way to test the wireless environments under heavier network traffic and with more demanding applications.

The research is still in its early stages. Testing wireless technologies for a larger set of GDS software packages and for different geographical settings would be the next logical step. Potential future research also includes the impact of adopting wireless technology on the development of the next generations of GDS software.

References

[1] Cali, F., et al. (2000), "Dynamic Tuning of the IEEE 802.11 Protocol to Achieve a Theoretical Throughput Limit," IEEE/ACM Transactions on Networking, 8(6):785-799.
[2] Chhaya, H.S. and S. Gupta (1997), "Performance Modeling of Asynchronous Data Transfer Methods of IEEE 802.11 MAC Protocol," Wireless Networks, 3:217-234.
[3] Chien, C., et al. (1999), "Adaptive Radio for Multimedia Wireless Links," IEEE Journal on Selected Areas in Communications, 17(5):793-813.
[4] Ramachandra, K. and H. Ali (2004), "Evaluating the Performance of Various Architectures for Wireless Ad Hoc Networks," Proceedings of the 37th Hawaii International Conference on System Sciences, January 5-8.
[5] Seih, L. (2001), "Quality of Service Support for Multimedia Applications in Third Generation Mobile Networks Using Adaptive Scheduling," Kluwer Academic Publishers, Dordrecht.
[6] Fjermestad, J. and S.R. Hiltz (1999), "An Assessment of Group Support Systems Experimental Research: Methodology and Results," Journal of Management Information Systems, 15(3):7-149.
[7] Fjermestad, J. and S.R. Hiltz (2001), "A Descriptive Evaluation of Group Support Systems Case and Field Studies," Journal of Management Information Systems, 17(3).
[8] Nunamaker, J.F. Jr., R.O. Briggs, D.D. Mittleman, D.R. Vogel, and P.A. Balthazard (1997), "Lessons from a Dozen Years of Group Support Systems Research: A Discussion of Lab and Field Findings," Journal of Management Information Systems, 13(3):163-207.
