INTELLIGENT AGENTS FOR QUALITY OF SERVICE EVALUATION IN MULTIMEDIA SERVICES

Augustin Radu
PhD Student, CRITIC Laboratory, Institut National des Télécommunications, SGES, 9 rue Charles Fourier, 91011 Évry cedex, France
University of Marne-la-Vallée, 5 boulevard Descartes, Champs-sur-Marne, 77454 Marne-la-Vallée Cedex 2

Geneviève Baudoin
Associate Professor, ESIEE, Signal Processing and Telecommunications Department, BP 99, 93162 Noisy-le-Grand CEDEX

ABSTRACT

This paper explores the issues related to developing a transparent Quality of Service (QoS) evaluation for wireless services and distributed applications. The objective of the paper is to show the importance of QoS evaluation suited to end users' interests. A Mobile Agent QoS architecture is defined, and the model is presented and evaluated through simulations.

KEYWORDS

Quality of Service evaluation, end user, Intelligent Agents

1. INTRODUCTION: A NEW QUALITY OF SERVICE PROBLEM

The GSM community is making the experience gained in years of operating the world's most advanced mobile telecommunication system available to European and global standardization. It is in the interest of GSM network operators that new third-generation systems are developed in a time frame convenient for them (and their equipment vendors) and that the scenario for a controlled migration of users is properly taken into account. One key aspect of third-generation quality of service is the need for Quality of Service (QoS) to be based on user perception.

The Quality of Service model of the GSM community cannot be applied in an environment that combines mobile radio communications and the Internet. Packet radio networks are increasingly widespread. For example, GPRS (General Packet Radio Service), as a GSM evolution, offers end-to-end packet services. The launch of UMTS (Universal Mobile Telecommunication Service) will reinforce this trend by offering high-bandwidth Internet access. Even if we have tools to analyze each part of the network (radio, hardware, software components, Internet), we have no model that places the end user at the heart of Quality of Service evaluation.

End users and service customers want to perform Quality of Service evaluation for wireless services and distributed applications. Therefore, they need appropriate procedures and accessible tools. In our research we try to show the importance of a common and simple quality of service evaluation procedure suited to end users' interests. This procedure is based on the use of a Mobile Agent QoS architecture. It makes it possible to measure the Quality of Service in different environments (network and application) independently of suppliers.


International Conference WWW/Internet 2003

This paper is organized as follows. Section 2 introduces the concept of Quality of Service. Section 3 explains a 5-step procedure for evaluating Quality of Service. Section 4 describes the Mobile Agent Quality of Service architecture that we chose. Section 5 presents our implementation of Quality of Service evaluation and its assessment through simulations.

2. DEFINITION OF QUALITY OF SERVICE

The general literature offers many definitions of quality of service. For our interests (the end user's point of view on QoS) we have retained only three, found in the international recommendations on telecommunications services. "Quality of Service evaluation is the systematic examination of the extent to which an entity (e.g., a process, a product, a service, an application) is capable of fulfilling specified requirements" [5]. E.800 defines quality of service as "the collective effect of the service performance which determines the degree of satisfaction of a user of the service" [3]. This definition is very vague and needs to be made more specific: the two terms "effect" and "degree of satisfaction" are too general for such an important definition. In fact, to evaluate the quality of service, we have to consider the point of view of end users.

From these definitions and the work of Nahrstedt [11], we can define four different QoS layers:
1. A network level, where quality characteristics such as bandwidth, delay, jitter, and loss rate are important;
2. A system level, which covers the operating and communication system and associated devices. Here quality characteristics such as CPU load, utilization, buffering mechanisms, and storage parameters are essential;
3. An application level, with QoS parameters for image and video quality such as image size, frame rate, start-up delay, and reliability;
4. A new user level, with basically two types of end-user-perceived QoS parameters. The first type concerns communication with a server (client-server, e.g., World Wide Web access, Video-on-Demand). The second type relates to communication with other end users through a service (peer-to-peer, e.g., electronic mail or video-conferencing).

The QoS parameters should include all aspects of quality, not merely the network-related aspects. Customers cannot be asked to state their network-related and non-network-related QoS criteria separately. QoS performance figures would be more meaningful if specified on a service-by-service basis. This new level introduces the first part of our quality of service evaluation procedure.
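The parameter-value pairs that these layers describe can be sketched as a small data model. The following Java snippet is our own illustration; the names (QosLayers, QosParameter) and the sample values are assumptions, not part of the paper.

```java
import java.util.List;

// Illustrative model of the four QoS layers as parameter-value pairs.
// Class names and sample values are hypothetical, not from the paper.
public class QosLayers {
    enum QosLayer { NETWORK, SYSTEM, APPLICATION, USER }

    // One measurable quality characteristic, tied to the layer it belongs to.
    record QosParameter(QosLayer layer, String name, double value, String unit) {}

    public static void main(String[] args) {
        List<QosParameter> params = List.of(
            new QosParameter(QosLayer.NETWORK, "delay", 120.0, "ms"),
            new QosParameter(QosLayer.NETWORK, "lossRate", 0.5, "%"),
            new QosParameter(QosLayer.SYSTEM, "cpuLoad", 35.0, "%"),
            new QosParameter(QosLayer.APPLICATION, "frameRate", 25.0, "fps"),
            new QosParameter(QosLayer.USER, "startUpDelay", 2.1, "s"));
        for (QosParameter p : params) {
            System.out.println(p.layer() + ": " + p.name() + " = " + p.value() + " " + p.unit());
        }
    }
}
```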

3. A PROCEDURE FOR QUALITY OF SERVICE EVALUATION

Our research takes place in a distributed system. Such a system is designed to support the development of applications and services that can exploit a physical architecture consisting of multiple, autonomous processing elements which do not share primary memory but cooperate by sending asynchronous messages over a communications network [2]. Quality of Service evaluation in this environment is more difficult because many parts are generally involved (software components, communication layers). Our goal is to provide end users with a convenient way to evaluate the quality of different services in such distributed systems. To do this we have to define quality of service characteristics that are decision-relevant on the one hand and easy to measure on the other. The model of QoS evaluation proposed by Bogen [1] could provide a strong theoretical basis for our procedure. Different specifications and international standards can be used as a starting point:
- a Technical Specification Document (TSD) such as H.323 [4];
- a given Service Level Agreement (SLA);
- a Software Quality Standard (SQS) such as ISO 9126 [6];
- proprietary service information or a Software Product Description (SPD) (in our case the description of our application).



After identifying our base, the 5 steps of the QoS evaluation model and their meaning are presented in Table 1.

Table 1. A 5-step Quality of Service procedure

Step 1, Identification: All quality characteristics (essential and non-essential) have to be identified.
Step 2, Rating: From the list of all quality characteristics found, a rating has to be made on the essential quality characteristics.
Step 3, Selection: The quality characteristics to be measured have to be selected.
Step 4, Quantification: For each characteristic selected, a quantification has to be defined (parameter-value pair).
Step 5, Evaluation: The evaluation has to be done by an end user or on his behalf, taking the specified parameter-value pairs into account.

The final output of the QoS evaluation procedure is the set of evaluation results. Having presented the procedure, we now have to elaborate an architecture in which to apply it. This architecture is designed to support multimedia user services.
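The 5-step procedure can be sketched in code. The following Java fragment is a minimal illustration under our own assumptions; the types and the pass/fail rule in evaluate are ours, not the paper's.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of the 5-step QoS evaluation procedure
// (identification, rating, selection, quantification, evaluation).
// All names and rules here are hypothetical assumptions.
public class FiveStepProcedure {
    // Step 1: an identified quality characteristic, rated essential or not (step 2).
    record Characteristic(String name, boolean essential) {}

    // Step 4: a parameter-value pair, with the specified target and the measured value.
    record Quantified(String name, double target, double measured) {}

    // Steps 2-3: keep only the essential characteristics for measurement.
    static List<Characteristic> select(List<Characteristic> identified) {
        return identified.stream().filter(Characteristic::essential).collect(Collectors.toList());
    }

    // Step 5: evaluate by checking each measured value against its target
    // (here, smaller-is-better, e.g. delay in ms; an assumed convention).
    static boolean evaluate(List<Quantified> pairs) {
        return pairs.stream().allMatch(q -> q.measured() <= q.target());
    }
}
```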

4. A QUALITY OF SERVICE EVALUATION ARCHITECTURE

The quality of service evaluation has to be automatic to obtain useful results. We propose an appropriate architecture involving end users, evaluation methods, and services under test. By using Intelligent Agents (IA) it is possible to automate the quality of service evaluation.

The architecture of our quality of service evaluation agent is composed of an Interface Agent, a Task Agent, a Registry, a Platform, and a Manager Agent (see Figure 1: A system architecture). The Interface Agent is responsible for taking the task from the end user and transmitting it to the Manager Agent, who delegates the boundary conditions, the configuration, and the given task in order to achieve the required results. This Manager Agent is the highest agent in the hierarchy. While the Interface Agent waits for the results, the Manager Agent delegates one or more Task Agents that have different characteristics and skills. For example, a Task Agent is responsible for checking the connection time to a server and measuring availability. After measurement, the Task Agents notify the Manager Agent, who compiles the results and places them in the Registry. Task Agents do their job on different platforms and can communicate with other agents.

In order to execute the 5-step evaluation procedure, the use of intelligent agents imposes some implementation constraints: a machine platform (PC or Sun) under an operating system (Windows, Linux, Unix) and a programming language that has to be flexible and adaptable (our choice is Java). The implementation of our agent is the subject of the next section.
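The delegation chain described above (Interface Agent to Manager Agent to Task Agents to Registry) can be sketched in plain Java. This sketch deliberately avoids any agent-framework API; all class names are illustrative assumptions, not the paper's code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the delegation pattern: the Manager Agent holds a set of
// Task Agents keyed by skill, delegates a task, and compiles results in a registry.
// Hypothetical names; the paper's real implementation uses an agent platform.
public class AgentArchitecture {
    // A Task Agent performs one kind of measurement on a named service.
    interface TaskAgent { String run(String service); }

    static class ManagerAgent {
        private final Map<String, TaskAgent> taskAgents = new HashMap<>();
        private final List<String> registry = new ArrayList<>();  // compiled results

        void register(String skill, TaskAgent agent) { taskAgents.put(skill, agent); }

        // Delegate the end user's task to the matching Task Agent,
        // then place the compiled result in the Registry.
        String delegate(String skill, String service) {
            String result = taskAgents.get(skill).run(service);
            registry.add(result);
            return result;
        }

        List<String> registry() { return registry; }
    }
}
```

In this sketch the Interface Agent would simply call delegate() on behalf of the end user and read the returned summary, while the full result history stays in the registry.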

5. AN IMPLEMENTATION OF QUALITY OF SERVICE EVALUATION

The services chosen to be tested are World Wide Web (www) access, email services, and video streaming. These are three of the most used telecommunications services currently offered by mobile telephone providers [12]. For each service we applied the 5-step procedure in order to make implementation easier. We can now consider a real situation of an end user who wishes to evaluate the quality of service of one of these three services. First, we have to identify the actors (see Figure 2: A model for Quality of Service Evaluation).



Figure 1. A system architecture

As we can see in Figure 2, our model contains an End User, a Register, an Implementation, and a Service under test. An End User wants to conduct a quality of service evaluation for a service that his mobile telephone provider offers him. The only thing he can do by himself is verify whether the service is accessible or not; no other tools are available. Consequently he seeks external help. He first asks the Customer Service of his provider by calling a special number. Customer Service can give him information about his account or more technical information about the network, but nothing about quality of service. So the End User's problem is still not solved. Our procedure therefore helps End Users understand and evaluate the Quality of Service by themselves.

Our model is based on an Implementation with Intelligent Agents developed in Java to simulate a real situation. First we suppose that the End User owns a mobile phone that supports the Java SIMToolkit¹. In this case he can download our application from his service provider's Internet site. Once it is installed, he can access quality of service evaluations for the different services proposed by his provider. Each time he requests an evaluation he receives information on his mobile phone (limited by the phone's screen size), and at the same time he can access the Register where more detailed information is stored. The Register is the property of the mobile phone provider and represents a database of its customers. This database is accessible directly by a person from Customer Service, or by the customer himself through remote protected access with a login and a password. Each time an end user conducts a quality of service evaluation for a different type of service, the information is stored in the Register.

The current situation is completely different from our implementation. The arrival of multimedia services is quite slow, access to the World Wide Web is very restricted (due to the limited number of web sites and the rush of WAP²), and wireless video streaming does not exist yet.

¹ Java SIMToolkit: SIM Toolkit is an ETSI (European Telecommunications Standards Institute) standard for Value Added Services and e-commerce using GSM phones.
² WAP: Wireless Application Protocol.



Figure 2. A model for Quality of Service Evaluation

The Agent Platform that we used to approximate reality is JADE-LEAP [7]. This platform is composed of two parts: the JADE (Java Agent DEvelopment Framework) platform and the LEAP (Lightweight Extensible Agent Platform) libraries [9]. The three services tested with our platform were Web access (www), email services, and video streaming. For each service, the 5-step quality of service evaluation procedure was applied. The entire quality of service evaluation architecture was respected: on a MIDP phone (simple interface) the end user can enter his name (login) and the service he wants to test. The information is sent to the Manager Agent, who activates the appropriate Task Agent (Ping Agent, Download Agent, Mail Agent, Video Agent). After doing their job, the Task Agents send the results to the Manager Agent. Part of the results is transmitted to the end user (limited by the phone display size). The full information about the QoS evaluation is placed in a Register as .log files. These files are accessible only with a login and a password.

We carried out different simulations on various types of services (Web access, email, and video streaming) for three kinds of network (9.6 kbit/s, 28.8 kbit/s, 112 kbit/s). Our results show:
- the capacity of our multi-agent system to evaluate QoS in circuit or packet mode: evaluation goes through agents from the end users directly to the service under test; apart from this, an end user is completely independent of the evaluation process;
- the stability and performance of this multi-agent system: end users receive original and unbiased results whenever needed;
- the interest of measuring QoS on the end user's side through a user-friendly interface.
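How a Task Agent such as the Ping Agent might summarise its raw measurements into availability and connection-time figures can be sketched as follows. The aggregation rules are our own assumptions, not the paper's implementation.

```java
import java.util.List;

// Illustrative summary logic for a Ping-style Task Agent.
// Connection times are in milliseconds; a negative value marks a failed attempt.
// Names and conventions here are hypothetical.
public class PingAgentReport {
    // Availability: fraction of attempts that succeeded, as a percentage.
    static double availability(List<Long> connectTimesMs) {
        long ok = connectTimesMs.stream().filter(t -> t >= 0).count();
        return 100.0 * ok / connectTimesMs.size();
    }

    // Mean connection time over the successful attempts only.
    static double meanConnectTime(List<Long> connectTimesMs) {
        return connectTimesMs.stream().filter(t -> t >= 0)
                .mapToLong(Long::longValue).average().orElse(Double.NaN);
    }
}
```

A Task Agent would collect the raw times against the service under test, then send these two summary figures to the Manager Agent for the Register.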

However, the end user will not accept a set of network performance parameters. He only needs a single number, like the MOS (Mean Opinion Score)³ used in IP Telephony [8]. In order to establish this number, a network output number must be computed as a function of the access mode (given by the Service Level Agreement).

³ MOS: Voice quality is most often evaluated using a subjective measurement called the Mean Opinion Score (MOS).



Based on this access mode, the output obtained will be ranked by its quality interval (from very poor to very good). We now need to continue testing our multi-agent system in a real environment. Unfortunately, real service platforms are difficult to find in the current telecommunications world. This is explained by the delays in the deployment of third-generation mobile networks.
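The ranking step can be sketched as a simple mapping from a MOS-like score onto the quality intervals mentioned above. The thresholds below are illustrative assumptions (loosely following common MOS usage on a 1-5 scale), not values from the paper.

```java
// Illustrative mapping of a single MOS-like output number onto the
// quality intervals from very poor to very good. Thresholds are assumed.
public class QualityInterval {
    static String rank(double mos) {   // mos assumed in [1, 5]
        if (mos >= 4.5) return "very good";
        if (mos >= 4.0) return "good";
        if (mos >= 3.0) return "fair";
        if (mos >= 2.0) return "poor";
        return "very poor";
    }
}
```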

6. CONCLUSION

Our quality of service evaluation is only one part of the end-to-end quality of service that all service providers talk about. QoS perceived by the customers can be considered as a statement expressing the level of quality they have experienced. This perceived quality of service is usually expressed in degrees of satisfaction and is usually subjective. Therefore, a service provider must translate these terms into technical service functions. Unfortunately, service providers are more interested in the quality of their equipment and the number of customers than in offering more attractive services. The flop of WAP is a good lesson for service providers as multimedia services arrive. Customer perception of quality forms an important part of the overall management of the quality of a product or service.

REFERENCES

[1] Bogen M., 1999, A Framework for Quality of Service Evaluation in Distributed Environments, GMD Research Series, Germany.
[2] David J.F., 1997, Les agents intelligents : une question de recherche, Revue Système d'Information et Management, volume 2.
[3] ITU-T Recommendation E.800, August 1994, Terms and Definitions Related to Quality of Service and Network Performance Including Dependability.
[4] ITU-T Recommendation H.323, March 1996, Visual Telephone Systems and Terminal Equipment for Local Area Networks.
[5] ISO 8402, 1991, Quality Management and Quality Assurance - Vocabulary, ISO.
[6] ISO/IEC 9126, 1991, Information Technology - Software Product Evaluation - Quality Characteristics and Guidelines for Their Use.
[7] JADE-LEAP, http://leap.crm-paris.com/
[8] IP Telephony, http://cc.uoregon.edu/cnews/summer1999/ip_phone.html
[9] LEAP User Guide v2.1, The LEAP Consortium, 2001-2002.
[10] Rubinstein M.G., Duarte O.C.M., and Pujolle G., September 2000, Evaluating the Network Performance Management Based on Mobile Agents. In: Mobile Agents for Telecommunication Applications, Paris, France, ISBN 3-540-41069-4.
[11] Nahrstedt K., 1995, An Architecture for End-to-End Quality of Service Provision and Its Experimental Validation, PhD Dissertation, Computer and Information Sciences, University of Pennsylvania, USA.
[12] Radu A. and Salgues B., 2001, Rapport sur la technique UMTS et les services, mimeo, INT.
