Mobile Netw Appl (2010) 15:378–391 DOI 10.1007/s11036-010-0223-0

Proposed Framework for Evaluating Quality of Experience in a Mobile, Testbed-oriented Living Lab Setting Katrien De Moor & Istvan Ketyko & Wout Joseph & Tom Deryckere & Lieven De Marez & Luc Martens & Gino Verleye

Published online: 29 January 2010 © Springer Science+Business Media, LLC 2010

Abstract The framework presented in this paper enables the evaluation of Quality of Experience (QoE) in a mobile, testbed-oriented Living Lab setting. As a result, it fits within the shift towards more user-centric approaches in innovation research and aims to bridge the gap between technical parameters and human experience factors. In view of this, Quality of Experience is seen as a multidimensional concept, which should be considered from an interdisciplinary perspective. Although several approaches for evaluating perceived QoE have been proposed in the past, they tend to focus on a limited number of objective dimensions and fail to grasp the subjective counterparts of users' experiences. We therefore propose a distributed architecture for monitoring network Quality of Service (QoS), context information and subjective user experience, based on the functional requirements related to real-time experience measurements in real-life settings. This approach allows us to evaluate all relevant QoE dimensions in a mobile context.

Keywords QoE · QoS · user-centric measurement · mobile Living Lab · modular software tool

K. De Moor (*) · L. De Marez · G. Verleye
Research Group for Media and ICT, Department of Communication Sciences, Ghent University / IBBT, Korte Meer 7, 9000 Ghent, Belgium
e-mail: [email protected]

I. Ketyko · W. Joseph · T. Deryckere · L. Martens
Department of Information Technology, Ghent University / IBBT, G. Crommenlaan 8, 9050 Ghent, Belgium
I. Ketyko, e-mail: [email protected]

1 Introduction

The mobile communications environment is changing: while the number of mobile devices has increased exponentially during the last decade, the volume of mobile applications and services seems to be exploding at the same time. In the literature, it is assumed that the increasing availability of new content technologies, better tools for application developers, and wireless access networks (such as UMTS, Wi-Fi, WiMAX, etc.) will continue to stimulate the adoption and diffusion of new applications. However, this assumption needs to be further nuanced: whereas most research in the mobile domain is traditionally concerned with technological issues, various authors nowadays stress the crucial importance of the mobile users and their perceived Quality of Experience (QoE) in this respect [1]. This emphasis on the user perspective is acknowledged by authors from diverse fields [2–4]. Furthermore, it is believed to be closely related to the ongoing shift from traditional push-based to more user-driven and pull-based approaches in R&D for innovation [5,6]. As they are overwhelmed with innovations and an ever increasing technical quality, users in the 'wireless world' demand applications that perform to their personal and situational expectations, allowing them to have a good experience anywhere, at any time [7]. It is argued in the literature that such good experiences with a given technology or service tend to reduce user frustration and dissatisfaction. At the same time, it is assumed that offering a good Quality of Experience stimulates adoption and enhances end-user happiness and loyalty [8–10]. In view of this, a clear insight into users' needs and perceived QoE has become indispensable [11,12]. To date, however, only a limited number of studies have focused on the (perceived) QoE of mobile systems [13].


Furthermore, the literature on QoE and its related concepts (such as Quality of Service, User Experience) is rather fragmented. As a result, it is still largely unknown which factors affect mobile QoE and how users' subjective experiences of such applications and services could be adequately identified and optimized [7]. Moreover, the comprehensive measurement of what users expect and experience in particular usage contexts remains challenging: both in view of the development of new applications and the optimization of existing services (such as Mobile TV), suppliers are still persistently looking for ways to enhance the user's QoE [14,15]. In addition, it is crucial to anticipate future killer applications for the mobile internet, such as mobile P2PSIP [16], SymTorrent¹, mobile IPTV, ... Furthermore, given the heterogeneity and ubiquity of mobile services and the always-carried nature of mobile devices, this is especially difficult for mobile service usage [17,18]. With respect to the technical testing of mobile services, the tradition of testbeds has undeniably proven its value in the past. We here refer to testbeds as 'standardized laboratory environments used for testing new technologies, products and services and protected from the hazards of testing in a live or production environment' [19]. Recently, however, more and more authors have criticized the unnatural character of testbeds, pointing to the observation that products that have been developed or tested in a controlled lab environment tend to fail after their introduction in a more natural environment [11,20,21]. This criticism is shared by Ponce de Leon, Eriksson et al. [22], who refer to testbeds as a crucial tool for 'integrating technology components into the complex environment of the wireless world and end-users in their daily life'. However, they agree that 'technology in itself is no longer valid, benefits and usefulness for people in their daily life must be proven before the technology or service can be said to be a success'. As a result, more and more authors have stressed the need for user-centric research infrastructures for assessing and evaluating these new applications and services from a true user perspective [4,23,24]. Moreover, this plea is connected to the extension of the controlled environment of traditional testbeds to more natural test settings. In this respect, it is relevant to consider the establishment of Living Labs within the open innovation paradigm. Living Labs have been defined as 'environments for innovation and development where users are exposed to new ICT solutions in (semi)realistic contexts, as part of medium- or long-term studies targeting evaluation of new ICT solutions and discovery of innovation opportunities' [21]. Contrary to traditional testbeds, they aim to provide more natural settings for studying and involving the users in the innovation process.

¹ http://amorg.aut.bme.hu/projects/symtorrent


With respect to this shift in research, the literature makes reference to the ubiquitous computing domain, in which Living Labs were established to study ubicomp solutions [20,25]. Furthermore, during the nineties, Living Labs were also introduced as facilities that bring ICT testbed applications to the users. Drawing on an analysis of the literature on these new innovation platforms, Følstad [21] has made a classification of Living Labs based on two aspects: contextualized co-creation and testbed association. With respect to the latter, the abovementioned extension of traditional testbeds to Living Lab environments for both technical testing and contextual/co-creation research is seen as an emerging trend, which is supported by various authors [22,26]. From a practical point of view, however, there is still a growing need for tools that enable such context and co-creation research in testbed-oriented Living Labs [21]. Although technical advancements in the mobile domain should enable the use of highly sophisticated research techniques in this area, a lot of work still needs to be done in terms of the development and implementation of new tools. Within this particular context, this paper aims to address the often overlooked relation between technical parameters and the human experience. In view of this, the goal of this paper is to present a conceptual framework for the multidimensional evaluation of Quality of Experience in a mobile, real-life environment. Furthermore, we aim to address some of the abovementioned challenges related to testbed-oriented Living Labs. The remainder of this paper is organized as follows: in Section 2, we clarify our view on Quality of Experience in relation to other concepts. In addition, a number of approaches for measuring QoE in the mobile environment are evaluated. Section 3 discusses our proposed framework and architecture, followed by recommendations for implementation (Section 4). Finally, concluding remarks and suggestions for future research can be found in Section 5.

2 Related work

2.1 Positioning of QoE

As was already touched upon above, the literature on QoE can be considered rather scattered: there is a lot of inconsistency at the level of parameters and factors that influence users' experiences with technology (such as usage context, personal and social variables, expectations, ...). Another concern is that, although QoE is an important research topic in various fields, a holistic approach linking QoE to other related concepts such as Quality of Service (QoS) and User Experience (UX) still seems to be lacking [7,10,24].


Whereas measurement approaches based on the well-established Quality of Service tradition mainly focus on 'what' is happening on the network, by investigating parameters such as bandwidth, packet loss, jitter, ..., they fail to provide accurate insight into the 'why' dimension: e.g. why is the user behaving in a certain way? Why does the user feel frustrated? [27]. In this respect, the mantra of 'faster is better' is no longer sufficient: other elements also need to be taken into account [28]. Put differently, the abovementioned approaches fail to grasp the user's subjective experience with a particular application or service. This technology-centric perspective has therefore been criticized by authors advocating an end-to-end approach, which also explores the interaction between users and applications and users' perception of the service [23,29]. In view of this, the concept of QoE has been pushed forward as a counterpart of QoS over the last years. Although various definitions of QoE can be found in the literature, a great deal of them tend to stay close to the technology-centric logic, ignoring the subjective character of human experience. QoE has e.g. been defined as 'an extension of the traditional quality of service (QoS) in the sense that QoE provides information regarding the delivered services from an end-user point of view' [30]. Likewise, a number of other authors [8,31,32] fail to provide a broader definition of QoE. In Siller and Woods [33], QoE is mainly determined by looking at the Quality of Service of the network. In this context, QoS means that different service types (e.g. VoIP versus downloading) receive different network quality (throughput, latency, and jitter) in order to optimize the experience. This study uses the hypothesis that QoE will increase with increasing QoS, while this is not necessarily true [34]: 'It is possible to have excellent QoS and poor QoE'. Soldani [35] defines QoE as the 'general service application performance'. This consists of properties such as service accessibility and availability, which are measured during service consumption and linked to the QoS parameters measured on a mobile QoS agent. In Eberle et al. [36], a methodology for creating a true cross-layer performance-energy trade-off is developed, while the extension towards the subjective user perception is still missing. In the field of Human-Computer Interaction (HCI), on the other hand, QoE is usually addressed through the concept of 'User Experience' (UX). Authors from this field tend to stress the multi-dimensional character of experience [37–39]. Some highlight the importance of 'emotions, expectations, and the relationship to other people and the context of use' [40] in this respect, while others stress the importance of the broader context [4,41,42]. Moreover, from a social science point of view, the concept of Quality of Experience is often linked to the usability of the application [43,44] and to the way applications and technology merge into and assist in everyday life [45].


Figure 1 illustrates our conception of Quality of Experience in relation to QoS, context and UX. We believe that QoE should be considered from an interdisciplinary perspective, in relation to both QoS and UX. The work of Roto [46] on the characteristics of User Experience for web browsing on mobile phones can serve as a good basis in this respect. However, it gives no definite answers to the current network QoS issues. Furthermore, whereas the UX perspective considers experience as non-quantifiable, and whereas the current QoE perspective tends to ignore the subjective character of QoE, we aim to propose a user-centric framework for QoE measurement that enables us to correlate true user-centric measures (grasping both the objective and subjective counterparts of the 'experience') with relevant technical parameters. Our conception of QoE is therefore in line with the work of Kilkki [10], who states that Quality of Experience deals with 'all relevant aspects that define how satisfied a person is with a service' and who introduces the term 'communications ecosystem', covering the human, business and technical domains, in this respect. Similarly, we propose to use the term QoE as a covering term, focusing on the broad range of aspects that influence the evaluation of QoE by the user when using e.g. an application. Therefore, QoE contains both an objective (performance and QoS-related) counterpart and a subjective counterpart (referring to the concept of User Experience). The former deals with the network and application performance and the way these are perceived by the user (does it work well enough for the user? Is it functional?); the latter focuses on the subjective experience (expectations, feelings, thoughts, behaviour, ...) when using the application or service. Since 'QoE is how the user feels about how an application or service was delivered, relative to their expectations and requirements' [47], this experience is strongly influenced by the user's expectations. Furthermore, this experience can vary a great deal according to the broader context (e.g. usage context, location, ...) in which it takes place. The interaction of the user with the technology (e.g. application, service), closely related to the aspect of usability, is situated in between the user and the technology [48].

Fig. 1 Conception of quality of experience (the figure positions QoE at the intersection of Quality of Service (network and application aspects such as jitter, delay, availability, performance, efficiency, effectiveness), User Experience (e.g. emotions, expectations, enjoyment, prior experiences, self-efficacy, motivation) and Context (personal, social and usage context, mobility, compatibility), centred on the user, the device and the application interface, with aspects such as usability, interaction, content, price and personalisation in between)

2.2 Measuring QoE

We now consider some of the current quality assessment techniques. In the literature, a distinction is usually made between objective and subjective methods. Whereas human subjects are not involved in objective tests, subjective testing aims to evaluate the perceived QoE from a user perspective. Despite their name, however, these subjective testing techniques fail to provide more insight into the subjective dimensions of QoE. In overviews of state-of-the-art subjective performance measures, the use of Mean Opinion Scores (MOS) is often referred to.


MOS testing is predominantly used in the voice domain as a subjective measure of voice quality. It is, however, based on the conversion of objective quantities such as echo and latency into a subjective score and aims to determine the user's perception of the network [15,28]. Test users are asked to evaluate quality parameters by means of standardized scales, using labels such as Excellent, Good, Fair, Poor, Bad, cf. ITU-T Recommendation E.800 [49]. This methodology has, however, been criticized by authors from various fields. First of all, it has been argued that the intervals of the MOS scale are problematic. According to de Koning et al. [15], they are unequal at a conceptual level and fail to represent an internationally valid ordinal scale due to cultural differences in interpretation. Moreover, the subjective nature of MOS is highly questionable, as it relies solely on numeric expressions of perceived QoE. In addition, according to Sullivan et al. [50], a 'restriction of range' is created when measuring perceived quality on a 5-point scale. Another concern is that MOS scores tend to be interpreted as absolute values, largely ignoring the possible influence of contextual factors (such as the test setting, equipment, content, ...) upon the results [50]. Given the controlled lab setting in which MOS testing usually takes place, the external validity of the results is also highly questionable. Finally, it can be argued that end-users are not involved in an active, co-creative way. Some of the shortcomings associated with MOS have stimulated the use of other 'subjective' measures, such as acceptability [15]. Acceptability studies aim to measure threshold levels for user acceptance. Due to the expensive and time-consuming nature of these subjective tests, they have been extended with automated subjective measures.

The Pseudo Subjective Quality Assessment (PSQA) technique aims to merge 'subjective assessments with a statistical learning tool, which allows to produce subjective-like quality estimations' [51]. Although PSQA combines elements from both objective and subjective testing, it can be questioned whether this approach truly reflects QoE from a user perspective. Similar to the concept of MOS, Andrews et al. [28] attempted to develop a user-centric measure of performance for web browsing and introduced the method of dataMOS. Although their study and others [52,53] have tried to relate the measurement of technical parameters to the perceived QoE, most of them tend to focus on a limited number of metrics and take place in controlled settings. As a result, there is still a need for tools that allow the interdisciplinary, multidimensional evaluation of QoE in real-life settings. In view of this, traditional technical measures should be complemented with more user-oriented methods.

2.3 Towards QoE research in real-life environments

As a reaction to the shortcomings of 'single-context' research environments, which usually do not resemble users' daily lives at all, Ponce de Leon et al. [22] present a distributed mobile network testbed environment, drawing on the Living Lab approach. Although their proposed model includes evaluation and co-creation by end-users (referred to as the 'human resource component'), they refrain from illustrating how this is done in practice. In Li-yuan, Wen-an and Jun-de [53], on the other hand, a new approach to evaluate QoE in a pervasive computing environment is presented. This method draws on context-awareness computing for gathering data related to the user experience.


Users are, however, not actively involved. Furthermore, only a limited number of QoE dimensions are taken into account. In addition, Perkis et al. [23] implemented a framework for measuring the Quality of Experience of multimedia services. Their proposed model draws upon the combination of measurable and non-measurable quality metrics and can be used for modeling users' experiences with multimedia. For measuring QoE aspects in a real-life environment, it is relevant to consider existing solutions such as the mobile QoS agent (MQA) [35], which allows active probing and/or passive monitoring of network QoS parameters on a cellular mobile terminal. In [54], Díaz et al. illustrate how existing mobile devices can be used to monitor the performance of mobile Internet services over cellular networks by means of an IP-level analysis tool called Symbian Protocol Analyzer (SymPA²). Although these solutions could be used for the purpose of QoE measurement in Living Lab environments, they have not been used in this sense yet, as they largely focus on the 'what' question (cf. Section 2.1). However, in order to gain insight into the other dimensions (Why? Where? ...) of mobile network usage, a broader and interdisciplinary approach that also focuses on QoE from a social and user-centric perspective is required. Such an approach would then fit in with the notion of Living Labs as user-driven innovation platforms and could enable researchers to extend traditional testbed settings to more user-centric infrastructures, facilitating contextual and co-creation research. As there is currently still a lack of robust methodological tools in this respect, it is relevant to consider studies and tools from several fields and discuss how these could be integrated within the testbed-oriented Living Lab idea. In the field of the social sciences, several methods (such as in-depth interviews, surveys, observation, etc.) are available to study users and their experiences in a natural setting. A great deal of them are self-report methods: they largely rely on the introspection and recollection of users. In this respect, it could be questioned whether they yield reliable and comprehensive data and whether they are not too obtrusive. As a means to reduce this uncertainty, a variety of studies has studied user behavior in natural settings by using the diary method. This method is used for the 'self-recording of everyday life events' at certain intervals [55]. According to Bolger, Davis et al. [56], diary methods have the advantage that they 'capture the particulars of experience in a way that is not possible using traditional designs'. A particular type of diary method, which can be used for evaluating users' experiences in different contexts, is the Experience Sampling Method (ESM). ESM is a reliable and valid method that can be used for various purposes and in many fields [57].

² http://www.lcc.uma.es/~pedro/mobile/Software/sympa.html


Similar to other diary methods, participants in an ESM study are triggered to self-report on their activities or to answer a questionnaire, e.g. on a PDA (in the case of electronic ESM). Such questionnaires usually probe into the users' current activities, experiences, feelings, ... In the context of QoE measurement research, ESM could therefore help us to yield information on the more subjective dimensions of QoE. However, ESM largely reflects the user perspective, and ESM studies might suffer from a subjective and selective reporting bias. Moreover, when measuring QoE, objective technical parameters should also be included. A number of relevant studies have used ESM for studying the user in a natural context. In view of the need for a robust self-report data collection tool for in situ measurement, Intille et al. [20] developed a context-aware experience sampling (CAES) tool for studying behavior and technology. This open source software tool draws on contextual information (e.g. location, time, event, ...) to trigger self-reports and offers a number of options for collecting self-report data from the users (e.g. pictures, GPS coordinate samples, ...). As a result, users are only interrupted during or after the behavior or activity of interest. Similarly, Henderson et al. [27] have used ESM to look at the usage of a campus-wide wireless network: results from the traditional network monitoring were integrated with the data from the ESM. Although this study focused on 'usage' rather than on the users' experiences, it illustrates that ESM can be used as a complement to traditional monitoring. More recently, Consolvo et al. [58] presented the MyExperience tool for supporting computerized self-report in natural settings. MyExperience is 'a software application for in situ data collection to support the study of human behavior and the evaluation of mobile computing technologies' [58]. This open source software tool was developed for Windows Mobile devices. Another similar example is the SocioXensor software toolkit, which integrates traditional logging techniques with experience sampling and which aims to 'bring the lab to the people' [59]. Finally, Deryckere et al. [7] present an interdisciplinary approach for measuring QoE in a mobile (Living Lab) setting. This approach incorporates both objective and subjective QoE dimensions and draws upon a software tool, consisting of three layers, that is installed on the end-user device. Moreover, it includes qualitative user research pre- and post-usage for evaluating QoE. In this paper, we intend to refine and extend this approach in several ways (e.g. monitoring of more parameters, context-awareness, ...). In view of this, we present a framework for evaluating Quality of Experience in a Living Lab setting in the following section.



3 Proposed framework and architecture

An integrated methodology for relating human experience and QoS parameters in Living Lab environments needs the support of well-designed evaluation tools that assist researchers or network operators in gathering information across different dimensions such as network quality, context, location and user perception. This tool must comply with a basic set of requirements allowing long-term measurements:

- The measurement tool should be non-disruptive: end users should not notice any influence of the tool during usage.
- The measurement tool should allow measurements on different kinds of dimensions and levels (e.g. contextual, social, application, network, device, ...).
- The tool should be modular and support new measurement features.
- The tool should work on IP-based services.
- The tool should be remotely manageable and allow researchers to automatically set up numerous and specific tests.

We propose an integrated system combining the different dimensions in one measurement. Figure 2 shows a high-level overview of the proposed architecture. Initially, the basic idea of the architecture was proposed in the abovementioned studies of Deryckere et al. [7] and De Moor et al. [60]. There, the tool was designed for small-scale testing in controlled environments.

The architecture consists of a highly distributed system allowing measurements on the device (handset-based approach) [54], measurements in the network (network-based approach) [27] and data processing in the back-end. On the mobile device, an always-running application, called the Mobile Agent, is installed. It is connected to the back-end infrastructure, which stores and analyzes the incoming data. This architecture allows us to study and understand cross-contextual effects, to assess the relative importance of parameters (for each application and scenario), and to develop a basic algorithmic QoE model. In order to achieve these goals, there are two fundamental approaches that can be applied (a small sketch of the second approach follows the list):

- Neural network-based evaluation: In the literature, neural networks are a popular means of solving problems in this field of research [61,62]. The evaluation is principally carried out in the back-end because of the processing demand. On the other hand, depending on the magnitude of the input parameters, it is also possible to set up a neural network on a device with limited capabilities [63]. These papers provide a solid basis for following this approach in our case as well.
- Statistical model-based evaluation: Another approach is to build statistical models and evaluate them (e.g. multiple regression models taking the dominating parameters into account). In [64], Soldani describes post-processing and statistical methods for QoE evaluation, and emphasizes the importance of statistical confidence during the measurements. In [65] it is demonstrated that a wide variety of statistical methods, combined with data mining procedures, can be used to analyze the type of data that such handset-based usage monitoring software retrieves.
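To make the statistical option concrete, the following is a minimal sketch, in plain Java (the platform argued for in Section 4), of a multiple-regression QoE model fitted by ordinary least squares in the back-end. The chosen predictors (throughput, jitter, loss), the sample observations and the resulting coefficients are purely illustrative assumptions, not measurements from our testbed.

    import java.util.Arrays;

    // Minimal sketch: ordinary least squares fit of
    // QoE = b0 + b1*throughput + b2*jitter + b3*loss.
    // All sample data below are illustrative assumptions.
    public class QoeRegression {

        // Solve the normal equations (X'X) b = X'y by Gaussian elimination.
        static double[] solve(double[][] a, double[] b) {
            int n = b.length;
            for (int p = 0; p < n; p++) {
                int max = p;                       // partial pivoting
                for (int i = p + 1; i < n; i++)
                    if (Math.abs(a[i][p]) > Math.abs(a[max][p])) max = i;
                double[] ta = a[p]; a[p] = a[max]; a[max] = ta;
                double tb = b[p]; b[p] = b[max]; b[max] = tb;
                for (int i = p + 1; i < n; i++) {
                    double f = a[i][p] / a[p][p];
                    b[i] -= f * b[p];
                    for (int j = p; j < n; j++) a[i][j] -= f * a[p][j];
                }
            }
            double[] x = new double[n];
            for (int i = n - 1; i >= 0; i--) {
                double s = b[i];
                for (int j = i + 1; j < n; j++) s -= a[i][j] * x[j];
                x[i] = s / a[i][i];
            }
            return x;
        }

        // Accumulate X'X and X'y from the design matrix, then solve.
        static double[] fit(double[][] x, double[] y) {
            int k = x[0].length;
            double[][] xtx = new double[k][k];
            double[] xty = new double[k];
            for (int r = 0; r < x.length; r++)
                for (int i = 0; i < k; i++) {
                    xty[i] += x[r][i] * y[r];
                    for (int j = 0; j < k; j++) xtx[i][j] += x[r][i] * x[r][j];
                }
            return solve(xtx, xty);
        }

        public static void main(String[] args) {
            // Columns: intercept, throughput (Mbit/s), jitter (ms), loss (%).
            double[][] x = {
                {1, 2.0,  5, 0.1}, {1, 0.5, 40, 2.0}, {1, 1.2, 15, 0.5},
                {1, 3.1,  3, 0.0}, {1, 0.8, 25, 1.0}, {1, 1.9, 10, 0.2}
            };
            double[] mos = {4.3, 1.8, 3.2, 4.6, 2.5, 3.9};  // in-situ ratings
            double[] b = fit(x, mos);
            System.out.println("coefficients: " + Arrays.toString(b));
            // Predict QoE for a new QoS sample (illustrative values).
            double pred = b[0] + b[1] * 1.5 + b[2] * 12 + b[3] * 0.3;
            System.out.println("predicted QoE: " + pred);
        }
    }

In a deployment, such a model would be refitted periodically on the logged monitoring data and its statistical confidence checked, in line with [64].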

Fig. 2 High-level network architecture (mobile agents on end-user devices perform measuring, logging and feedback collection over wireless access networks such as DVB-H, WiMAX, UMTS and Wi-Fi and a backbone network; a network agent measures and logs in the access and backbone networks; the back-end combines a repository, logging and data mining over network QoS (delay, jitter, throughput, SNR, ...), device parameters, contextual data (location, alone, work, home, ...) and user feedback (e.g. video quality, usability))

3.1 Mobile agent

The central component of the architecture is the mobile agent. The mobile agent is installed on the end-user device, which is used for normal service consumption. Figure 3 gives a conceptual overview of the functionality of the mobile agent. We see a logical partitioning of the mobile agent into different entities: the QoS monitoring entity, the Contextual monitoring entity, and the Experience monitoring entity. This logical partitioning reflects the interdisciplinary approach in which social and technical research efforts are combined in view of the development of valid user models. The mobile agent contains a local data repository to store measured data, and this repository can be synchronized with a remote database. A central controller engine on the device manages the monitoring process and can be configured remotely or locally on the device. We describe the different entities in more detail below.

Fig. 3 Concept of mobile agent

3.1.1 QoS monitoring entity

The QoS monitoring entity is in charge of measuring the objective, technical parameters. We logically divide this entity into four basic blocks: device, infrastructure, network and application (a sketch of a network-block probe follows the list):

- Device: This block logs the capabilities of the mobile device. The operating system (OS), screen size, and the fundamental services are permanent; as a result, frequent logging is not needed. However, the actual CPU utilization, memory consumption, and battery status need more frequent checking during the investigations.
- Infrastructure: This block gathers information about the access network infrastructure, such as the actual access type (GPRS, UMTS, HSPA, LTE, Wi-Fi, WiMAX, DVB-H, etc.), the perceived signal strength, and the service prices. Changes of the access infrastructure (inter-system handovers [66,67], vertical handovers [68–70]) and received signal strength oscillations can be tracked and their effects investigated.
- Network: This block is strongly coupled to the Infrastructure block as indicated in Fig. 3; however, a separation is made so that all network QoS parameters can be investigated as Internet Protocol (IP) level parameters in a homogeneous way, independently of the access technology. This block is one of the most important information sources for QoE studies. Network QoS derived from IP-level packet throughput, delay, jitter, and loss [71] represents the fundamental basis of the objective metrics for QoE investigation. In appropriate scenarios, service availability and network-level fairness [72,73] can also be related to the other QoE dimensions.


- Application: This block logs the properties of the application in use. The examined parameters depend on the actual application type; in the case of a communication or multimedia application, the voice and video codecs are investigated. It is important to separate the quality of the content (or e.g. the usability of the application) from the quality of the network. Therefore, content quality is included in this block.
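The announced sketch of a network-block probe is given below: it derives two of the mentioned IP-level metrics, interarrival jitter (following the RFC 3550 estimator) and average throughput, from per-packet observations. It is written as self-contained plain Java; the packet timestamps in main are invented, and a real handset implementation would obtain them from a platform-specific capture API (cf. Section 4).

    // Minimal sketch of a network-block Monitoring probe: RFC 3550-style
    // interarrival jitter and average throughput from per-packet data.
    // Timestamps in main() are invented for illustration only.
    public class NetworkQosProbe {
        private double jitterMs;              // smoothed interarrival jitter
        private long bytes;                   // payload bytes in current window
        private long windowStartMs = -1;
        private long lastSentMs = -1, lastRecvMs = -1;

        /** Feed one packet observation (sender time, receiver time, size). */
        public void onPacket(long sentMs, long recvMs, int sizeBytes) {
            if (windowStartMs < 0) windowStartMs = recvMs;
            bytes += sizeBytes;
            if (lastSentMs >= 0) {
                // D = change in transit time between consecutive packets
                double d = Math.abs((recvMs - sentMs) - (lastRecvMs - lastSentMs));
                jitterMs += (d - jitterMs) / 16.0;  // RFC 3550 smoothing gain
            }
            lastSentMs = sentMs;
            lastRecvMs = recvMs;
        }

        public double jitterMs() { return jitterMs; }

        /** Average throughput in kbit/s since the window started. */
        public double throughputKbps(long nowMs) {
            long elapsedMs = Math.max(1, nowMs - windowStartMs);
            return bytes * 8.0 / elapsedMs;   // bits per ms == kbit/s
        }

        public static void main(String[] args) {
            NetworkQosProbe probe = new NetworkQosProbe();
            probe.onPacket(0, 40, 1200);
            probe.onPacket(20, 65, 1200);
            probe.onPacket(40, 95, 1200);
            System.out.printf("jitter=%.2f ms, throughput=%.1f kbit/s%n",
                    probe.jitterMs(), probe.throughputKbps(100));
        }
    }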



3.1.2 Contextual monitoring entity

The purpose of this monitoring entity is to determine the context of the application usage. It consists of four blocks (a small sketch of the Mobility block follows the list):

- Location: Based on the received GPS coordinates or the network cell information, the user can be tracked, and this information can represent the basis for further location-related examinations.
- Mobility: The user's concrete activity has a great influence on the used application and the perceived experience. To identify the different situations exactly, this block provides accurate data with the assistance of measured velocity and acceleration parameters.
- Sensors: This block provides an opportunity for researchers to collect information about the environment of the user. With the support of on-body sensors [53,74], or with analyzed audio data from the device or a complementary microphone [75], it is possible to gather supplementary facts reflecting the user's current mood or activities in an automated and objective way, and then correlate them with the subjective sources.
- Other running applications: The operating system (OS) and other background applications influence the performance and provided quality of the running application under investigation. For that reason, this impact should also be taken into account, e.g. by polling the running tasks and related parameters from the OS.
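As a sketch of the Mobility block, the plain-Java fragment below derives the user's speed from consecutive GPS fixes via the haversine distance and maps it to a coarse activity label. The thresholds and labels are illustrative assumptions; a real implementation would also exploit acceleration data and the platform's location API.

    // Minimal sketch of the Mobility block: speed from consecutive GPS
    // fixes, mapped to a coarse label. Thresholds are assumptions.
    public class MobilityProbe {
        private static final double EARTH_RADIUS_M = 6_371_000.0;
        private double lastLat = Double.NaN, lastLon = Double.NaN;
        private long lastTimeMs;

        /** Feed a GPS fix; returns a coarse mobility label. */
        public String onFix(double lat, double lon, long timeMs) {
            String label = "unknown";
            if (!Double.isNaN(lastLat)) {
                double meters = haversine(lastLat, lastLon, lat, lon);
                double mps = meters / Math.max(0.001, (timeMs - lastTimeMs) / 1000.0);
                if (mps < 0.5) label = "stationary";
                else if (mps < 2.5) label = "walking";
                else label = "vehicle";
            }
            lastLat = lat; lastLon = lon; lastTimeMs = timeMs;
            return label;
        }

        // Great-circle distance between two coordinates, in meters.
        static double haversine(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
        }

        public static void main(String[] args) {
            MobilityProbe probe = new MobilityProbe();
            probe.onFix(51.0460, 3.7270, 0);
            // ~18 m covered in 10 s -> classified as walking
            System.out.println(probe.onFix(51.0461, 3.7272, 10_000));
        }
    }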

3.1.3 Experience monitoring entity

In contrast to the QoS monitoring entity and the Contextual monitoring entity, the monitoring of experience does not consist of straightforward objective measurements. In order to translate subjective experience parameters into quantifiable metrics, the Experience monitoring entity interacts with the user by gathering explicit feedback in the form of questionnaires and pictographic feedback (e.g. pushing a red button if things go wrong). These questionnaires pop up at different times, such as before, after, and during application usage, and are set up in such a way that the disruption of the user and the usage flow is minimized. Based upon the context, application and other monitored parameters, feedback can be dynamically collected from the end user (a sketch of such a trigger follows the list):

- The experience probes consist of software probes with built-in intelligence in order to capture the perceived user experience. To this end, automatic ESM questionnaires completed by the user on the mobile device before, after, or even during application usage are a possible mechanism. Participants are triggered at different times during the day: e.g. randomly, at the occurrence of certain predefined events (e.g. use of a particular application), ... In this respect, information from the context and QoS monitoring probes is used as input for triggering self-reports. Other possible input sources are e.g. the monitoring of keystrokes during application usage.
- Both closed and open-ended question formats are supported: in order to address the problems with e.g. MOS scales, questions on 5- or 10-point scales are complemented with other question formats (e.g. visual cues).
- The experience probe can easily be extended with new modules and parameters. A possible extension consists of the collection of implicit experience measures, e.g. by coupling explicit information (such as monitoring data, self-reports) with measurements of brain activity, heart activity and even eye movement.
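The announced trigger logic can be sketched as follows: a small plain-Java experience sampler that launches a questionnaire when a monitored event occurs (here, an assumed application-start event or a jitter threshold), while rate-limiting prompts so that the usage flow is disrupted as little as possible. The event names, the threshold and the questions are illustrative assumptions, not part of the paper's specification.

    import java.util.concurrent.TimeUnit;

    // Minimal sketch of a context/QoS-triggered ESM prompt with rate
    // limiting. Event names, threshold and questions are assumptions.
    public class ExperienceSampler {
        private final long minGapMs;                     // min time between prompts
        private long lastPromptMs = Long.MIN_VALUE / 2;  // "long ago", overflow-safe

        public ExperienceSampler(long minGapMinutes) {
            this.minGapMs = TimeUnit.MINUTES.toMillis(minGapMinutes);
        }

        /** Called by the controller engine when a monitored event occurs. */
        public void onEvent(String event, double jitterMs, long nowMs) {
            boolean relevant = event.equals("APP_STARTED") || jitterMs > 30.0;
            if (relevant && nowMs - lastPromptMs >= minGapMs) {
                lastPromptMs = nowMs;
                prompt(event);
            }
        }

        /** In a deployment this would render an on-device questionnaire. */
        private void prompt(String event) {
            System.out.println("ESM prompt (trigger: " + event + ")");
            System.out.println("  How would you rate the quality right now? (1-5)");
            System.out.println("  Where are you, and who are you with?");
        }

        public static void main(String[] args) {
            ExperienceSampler esm = new ExperienceSampler(15);
            esm.onEvent("APP_STARTED", 5.0, 0);            // prompts
            esm.onEvent("QOS_SAMPLE", 45.0, 60_000);       // suppressed: too soon
            esm.onEvent("QOS_SAMPLE", 45.0, 16 * 60_000);  // prompts again
        }
    }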

This approach is well embedded in the long-term approach for evaluating QoE within a Living Lab setting. In such an approach, pre-usage user research and analysis of expectations allow us to identify the relevant QoE dimensions for every user and to provide personalized questions on the device. This will help to lower the burden for the test users and might help to keep them motivated.

3.1.4 User module

One of the main purposes of our research is to build the proper intelligence of this module. On behalf of the actual user, this module intervenes in the control of the monitoring entities. For real-time, on-device processing, the User Module queries data from the local database; based on these data and self-learning capabilities, it is capable of giving personalized setup inputs for the Controller Engine. In the beginning, this module is just a set of predefined rules for the actual user; over time, its intelligence develops based on experience. This intelligence can be used for recommendations [76] for the users. The User Module has an interface for real-time monitoring of the user (e.g. tracing of location and the actually perceived network quality). A minimal sketch of such an initial rule set follows.
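The sketch below illustrates the initial, rule-based stage of the User Module in plain Java (using records for brevity; an on-device port would use plain classes): predefined rules map conditions on the user's recent data to setup actions for the Controller Engine. The rule conditions, action names and UserState fields are illustrative assumptions.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    // Minimal sketch of the rule-based User Module; all rule conditions
    // and action names are illustrative assumptions.
    public class UserModule {
        /** A rule maps a condition on the user's recent data to an action. */
        record Rule(String name, Predicate<UserState> condition, String action) {}

        /** Simplified view of the local database for one user. */
        record UserState(double avgSignalDbm, boolean commuting) {}

        private final List<Rule> rules = new ArrayList<>();

        public UserModule() {
            rules.add(new Rule("weak-coverage",
                    s -> s.avgSignalDbm() < -95, "INCREASE_SIGNAL_POLL_RATE"));
            rules.add(new Rule("commute",
                    UserState::commuting, "ENABLE_MOBILITY_TRIGGERED_ESM"));
        }

        /** Returns the setup actions the Controller Engine should apply. */
        public List<String> evaluate(UserState state) {
            List<String> actions = new ArrayList<>();
            for (Rule r : rules)
                if (r.condition().test(state)) actions.add(r.action());
            return actions;
        }

        public static void main(String[] args) {
            UserModule module = new UserModule();
            System.out.println(module.evaluate(new UserState(-101, true)));
            // -> [INCREASE_SIGNAL_POLL_RATE, ENABLE_MOBILITY_TRIGGERED_ESM]
        }
    }

A later, self-learning stage would replace or extend these fixed rules with models trained on the accumulated monitoring data.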



4 Software architecture and recommendations for implementation

In order to support extendable real-time tests in a Living Lab environment, we must be very careful in making good choices in terms of software platform and architecture. The purpose of the implementation of our framework is to support as many devices as possible. Due to the heterogeneity of devices, one of the challenges is to target as many devices as possible without having to rewrite the software for each operating system. Table 1 shows a compilation of smartphone platforms, their market shares in Q3 2009 [77] and their Java native access possibilities. The largest common denominator for all devices is the Java Platform, available on Android OS, Symbian OS, Windows Mobile OS and BlackBerry OS (the iPhone can also provide Java support). Consequently, the Java Platform seems to be the logical choice for the implementation of the Mobile Agent. In terms of functionality, we see a trade-off between the different software platforms. Native applications written in C++ on Symbian OS, or developed for the .NET Compact Framework on the Windows Mobile platform, immediately have access to low-level parameters such as signal strength, battery level, keystrokes, network status and protocol attributes. In the case of the Java Platform, on the other hand, the Java Native Interface (JNI), platform-specific Java API extensions or native extensions are needed on the different mobile platforms. The position of our proposed framework for evaluating Quality of Experience is shown in Fig. 4 for the Java Platform. The framework has a component-based architecture; the main components are the following (a sketch of the Platform Abstraction Layer follows the list):

- Controller Engine: This component includes the main controlling functionality; it controls the working of the Monitoring probe components (it creates, destroys, starts, and stops them). The setup is done via the Management component (Fig. 4).
- Monitoring probe: On the level of the software implementation, we define the concept of Monitoring probes: a Monitoring probe represents a class for measuring a specific parameter. Examples are signal strength probes, throughput probes, CPU probes, and feedback collectors. Implementation of this concept makes the software better manageable and extendable with further probes, such as face recognition. A Monitoring probe has the common purpose of measuring data and logging it. The concrete instantiation depends on the analyzed dimension (QoS monitoring, Contextual monitoring, Experience monitoring) and the given OS APIs. The Factory design pattern [78] is an elegant solution for an extendable implementation. For the data logging, every Monitoring probe contains a Logger component.
- Logger: The function of this component is to store measured data in a local database and synchronize it with the remote database. To avoid inaccuracy and overhead in network throughput measurements, the synchronization can be delayed and scheduled by preset events (e.g., arrival in an area with Wi-Fi coverage).
- User Module: This is the component for the personalization of the experiments. By processing the preset rules and analyzing the existing data, it modifies the behavior of the Controller Engine component (e.g., threshold-based surveys, date-based probes).
- Management: This component is responsible for providing a management interface and performing management via the Commander Interface, using XML and scripting. Management can be done locally or remotely. Through remote management it is possible to centrally set up the mobile agents to perform specific measurements. The mobile agent communicates with the management center via the network.
- Platform Abstraction Layer: This component defines an interface for the Monitoring probes and for the Logger component, hiding the variations between the different mobile platforms.
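The announced sketch of the Platform Abstraction Layer with the Factory design pattern [78] is given below: portable components program against a common MonitoringProbe interface, while a factory returns a platform-specific implementation at runtime. The class names, interface and returned values are illustrative assumptions; real implementations would call the platform APIs or natives listed in Table 1.

    // Minimal sketch of the Platform Abstraction Layer with a probe
    // factory; class names and returned values are assumptions.
    interface MonitoringProbe {
        void start();
        void stop();
        String read();   // latest measurement as a log record
    }

    // Platform-specific signal strength probes; real versions would call
    // Android telephony APIs or Windows Mobile natives via JNI.
    class AndroidSignalProbe implements MonitoringProbe {
        public void start() { /* register a phone-state listener here */ }
        public void stop()  { /* unregister the listener */ }
        public String read() { return "signal=-87dBm source=android"; }
    }

    class WindowsMobileSignalProbe implements MonitoringProbe {
        public void start() { /* attach to the radio layer via JNI here */ }
        public void stop()  { }
        public String read() { return "signal=-90dBm source=winmo"; }
    }

    public class ProbeFactory {
        /** Chooses the implementation matching the current platform. */
        public static MonitoringProbe signalProbe(String platform) {
            if (platform.contains("Android")) return new AndroidSignalProbe();
            if (platform.contains("Windows")) return new WindowsMobileSignalProbe();
            throw new IllegalArgumentException("unsupported platform: " + platform);
        }

        public static void main(String[] args) {
            MonitoringProbe probe = ProbeFactory.signalProbe("Android");
            probe.start();
            System.out.println(probe.read());
            probe.stop();
        }
    }

Because the Controller Engine only ever sees the MonitoringProbe interface, adding support for a new platform reduces to writing one new probe class and extending the factory.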

The Controller Engine component, the Management component, the User Module component, and the Logger component can be implemented on the Java Platform in a portable way, independently of the underlying device and operating system. These components depend only on those core packages that are commonly supported by all Java implementations.

Table 1 Own compilation of mobile platforms (market share figures from Canalys Ltd 2009)

Platform           | Market share Q3 2009 | Java Platform | Java native access                                   | .NET CF
Symbian OS         | 46.2%                | Java ME       | JNI for Series 80; Java API Extensions for Series 60 | Available
iPhone OS          | 17.8%                | Available     | Java/Objective-C bridge                              | Not available
BlackBerry OS      | 20.6%                | Java ME       | Java API Extensions                                  | Not available
Windows Mobile OS  | 8.8%                 | IBM WEME      | JNI                                                  | Built-in support
Android OS         | 3.5%                 | Java Harmony  | Java API Extensions                                  | Not available




The implementation of the Monitoring probe components, however, is not possible in a common way. The user interface of the device is different for every brand. For example, the study of keystrokes alone cannot be implemented in a common way because of the differing keyboard solutions (buttons vs. touch screen). Similarly, device information (e.g. memory statistics, actual velocity) and network information (such as wireless network signal strength, bandwidth) cannot be accessed in a standardized way. By way of illustration, the iPhone native API does not provide any access to wireless network signal strength information. The implementation of these functions has to be platform-specific; this is possible with the platform-specific Java API extensions in the case of Android OS, BlackBerry OS and the Symbian OS Series 60 3rd Edition Platform, and via the JNI in the case of the Symbian OS Series 80 Platform and Windows Mobile OS (see also Table 1). The usage of the Factory design pattern for the implementation of the Platform Abstraction Layer automates the choice between platforms and devices in such a way that the Mobile Agent can be implemented as one portable solution.

A number of experiments have been set up in view of the implementation and testing of the proposed framework. The aim is to enable holistic QoE measurement and modeling on a large scale, in mobile Living Lab settings. In this respect, experiments are dedicated to the evaluation of QoE for several mobile applications (mobile web browsing, mobile video and recommendation, mobile VoIP and video conferencing). In an ongoing experiment, the QoE of mobile video streaming is evaluated by a panel of test users in their natural environment: the measurement of technical parameters (QoS: UMTS/Wi-Fi signal strength, jitter, packet loss; quality of video: resolution, bit rate, frame rate) is linked to the subjective, in-situ evaluation of QoE as a multidimensional concept. This evaluation is based on a questionnaire consisting of closed and open-ended questions regarding the user's appreciation of the mobile video, technical aspects, the user's location, social context, activities, feelings, etc. Currently, the Monitoring probe components are implemented for Windows Mobile OS and Android OS. Our experience shows that Android OS offers a wider range of possibilities, e.g. for monitoring network parameters related to online video content (e.g. YouTube) at the client side. Windows Mobile OS has more limitations; e.g. it is not possible to run a network packet capture tool (such as tcpdump [79]) to log IP packets. In general, our aim with these small experiments is to investigate the influence of the mentioned parameters on QoE and to explore several ways of modelling the possible relations between the relevant QoE aspects.

Fig. 4 Software architecture of mobile agent

5 Conclusion and suggestions for further research

This paper has addressed the shift towards more user-centric research in the domain of ICT development, a shift which is also reflected in the rise of new 'user-driven' innovation infrastructures such as Living Labs. It has been argued that such labs represent a logical extension of the controlled research environment (associated with traditional testbeds) to more natural, daily-life research infrastructures. Not surprisingly, these new 'innovation laboratories' offer far-reaching possibilities for true user-centric research. However, although the support for Living Lab research continues to grow at various policy levels, these possibilities largely remain to be explored and expanded. Furthermore, there is a need for tools that enable context and co-creation research in such real-life test environments. In line with the increased importance of the abovementioned user-centric rationale, it has been argued that Quality of Experience has become a key concept. The purpose of our investigation was to contribute to the literature in several ways. Drawing on extensive desk research, we have defined the position of QoE relative to two related concepts, namely Quality of Service (telecommunications domain) and User Experience (the tradition of Human-Computer Interaction). Further, we provided an integrated vision on QoE, taking into account objective, subjective and contextual parameters. In this respect, it was proposed to use the term Quality of Experience as a covering term, focusing on the broad range of aspects that influence the perception and experience of the user when using a particular application (and thus including both QoS and UX elements). We have argued that for the accurate and comprehensive measurement of QoE, a multidimensional and interdisciplinary approach is required. Therefore, we reviewed the relevant quality assessment techniques that are currently being used for the measurement of QoE and attempted to shed light on the limitations they entail.


In order to tackle these limitations, which mainly concern the disregard of the subjective character of QoE in current measurement practices, a number of requirements for more user-centric QoE measurement were discussed. These requirements were incorporated in the main contribution of this paper, namely the proposition of a framework for evaluating QoE in mobile, testbed-oriented Living Lab settings. This proposed framework, which is based on an overview of previous studies on QoE measurement in (semi-)natural settings, focuses on both the what- and the why-dimension of mobile network usage, as it aims to relate human experience dimensions to QoS parameters. Moreover, it fits in with the long-term and user-centric perspective advocated by the Living Lab rationale. The central component of the proposed architecture is the mobile agent, which is composed of three entities (the QoS monitoring entity, the Contextual monitoring entity and the Experience monitoring entity) and which allows the interdisciplinary measurement of Quality of Experience as a multidimensional concept. Notwithstanding the conceptual nature of this contribution, the proposed tool supports extendable and large-scale real-life tests in a mobile Living Lab environment. Not only will such tests enable us to model the relations between the human experience and technical aspects, they will also help to improve the perceived experience of an application and to detect possible future killer applications for specific user segments. Furthermore, the modular approach of the mobile agent offers many possibilities for the development of tailored or extended software tools for measuring the QoE of various types of mobile media.

Acknowledgements This work was supported by the interdisciplinary GR@SP project (Bridging the gap between QoE and QoS), funded by IBBT (Interdisciplinary Institute for BroadBand Technology), a research institute founded by the Flemish Government in 2004 (www.ibbt.be). W. Joseph is a Post-Doctoral Fellow of the FWO-V (Research Foundation—Flanders).

References

1. Crisler K, Turner T, Aftelak A, Visciola M, Steinhage A, Anneroth M et al (2004) Considering the user in the wireless world. IEEE Commun Mag 42:56–62
2. Haddon L, Mante E, Sapio B, Kommonen K-H, Fortunati L, Kant A (eds) (2005) Everyday innovators: researching the role of users in shaping ICTs. Springer, Dordrecht
3. Cabral A (2007) Quality of experience: a conceptual essay. In: Wang W, Wang W (eds) IFIP international federation for information processing, volume 252, integration and innovation. Springer, Boston, pp 193–199
4. De Marez L, De Moor K (2007) The challenge of user- and QoE-centric research and product development in today's ICT environment. Observatorio (OBS*) 1(3):1–22
5. Rickards T (2003) The future of innovation research. In: Shavinina LV (ed) The international handbook on innovation. Pergamon, Elsevier, Oxford, pp 1094–1100

6. Trott P (2003) Innovation and market research. In: Shavinina LV (ed) The international handbook on innovation. Pergamon, Elsevier, Oxford, pp 835–844
7. Deryckere T, Joseph W, Martens L, De Marez L, De Moor K (2008) A software tool to relate technical performance to user experience in a mobile context. Paper presented at WoWMoM 2008 International Symposium on a World of Wireless, Mobile and Multimedia Networks. http://doi.ieeecomputersociety.org/10.1109/WOWMOM.2008.4594902. Accessed 25 October 2008
8. Kumar K (2005) A marriage made in QoE heaven. CED Magazine, July 2005, 37–39
9. Nokia (2004) Quality of Experience (QoE) of mobile services: can it be measured and improved? White paper. http://www.nokia.com/NOKIA_COM_1/About_Nokia/Press/White_Papers/pdf_files/whitepaper_qoe_net.pdf. Accessed 3 October 2007
10. Kilkki K (2008) Quality of experience in communications ecosystem. J Univers Comput Sci 14(5):615–624
11. De Marez L, Verleye G (2004) Innovation diffusion: the need for more accurate consumer insight. Illustration of the PSAP scale as a segmentation instrument. J Target Meas Anal Market 13(1):32–49
12. Veryzer RW, Borja de Mozota B (2005) The impact of user-oriented design on new product development: an examination of fundamental relationships. J Prod Innovat Manag 22(2):128–143
13. Väätäjä H (2008) Factors affecting user experience in mobile systems and services. Proceedings of the 10th international conference on Human-Computer Interaction with mobile devices and services, 551–551
14. Knoche H, Papaleo M, Sasse MA, Vanelli-Coralli A (2007) The kindest cut: enhancing the user experience of mobile TV through adequate zooming. Proceedings of the 15th ACM International conference on Multimedia, 87–96
15. De Koning TC, Veldhoven P, Knoche H, Rooij RE (2007) Of MOS and men: bridging the gap between objective and subjective quality measurements in mobile TV. In: Creutzburg R, Takala JH, Cai J (eds) Multimedia on Mobile Devices 2007: Proceedings of the SPIE, Volume 6507, 65070P
16. Singh K, Schulzrinne H (2005) Peer-to-peer internet telephony using SIP. Proceedings of the international workshop on Network and operating systems support for digital audio and video, 63–68
17. Verkasalo H (2008) Contextual patterns in mobile service usage. J Pers Ubiquit Comput 13(5):331–342
18. Tsai CC, Lee G, Raab F, Norman GJ, Sohn T, Griswold WG, Patrick K (2007) Usability and feasibility of PmEB: a mobile phone application for monitoring real time caloric balance. Mob Netw Appl 12(2–3):173–184
19. Ballon P, Pierson J, Delaere S (2007) Fostering innovation in networked communications: test and experimentation platforms for broadband systems. In: Heilesen SB, Siggaard Jensen S (eds) Designing for networked communications—strategies and development. Hershey (PA), London, pp 137–166
20. Intille SS, Tapia EM, Rondoni J, Beaudin J, Kukla C, Agarwal S et al (2003) Tools for studying behavior and technology in natural settings. In: Dey AK, Schmidt A, McCarthy JF (eds) UbiComp 2003. Springer-Verlag, Berlin, pp 157–174
21. Følstad A (2008) Towards a living lab for the development of online community services. Electron J Virtual Organ Netw 10:47–58
22. Ponce de Leon M, Eriksson M, Balasubramaniam S, Donelly W (2006) Creating a distributed mobile networking testbed environment—through the Living Labs approach. Proceedings of the 2nd International IEEE/Create-Net Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities, 135–139
23. Perkis A, Munkeby S, Hillestad OI (2006) A model for measuring quality of experience. Proceedings of the 7th Nordic Signal Processing Symposium (NORSIG 2006), 189–201

24. Fehnert B, Kosagowsky A (2008) Measuring user experience—complementing qualitative and quantitative assessment. Proceedings of the 10th International conference on Human-Computer Interaction with mobile devices and services, 383–386
25. Schmidt A, Strohbach M, van Laerhoven K, Friday A, Gellersen H-W (2002) Context acquisition based on load sensing. In: Boriello G, Holmquist LE (eds) Lecture notes in computer science 2498. Springer, Berlin, pp 333–350
26. Niitamo V-P, Kulkki S, Eriksson MH (2006) State-of-the-art and good practice in the field of living labs. Proceedings of the 12th International Conference on Concurrent Enterprising: Innovative Products and Services through Collaborative Networks, 349–357
27. Henderson T, Anthony DA (2005) Measuring wireless network usage with the experience sampling method. Proceedings of the First Workshop on Wireless Network Measurements, Trentino, Italy
28. Andrews M, Cao J, McGowan J (2006) Measuring human satisfaction in data networks. Proceedings of the 25th IEEE International Conference on Computer Communications, 1–12
29. Reichl P (2007) From 'Quality-of-Service' and 'Quality-of-Design' to 'Quality-of-Experience': a holistic view on future interactive telecommunication services. Proceedings of the 15th International conference on Software, Telecommunications and Computer Networks, 2007, 1–6
30. Lopez D, Gonzalez F, Bellido L, Alonso A (2006) Adaptive multimedia streaming over IP based on customer oriented metrics. Proceedings of the International Symposium on Computer Networks, 185–191
31. Van Ewijk A, De Vriendt J, Finizola L (2006) Quality of Service for IMS on fixed networks. White paper. http://www1.alcatel-lucent.com/publications/abstract.jhtml?repositoryItem=tcm%3A172-850091635. Accessed 3 October 2007
32. O'Neill TM (2002) Quality of experience and quality of service for IP video conferencing. White paper. http://hive2.hive.packetizer.com/users/h323forum/papers/polycom/QualityOfExperience+ServiceForIPVideo.pdf. Accessed 3 October 2007
33. Siller M, Woods JC (2003) QoS arbitration for improving the QoE in multimedia transmission. Proceedings of the International Conference on Visual Information Engineering, 238–241
34. Empirix (2001) Assuring QoE on next generation networks. White paper. http://www.triple-play-news.com/voip/whitepapers/whitepaper_NGNT_AssuringQoE.pdf. Accessed 3 October 2007
35. Soldani D (2006) Means and methods for collecting and analyzing QoE measurements in wireless networks. Proceedings of the International Symposium on a World of Wireless, Mobile and Multimedia Networks, 531–535
36. Eberle W, Bougard B, Pollin S, Catthoor F (2005) From myth to methodology: cross-layer design for energy-efficient wireless communication. Proceedings of the 42nd Design Automation Conference, 303–308
37. Forlizzi J, Battarbee K (2004) Understanding experience in interactive systems. Proceedings of the DIS04 Conference, 261–268
38. Gaggioli A, Bassi M, Delle Fave A (2003) Quality of experience in virtual environments. In: Riva G, Davide F, Ijsselsteijn WA (eds) Being there: concepts, effects and measurement of user presence in synthetic environments. Ios, Amsterdam, pp 122–136
39. Alben L (1996) Defining the criteria for effective interaction design. Interactions 3(3):11–15
40. Arhippainen L (2003) Capturing user experience for product design. In: Laukkanen S, Sarpola S (eds) Electronic Proceedings of the 26th Information Systems Research Seminar in Scandinavia, 1–10
41. Vyas D, Van Der Veer GC (2005) Experience as 'meaning': creating, communicating and maintaining in real-spaces. Proceedings of the Tenth IFIP TC13 International Conference on Human-Computer Interaction, 1–4
42. Hassenzahl M, Tractinsky N (2006) User experience—a research agenda. Behav Inf Technol 25(2):91–97

43. Blythe MA, Overbeeke K, Monk AF (2004) Funology: from usability to enjoyment (Human-Computer Interaction Series). Kluwer Academic, Dordrecht
44. McNamara N, Kirakowski J (2005) Defining usability: quality of use or quality of experience? Proceedings of the Professional Communication Conference 2005, 200–204
45. Abowd G, Mynatt E, Rodden T (2002) The human experience [of ubiquitous computing]. IEEE Pervasive Comput 1:48–57
46. Roto V (2006) Web browsing on mobile phones—characteristics of user experience. Unpublished doctoral dissertation, Helsinki University of Technology, Helsinki
47. Corrie B, Wong H-Y, Zimmerman T, Marsh S, Patrick AS, Singer J et al (2003) Towards quality of experience in advanced collaborative environments. Unpublished paper. http://www.andrewpatrick.ca/cv/WACE-2003-Corrie-et-al.pdf. Accessed 3 October 2007
48. McNamara N, Kirakowski J (2006) Functionality, usability and user experience: three areas of concern. Interactions 13(6):26–28
49. ITU-T (1994) ITU-T Recommendation E.800: terms and definitions related to quality of service and network performance including dependability. International Telecommunication Union, Geneva
50. Sullivan M, Pratt J, Kortum P (2008) Practical issues in subjective video quality evaluation: human factors vs. psychophysical image quality evaluation. Proceedings of UXTV 2008, 1–4
51. da Silva AP, Varela M, de Souza e Silva E, Leão RM, Rubino G (2008) Quality assessment of interactive voice applications. Comput Netw 52(6):1179–1192
52. Khirman S, Henriksen P (2002) Relationship between quality-of-service and quality-of-experience for public internet service. Unpublished paper. http://www.pamconf.net/2002/Relationship_Between_QoS_and_QoE.pdf. Accessed 3 October 2007
53. Li-yuan L, Wen-an Z, Jun-de S (2006) The research of quality of experience evaluation method in pervasive computing environment. Proceedings of the 1st International Symposium on Pervasive Computing and Applications, 178–182
54. Díaz A, Merino P, Rivas FJ (2008) Customer-centric measurements on mobile phones. Proceedings of the IEEE International Symposium on Consumer Electronics (ISCE 2008), 1–4
55. Wheeler L, Reis H (1991) Self-recording of everyday life events: origins, types, and uses. J Personal 59(3):339–354
56. Bolger N, Davis A, Rafaeli E (2003) Diary methods: capturing life as it is lived. Annu Rev Psychol 54(1):579–616
57. Csikszentmihalyi M, Larson R (1987) Validity and reliability of the experience-sampling method. J Nerv Ment Dis 175(9):526–536
58. Consolvo S, Harrison B, Smith I, Chen M, Everitt K, Froehlich J et al (2007) Conducting in situ evaluations for and with ubiquitous technologies. Int J Hum Comput Interact 22(1–2):103–118
59. Mulder I, Velthausz D, Strating P, ter Hofte GH (2006) Bring the lab to the cities: experiences from two Dutch Living Labs. Proceedings of the e-Social Science Conference. http://www.ncess.ac.uk/events/conference/2006/papers/papers/MulderBringLabToCities.pdf. Accessed 7 October 2007
60. De Moor K, Berte K, De Marez L, Joseph W, Deryckere T, Martens L (2009) User involvement in living lab research: experiences from an interdisciplinary study on future mobile applications. Proceedings of the Third International Seville Seminar on Future-Oriented Technology Analysis, 103–104
61. Majumdar K, Das N (2005) Mobile user tracking using a hybrid neural network. Wirel Netw 11(3):275–284

62. Mäkelä J-P, Pahlavan K (2005) Performance of neural network handoff algorithm under varying mobile velocities. Proceedings of the 2005 Finnish Signal Processing Symposium (FINSIG’05), 42–45
63. Schmitt J, Hollick M, Roos C, Steinmetz R (2008) Adapting the user context in realtime: tailoring online machine learning algorithms to ambient computing. Mobile Netw Appl 13(6):583–598
64. Soldani D, Li M, Cuny R (2006) QoS and QoE Management in UMTS Cellular Systems. Wiley, Chichester
65. Verkasalo H (2005) Handset-based monitoring of mobile customer behavior. Master’s thesis, Networking Laboratory, Department of Electrical and Telecommunications, Helsinki University of Technology, Helsinki
66. Corvaja R, Zanella A, Dossi M, Tontoli A, Zennaro P (2004) Experimental performance of the handover procedure in a WiFi network. Proceedings of the Seventh International Symposium on Wireless Personal Multimedia Communications (WPMC04). http://www.dei.unipd.it/~zanella/PAPER/CR_2004/WPMC04_czdtz_WiFi_cr.pdf. Accessed 7 October 2007
67. Becvar Z, Zelenka J, Bestak R (2006) Comparison of handovers in UMTS and WiMAX. Paper presented at the 6th International Conference ELEKTRO 2006. http://fireworks.intranet.gr/Publications/Fireworks_6CTUPB008a.pdf. Accessed 7 October 2007
68. Hornsby A, Bangash S, Benchimol S, Defee I (2008) An approach to handover between DVB-H and Wi-Fi networks. Proceedings of the 2008 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, 1–8
69. Huang S-M, Wu Q, Lin Y-B, Yeh C-H (2006) SIP mobility and IPv4/IPv6 dual-stack supports in 3G IP multimedia subsystem. Wirel Commun Mob Comput 6(5):543–739
70. Calvagna A, Di Modica G (2004) A user-centric analysis of vertical handovers. Proceedings of the 2nd ACM International Workshop on Wireless Mobile Applications and Services on WLAN Hotspots, 137–146
71. Kim HJ, Lee DH, Lee JM, Lee KH, Lyu W, Choi SG (2008) The QoE evaluation method through the QoS-QoE correlation model. Proceedings of the 2008 Fourth International Conference on Networked Computing and Advanced Information Management, 719–725
72. Jain R, Chiu D, Hawe W (1984) A quantitative measure of fairness and discrimination for resource allocation in shared computer systems. Research report. http://www.cs.wustl.edu/~jain/papers/ftp/fairness.pdf. Accessed 20 November 2008
73. Piamrat K, Ksentini A, Viho C, Bonnin J-M (2008) QoE-based network selection for multimedia users in IEEE 802.11 wireless networks. Proceedings of the 33rd IEEE Conference on Local Computer Networks, 388–394
74. Bharatula NB, Tröster G (2006) On-body context recognition with miniaturized autonomous sensor button. Proceedings of the 4th GMM Workshop on Energieautarke Sensorik, 621–628
75. Stäger M, Lukowicz P, Tröster G (2007) Power and accuracy trade-offs in sound-based context recognition systems. Perv Mob Comput 3(3):300–327
76. De Pessemier T (2008) Proposed architecture and algorithm for personalized advertising on iDTV and mobile devices. IEEE Trans Consum Electron 54(2):709–713
77. Canalys Ltd (2009) Smart phone market shows modest growth in Q3—but Apple and RIM hit record volumes. Press release. http://www.canalys.com/pr/2009/r2009112.htm. Accessed 3 November 2009
78. Cooper JW (1998) The design patterns Java companion. Online book. http://www.patterndepot.com/put/8/DesignJava.PDF. Accessed 20 November 2008
79. http://www.tcpdump.org/. Accessed 20 October 2009

Katrien De Moor holds a Master’s degree in Communication Sciences from Ghent University (Belgium). Currently, Katrien works as a researcher at MICT (Research Group for Media and ICT, www.mict.be), affiliated with the Interdisciplinary Institute for BroadBand Technology (IBBT) and the Department of Communication Sciences of Ghent University. Her research interests include interdisciplinary research on Quality of Experience and Quality of Service in mobile media environments, the evaluation of user-driven innovation techniques in the ICT domain, and advances in Living Lab methodologies. Katrien is preparing a Ph.D. on the measurement of Quality of Experience in mobile, living lab environments. E-mail: [email protected]. Mailing address: MICT-UGent, Korte Meer 7, 9000 Ghent, Belgium.

Istvan Ketyko graduated with an M.Sc. in Computer Science, specializing in telecommunications, from the Budapest University of Technology and Economics, Hungary, in 2008. Currently, he is a Ph.D. student at the Department of Information Technology of Ghent University, affiliated with the Interdisciplinary Institute for BroadBand Technology (IBBT). His research interests include the modeling and analysis of telecommunication systems and the evaluation of Quality of Experience. E-mail: [email protected]. Mailing address: INTEC-UGent, Gaston Crommenlaan 8 bus 201, 9050 Gent, Belgium.

Wout Joseph holds an M.Sc. degree in electrical engineering from Ghent University (2000). He started his career as a research assistant at the Department of Information Technology (INTEC-UGent) and received a Ph.D. degree from Ghent University in 2005. His research focused on the measurement and modeling of electromagnetic fields around base stations for mobile communications, in relation to the health effects of exposure to electromagnetic radiation. Since October 2007, he has worked for IBBT-UGent/INTEC as a Post-Doctoral Fellow of the FWO-V (Research Foundation - Flanders). His professional interests are electromagnetic field exposure assessment, propagation for wireless communication systems, antennas, and calibration. Furthermore, he specializes in wireless performance analysis and QoE. E-mail: [email protected]. Mailing address: INTEC-UGent, Gaston Crommenlaan 8 bus 201, 9050 Gent, Belgium.

Tom Deryckere was born in Ghent, Belgium, on the 25th of August, 1981. In July 2004, he received the M.Sc. degree in electrical engineering with a specialization in micro- and optoelectronics from Ghent University (Belgium). The same year, he started as a research engineer at IBBT / Ghent University in the field of interactive media. His research interests are interactive applications, personalization, recommendation systems, and the evaluation of user experience and Quality of Experience. E-mail: [email protected]. Mailing address: INTEC-UGent, Gaston Crommenlaan 8 bus 201, 9050 Gent, Belgium.

Lieven De Marez holds Master’s degrees in Communication Sciences (1999) and Marketing (2000). He started his career as a research assistant at the Department of Communication Sciences of Ghent University and holds a Ph.D. in Communication Sciences. The main contribution of this work lies in the development of a ‘segmentation forecasting’ tool for prior-to-launch prediction of adoption potential, and of a blueprint for better introduction strategies for ICT innovations in today’s volatile market environment. Currently, Lieven is research director of MICT-IBBT. He also teaches ‘innovation research’ and ‘new communication technologies’ at the Department of Communication Sciences (Ghent University). E-mail: [email protected]. Mailing address: MICT-UGent, Korte Meer 7, 9000 Ghent, Belgium.

Luc Martens was born in Ghent, Belgium, on the 14th of May, 1963. He received the M.Sc. degree in electrical engineering and a Ph.D. degree from Ghent University (Belgium) in July 1986 and December 1990, respectively. From September 1986 to December 1990, he was a research assistant at the Department of Information Technology (INTEC), Ghent University. Since January 1991, he has been a member of the permanent staff of INTEC and is responsible for research on the experimental characterization of the physical layer of telecommunication systems and on smart interactive services. His research group joined the Belgian research institute IBBT (the Interdisciplinary institute for BroadBand Technology) in 2004. E-mail: [email protected]. Mailing address: INTEC-UGent, Gaston Crommenlaan 8 bus 201, 9050 Gent, Belgium.

Gino Verleye holds a Master’s degree in Experimental Psychology and a Ph.D. in Psychometrics. Currently, he teaches statistics and research methodology to social sciences students at Ghent University, Belgium. His research background is in data quality, especially techniques for handling missing data. From an applied research point of view, Gino is interested in closing the gap between qualitative and quantitative research. E-mail: [email protected]. Mailing address: MICT, Korte Meer 7-9-11, 9000 Ghent, Belgium.