Usability Evaluation of Web-Based Collaboration Support Systems: The Case of CoPe_it!

Nikos Karousos1, Spyros Papaloukas1, Nektarios Kostaras1, Michalis Xenos1, Manolis Tzagarakis2, and Nikos Karacapilidis3

1 Software Quality Research Group, Hellenic Open University, 26222 Patras, Greece
{karousos,s.papaluk,nkostaras,xenos}@eap.gr
2 Dept. of Economics, University of Patras, 26504 Patras, Greece
[email protected]
3 IMIS Lab, MEAD, University of Patras, 26504 Patras, Greece
[email protected]
Abstract. Usability is considered a very significant factor in the wide acceptance of software applications. Although usability evaluation can take various forms, the entire evaluation procedure usually follows predefined paths, according to a classification of the common characteristics of software applications. However, contemporary Web 2.0 applications, which aim at both social network development and collaboration support, reveal the need for modifying the settings of the evaluation procedure. This is due to some unique characteristics of these applications, such as the support of both synchronous and asynchronous collaboration, the use of common spaces for working and exchanging information, and the advanced notification and awareness services. This paper explores these applications' particularities with respect to the way the whole usability evaluation procedure is affected, and proposes a composite evaluation technique, based on the development of appropriate heuristics, that is suitable for such cases. The aforementioned issues are elaborated through the case of CoPe_it!, a Web 2.0 tool that facilitates and enhances argumentative collaboration.

Keywords: Collaboration Support Systems; Usability; Evaluation.
1 Introduction

Usability remains a critical issue in both the design and the implementation of any software application with high interactivity. Usable systems have great potential to become widely accepted, while systems with very rich functionality may become useless if they are difficult to use. In order to ensure a high level of usability in software applications, researchers have developed an open set of usability evaluation methods that can be followed in different phases of the software lifecycle and can be
also combined for optimal results. These methods are categorized – according to their characteristics – as analytic and empirical [1], [2], [3]. The selection of the appropriate method (or set of methods) to be applied in the evaluation of a particular system still remains an open research issue. Until now, the most common way to evaluate software has been to first classify it into a predefined software class and then to follow the evaluation procedure that best fits this class (according to empirical evidence). However, recent advances in computing and Internet technologies, together with the advent of the Web 2.0 era, have resulted in the development of online web-based collaboration support tools that cannot be easily classified under the traditional software classes, since they offer a wide set of novel functionalities and diverse visual representations to an open set of potential users. Such tools offer people an unprecedented level of flexibility and convenience in participating in complex collaborative activities such as communication, online debate, distance learning, co-authoring, decision support, mind mapping, shared workspaces, problem solving, etc. The usability of these tools cannot be evaluated by simply following an existing evaluation methodology; the evaluation has to be revised based on (but not limited to) their unique characteristics.

This paper explores the particularities of web-based collaboration support systems, with respect to the way the whole usability evaluation procedure is affected, and proposes a composite evaluation technique that is suitable for such cases. More specifically, Section 2 sketches the most widely accepted usability evaluation methods, classified under two main categories, and discusses the ways they can be combined for the evaluation of applications. Section 3 elaborates on critical issues that affect the settings of the evaluation methodology for contemporary web-based collaboration support systems, while Section 4 presents the proposed evaluation approach for such cases. Finally, Section 5 presents a case study of the proposed methodology; this study concerns CoPe_it! [4], a Web 2.0 tool that facilitates and enhances argumentative collaboration.
2 Usability Evaluation: Methods and Techniques

The term usability is described in the ISO 9241-11 standard [5] as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". Effectiveness is defined as the accuracy and completeness with which users achieve specified goals. Efficiency measures the resources expended in relation to the accuracy and completeness with which users achieve goals. Finally, satisfaction refers to freedom from discomfort and to positive attitudes towards the use of the product. Nielsen [6] further described usability according to five basic parameters, namely: (a) ease and speed of learning of system use, (b) efficiency of use, (c) ease of remembering system use after a certain period of time, (d) reduced number of user errors and easy recovery from them, and (e) subjective satisfaction of users.

The evaluation of usability has three main goals: the assessment of the software's functionality, the assessment of the users' experience during interaction with the interface, and the identification of specific problems of the software. Usability evaluation can be performed using various methods; before turning to them, a small worked example quantifying the three ISO dimensions is given below.
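The following sketch computes the three dimensions from the raw data of a small, invented usability test; the formulas are common operationalizations of the ISO definitions, not anything mandated by the standard itself.

```python
# Illustrative computation of the three ISO 9241-11 dimensions from the raw
# data of a small (invented) usability test.
tasks = [          # (task completed?, seconds spent)
    (True, 42.0),
    (True, 65.5),
    (False, 120.0),
    (True, 38.2),
]
satisfaction_ratings = [4, 5, 3, 4]  # post-task ratings on a 1-5 Likert scale

# Effectiveness: share of tasks completed successfully.
effectiveness = sum(1 for done, _ in tasks if done) / len(tasks)

# Efficiency: goals achieved per unit of time spent (time-based efficiency).
efficiency = sum(1.0 / secs for done, secs in tasks if done) / len(tasks)

# Satisfaction: mean post-task rating.
satisfaction = sum(satisfaction_ratings) / len(satisfaction_ratings)

print(f"effectiveness = {effectiveness:.0%}")
print(f"efficiency    = {efficiency:.4f} goals/second")
print(f"satisfaction  = {satisfaction:.1f} / 5")
```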
The most common categories these methods belong to are the analytic and the empirical [1], [2], [3].

Analytic methods are based either on standards and rules, or on theoretical models that simulate a user's behavior. These methods are often used in a usability laboratory at the specification stage, before the development of prototypes and without the participation of users. This category includes two types of evaluation methods: (i) inspections, which include heuristic evaluation and walkthroughs, and (ii) theoretically-based models, which are used to predict user performance. Heuristic evaluation employs mainly usability experts, but typical users may be involved as well, to identify usability problems. This is achieved with the guidance of heuristics, which are a mixture of rules drawing on common-sense knowledge, usability guidelines and standards. Walkthroughs mainly involve usability experts walking through scenarios with prototypes of the application. Finally, theoretically-based models are used for comparing the efficacy of different interfaces of the same system and the optimal arrangement and location of features on the interface [7]. A main characteristic of analytic evaluation is that users do not need to be present [8].

Empirical methods are based on the observation and evaluation of the behaviour and the characteristics of a prototype or a completed system. These methods can be employed either in a usability laboratory or wherever the system is in full operation. The participants in the evaluation process can be both representative users and usability experts. Empirical methods can be further divided into two main categories: experimental and inquiry. The most commonly employed experimental methods are performance measurement, the thinking aloud protocol and user action logging. Performance measurement provides quantitative measurements of a software system's performance while users execute predefined actions or even complete operations. The thinking aloud protocol focuses on the measurement of the effectiveness of a system and of user satisfaction: users interact with the system while stating aloud their thoughts, opinions, emotions and sentiments regarding it. Finally, user action logging involves techniques that record the actions of users while they interact with a software product; the most common of them are note taking, voice recording, video recording, computer logging and user logging (a minimal sketch of such instrumentation is given at the end of this section).

Finally, inquiry methods focus on the examination of the usability characteristics of a software system by measuring users' opinions. The most popular of them are user questionnaires, user interviews, focus groups and field observation. The use of questionnaires provides valuable feedback and obtains answers to specific questions from a large group of people, especially when the target group is spread across a wide geographical area [7]. Interviews form a structured way of evaluating a software system, in which the researcher is in direct contact with the user; the questions follow a hierarchical structure, through which the user's general opinion of the product is elicited, while more specific quality characteristics are also considered. A focus group is a method in which a group of about 10 users is formed under the supervision of a coordinator, who is in charge of the topics of the conversation. At the end of this conversation, the coordinator gathers the participants' conclusions on the quality of the software product. Finally, in field observation the researcher observes the users at their working place, while they are using and interacting with the software product.

In general, usability evaluation can be performed using methods from the two broad categories outlined above (analytic and empirical). Each of these categories comprises several methods that may be applied independently, in order to evaluate a specific usability aspect of a system. The settings of a complete evaluation procedure, which include the selection of the appropriate methods, the implementation of the evaluation stage and the analysis of the results, usually depend on the type of the software application. Based on the evaluators' experience, the existing infrastructure and the application's main characteristics, these settings may be modified in order to develop a suitable evaluation procedure for a particular application. However, the development of innovative software applications, such as contemporary web-based collaboration support systems, raises the need for a deep pre-evaluation analysis of their characteristics in order to determine an appropriate usability evaluation procedure.
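Returning to the computer-logging technique mentioned above, the following is a minimal sketch of a session logger with which an evaluation team could instrument a test client; all class and field names are our own illustrative choices, not those of any specific logging tool.

```python
import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class ActionRecord:
    user_id: str
    action: str      # e.g. "click", "drag", "submit"
    target: str      # identifier of the UI element involved
    timestamp: float

@dataclass
class SessionLog:
    user_id: str
    task: str
    started: float = field(default_factory=time.time)
    records: list[ActionRecord] = field(default_factory=list)

    def log(self, action: str, target: str) -> None:
        """Append a timestamped record of a single user action."""
        self.records.append(ActionRecord(self.user_id, action, target, time.time()))

    def task_time(self) -> float:
        """Elapsed seconds from session start to the last logged action."""
        return self.records[-1].timestamp - self.started if self.records else 0.0

    def dump(self, path: str) -> None:
        """Persist the session for later analysis by the evaluators."""
        with open(path, "w") as f:
            json.dump([asdict(r) for r in self.records], f, indent=2)

session = SessionLog("user-01", "create a new workspace")
session.log("click", "menu-new-workspace")
session.log("submit", "dialog-workspace-name")
print(f"task took {session.task_time():.2f} s over {len(session.records)} actions")
```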
3 Critical Issues about Usability Evaluation in Contemporary Web-Based Collaboration Support Systems

The selection of evaluation techniques is usually based on the type of the software application. However, contemporary collaboration support systems cannot be easily classified as an ordinary type of application for evaluation purposes. These systems usually cover a wide range of functionalities and offer different visual representations within the same environment. Moreover, the exploitation of Web 2.0 capabilities in such systems has increased their level of complexity, since social networks and different semantics over the same data can be supported. In this context, these systems should be examined with respect to their particularities, towards the extraction of a set of important characteristics that may aid the determination of a suitable evaluation procedure.

The main idea on which collaborative software is based is that such systems should support both individual and team work. Environments are required that can handle and represent different kinds of interactions, while providing intelligent functionality that assists participants in problem solving. Furthermore, as contemporary collaborative software evolves and migrates onto the Internet itself, it contributes to the development of the so-called Web 2.0, bringing a set of web-based collaborative features into corporate networks. These include document sharing and group authoring, group calendars, instant messaging, web conferencing, etc. Apart from the above features, Web 2.0 collaborative applications are nowadays characterized by their ability to work with and manage a large number of participants who organize themselves in social networks. In such cases, issues like awareness, personalization and adaptation also play a critical role in the systems' exploitation; at the same time, they may complicate the usability evaluation process. Below is a non-exhaustive list of critical characteristics of contemporary Web-based collaboration support systems that have to be taken into consideration in the design of an effective evaluation procedure:
• Context of a system's use [9]. Individuals may collaborate while being co-located or geographically dispersed; besides, they may collaborate in a synchronous or an asynchronous mode (not depending on others being around at the same time). The above can affect both the human interface and the types of interactions between participants.
• Individuals' and teams' work. Collaborative work usually aims at the establishment of a solution to a team's problem. For this purpose, individuals may work alone or as part of a team. For each case, the evaluation method should consider the role of each single user within the context of the team's objectives.
• Appropriation. An individual or group adapts a technology to their own particular situation; the technology may be appropriated in a manner completely unintended by the designers [10], [11]. In such cases, the more generic the purpose of a collaborative application, the more diverse the collaboration scenarios it may support. However, the overall evaluation of general-purpose software is not an easy task, since it highly depends on the selection of one or more representative scenarios.
• Awareness. Individuals working together need to be able to gain some level of shared knowledge about each other's activities [12]. Awareness may be achieved with various techniques within and outside the scope of an application's environment, and may involve different technological means (e-mail, mobile notifications, etc.) that have to be considered while evaluating the whole application (a sketch of such an awareness service follows this list).
• Cognitive overhead and information overload [13]. Diverse types of data and knowledge resources may appear during the exchange of numerous ideas about the solution of a specific issue. In such cases, individuals usually have to spend considerable effort to conceptualize the current state of the collaboration and grasp its contents. Specifications for providing multiple projections, scalable filtering and timely processing of the associated large amounts of data should be considered while evaluating the application.
• Social behavior [13]. The representation and visualization of the dynamically changing social structures, relationships and interactions taking place in a collaborative environment with multiple stakeholders are of major importance. The perception and modeling of actors, groups and organizations in the diversity of collaborative contexts are usually supported. What is required is the development and utilization of appropriate mechanisms that perceive given structures in order to extract useful information and enable adaptation.
• Expression of tacit knowledge [13]. A community of people is actually an environment where tacit knowledge (i.e. knowledge that the members do not realize they possess, or knowledge that members cannot express with the means provided) predominantly exists and dynamically evolves. Such knowledge must be efficiently and effectively represented in order to be further exploited in a collaborative environment. However, the subjectivity of such a representation may lead to misunderstandings.
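To make the awareness characteristic referenced above concrete, a notification hub is one shape such services can take: workspace events are fanned out to whatever channels (in-tool toasts, e-mail digests, mobile notifications) are registered. The sketch below is our own illustration; none of the names belong to any particular system.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    workspace: str
    actor: str
    description: str  # e.g. "added argument A12"

class AwarenessHub:
    """Fans workspace events out to every registered channel (illustrative)."""

    def __init__(self) -> None:
        self._channels: Dict[str, List[Callable[[Event], None]]] = {}

    def subscribe(self, workspace: str, channel: Callable[[Event], None]) -> None:
        self._channels.setdefault(workspace, []).append(channel)

    def publish(self, event: Event) -> None:
        for channel in self._channels.get(event.workspace, []):
            channel(event)

# A plain print stands in for a real notification channel here.
hub = AwarenessHub()
hub.subscribe("ws-42", lambda e: print(f"[toast] {e.actor}: {e.description}"))
hub.publish(Event("ws-42", "alice", "added argument A12"))
```

Each such channel then becomes one more item the usability evaluation has to cover.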
Past studies of available practices in usability measurement have revealed a lack of existing methodologies for such Web-based tools [14]. Thus, the adoption of new evaluation methodologies becomes crucial.
4 The Proposed Usability Evaluation Methodology

A set of evaluation methods for measuring the usability of the games Civilization and Second Life was applied in the Software Quality Evaluation Laboratory of the Hellenic Open University (HOU) by the Software Quality Research Group [15-17]. The results and the experience gained have shown that the combination of methods amplifies the progress of the experimental procedure, provided that the conditions under which it is conducted simulate reality adequately and that users can interact and simultaneously express their thoughts in an easy and spontaneous manner [18]. These methods were used to categorize usability problems through the observation of users and to validate them through an integrated experiment. The combined methods were applied both in HOU's laboratory, under the discreet attendance of the usability experts, and at the users' own places.

The development of the proposed methodology consists of four main stages (a sketch of the data handling in the final stage follows the list):

• Analysis of existing scientific studies, identification and classification of usability problems for web-based collaboration systems, and extraction of a set of specifications that deals with both user interface principles and application-specific characteristics.
• Observation of users interacting with the system, while the evaluation expert records the usability problems they may encounter (some of these problems already exist in the list produced in the previous stage). At this stage, known heuristics are extended to more specific ones, according to the particular requirements.
• Description of how usability problems can be resolved through the creation of heuristics. Although some heuristics have been derived from previous studies, they can be adapted to Web-based collaboration support systems. It is, therefore, imperative at this point to categorize the heuristics.
• Usage and validation of the heuristics through an integrated experimental procedure using three different methods: first, heuristic evaluation by usability experts, during which most usability problems of the software are detected (thus indicating the effectiveness of our heuristics); then, verification of the heuristics through a new round of observation and logging combined with the thinking aloud protocol; and, finally, questionnaires adapted to the above heuristics.
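As a concrete sketch of that final stage, the findings of independent expert evaluators can be kept in a simple record structure and merged per heuristic, so that duplicate reports surface and severities can be compared. The field names and the Nielsen-style 0-4 severity scale are our own assumptions for illustration, not something the methodology prescribes.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    evaluator: str
    heuristic: str   # e.g. "H1.5"
    problem: str     # short description of the violation
    severity: int    # 0 (not a problem) .. 4 (usability catastrophe)

def merge_findings(findings: list[Finding]) -> dict[str, list[Finding]]:
    """Group independent experts' findings by heuristic, so that duplicate
    reports surface and per-heuristic severity can be compared."""
    grouped: dict[str, list[Finding]] = defaultdict(list)
    for f in findings:
        grouped[f.heuristic].append(f)
    return dict(grouped)

findings = [
    Finding("expert-1", "H1.1", "no feedback while workspace loads", 3),
    Finding("expert-2", "H1.1", "loading gives no progress indication", 3),
    Finding("expert-2", "H2.3", "no notification on new arguments", 4),
]
for heuristic, fs in merge_findings(findings).items():
    worst = max(f.severity for f in fs)
    print(f"{heuristic}: {len(fs)} report(s), worst severity {worst}")
```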
5 Evaluating CoPe_it!

CoPe_it! is an innovative web-based tool that complies with collaborative practices to provide members of communities with the appropriate means to manage individual and collective knowledge, and to collaborate towards the solution of diverse issues (Figure 1). CoPe_it! achieves this by introducing the notion of incremental formalization of argumentative collaboration, whereby the tool considers semantics as an emergent aspect and gives control over formalization to the user [13]. As the collaboration proceeds, more advanced services become available. Once the collaboration has been formalized to a certain point, CoPe_it! can exhibit an active behavior that facilitates the decision making process. CoPe_it! enables synchronous and asynchronous collaboration. It adopts a spatial metaphor to depict collaboration in a 2-dimensional space and supports the process of information triage [13]. Although CoPe_it! is a tool that can be used in several scenarios – from informal discussions and content structuring and sharing to medical decision support and diplomacy – its usage remains at a low level. Traffic reports (obtained using Google Analytics) show that many users visit CoPe_it!, but only a small percentage of them revisits it. The tool's usability evaluation was expected to give valuable results that would aid its designers in a forthcoming user interface revision.
Fig. 1. Collaboration taking place in a workspace of CoPe_it!
The evaluation of CoPe_it! is based on the proposed methodology, in which a set of heuristics, adapted according to the requirements of the software under evaluation, was developed. Usability experts applying the heuristic evaluation method have detected a number of usability errors. The results of the heuristic evaluation will be validated and extended in the future through experimental and inquiry evaluation methods such as user logging, the thinking aloud protocol and questionnaires. More precisely, the idea was that heuristics can be developed for specific software categories, such as Web-based collaboration support systems. The validation of our heuristics can be performed by evaluating CoPe_it! and by developing principles that describe the usability problems that may occur.
Table 1. Usability Heuristics

H1.1. Visibility of system status: The system should always keep users informed about what is going on through appropriate feedback within reasonable time.
H1.2. Match between system and the real world: The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
H1.3. User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
H1.4. Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
H1.5. Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
H1.6. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
H1.7. Flexibility and efficiency of use: Accelerators – unseen by the novice user – may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
H1.8. Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
H1.9. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem and constructively suggest a solution.
H1.10. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Our methodology suggests the development of heuristics adapted according to both Nielsen's approach [6] (Table 1) and the specific particularities of Web-based collaborative systems (Table 2). This was done by studying usability problems reported in existing scientific surveys, by exploiting the experience gained during the development of this kind of heuristics for specific software [16], [19], and through the observation of users by usability experts. The heuristics produced are classified into two categories, which concern: (i) the user interface: a set of usability heuristics [8] derived from a factor analysis of 249 usability problems was used (see Table 1); (ii) the characteristics of Web-based collaboration support systems: heuristics derived from the characteristics analyzed in Section 3, presented in Table 2. Furthermore, in the case of CoPe_it!, additional heuristics have been included to address the argumentation-related features and functionality.

Table 2. Heuristics derived from the system's particularities

H2.1. Context of a system's use: The system may support both synchronous and asynchronous collaboration modes, as well as co-located and distributed cooperation.
H2.2. Individuals' and teams' work: Individuals' work should be considered by the system within the context of a team's work.
H2.3. Awareness: The system should provide a variety of awareness services in order to keep the user informed about the overall status of the collaboration.
H2.4. Appropriation: Users should have the ability to adapt their environment to best fit their particular use case.
H2.5. Cognitive overhead and information overload: The system should support the provision of multiple projections, scalable filtering and timely processing of the associated large amounts of data.
H2.6. Social behavior: The system should represent and visualize dynamically changing social structures, relationships and interactions, and should also perceive given user structures in order to extract useful information and enable adaptation.
H2.7. Expression of tacit knowledge: Users should be able to efficiently and effectively discover and represent tacit knowledge, as well as to convey and understand semantics added by other individuals or teams.
H2.8. Argumentation: Users should be able to express their thoughts as arguments in a well-formed discussion aimed at problem solving.
H2.9. Decision support: The system should aid users in problem solving through decision support mechanisms.
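The two heuristic sets lend themselves to a machine-readable checklist from which per-team evaluation sessions can be generated. The encoding below is our own, abbreviated illustration, not an artifact of CoPe_it! or of the methodology; it merely matches the two-team split described next.

```python
# Hypothetical encoding of Tables 1 and 2 as a checklist; entries abbreviated.
HEURISTICS = {
    "H1": {  # user-interface heuristics, after Nielsen (Table 1)
        "H1.1": "Visibility of system status",
        "H1.2": "Match between system and the real world",
        # ... H1.3 through H1.10 continue likewise
    },
    "H2": {  # collaboration-specific heuristics (Table 2)
        "H2.1": "Context of a system's use",
        "H2.3": "Awareness",
        "H2.8": "Argumentation",
        # ... remaining H2 entries continue likewise
    },
}

def checklist_for(category: str) -> dict[str, str]:
    """Return the subset of heuristics one team of experts evaluates:
    user-interface experts take H1, collaboration experts take H2."""
    return HEURISTICS[category]

print(checklist_for("H2"))
```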
The next stage of our methodology concerns the heuristic evaluation. It is worth mentioning that the evaluation will take place in two different stages, one for each of the two tables, conducted by different teams of evaluation experts: for the heuristics of Table 1, user interface experts will be employed, while for those of Table 2, experts on Web-based collaborative systems will be involved. Based on the initial results of the evaluation, a number of usage scenarios will be created; these scenarios will guide users toward potential usability problems. Next, the scenarios will be given to the users, in order to observe and record their actions. In this way, it will be possible to confirm both the initial results of the heuristic evaluation and the validity of the heuristics derived from the characteristics of web-based collaboration support systems. The results of the observation are expected to uncover usability flaws which may not have been detected in the first stage of the evaluation. In order to achieve this task, specialized software that records the activity on the computer screen will be used. This software will allow us to record the users' interactions, facial expressions and verbal reactions when the thinking aloud protocol is used.
In the final stage of the proposed methodology, a questionnaire-based form will be offered to users, in order to gather specific comments about their experience; a sketch of how questionnaire items can be mapped back onto the heuristics is given at the end of this section. This is expected to provide additional confirmation of the results of the heuristic evaluation. As the evaluation takes place, the errors produced against the first table's heuristics will highlight the need for corrections in the user interface, while the second table's heuristics will bring out functional errors that may have occurred in the earlier stages (design phase) of the system's development. Three main issues were pointed out during the first stages of the evaluation process of CoPe_it!:

• The usage of heuristics in the usability evaluation of such tools seems promising, since the evaluation turned out to be focused on the main problems of the tool, while users may later bring up new, unnoticed problems.
• General-purpose collaboration tools are difficult to evaluate using a single usage scenario. Unaware users may choose the wrong functionality for their tasks and fail to reach their target. Such systems have to be evaluated using a plethora of usage scenarios.
• Observing user-machine interaction requires a rich infrastructure that can support the parallel recording of multiple user workstations. The gap between the theory and the implementation of such a method widens when synchronous collaboration is taking place.
Finally, the usability experts observed some crucial issues while using the abovementioned heuristics:
• Heuristics may enable evaluators to identify problems that they would have otherwise failed to notice.
• It is easy to use separate heuristics for each category of usability errors; in this work, the separation concerns the user interface and the characteristics of Web-based collaboration support systems.
• Flexibility in the determination of heuristics may result in a high level of adaptation of the evaluation procedure to the application's particularities.
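As for the questionnaire stage mentioned above, the following hedged sketch maps questionnaire items (e.g. 1-5 Likert ratings) onto the heuristics they probe, so that questionnaire scores can corroborate or contradict the heuristic evaluation. The item texts and the mapping are invented for illustration.

```python
# Illustrative mapping of post-test questionnaire items onto heuristic ids.
ITEM_TO_HEURISTIC = {
    "I always knew what the system was doing.": "H1.1",
    "I was notified when collaborators changed the workspace.": "H2.3",
    "I could phrase my position as an argument easily.": "H2.8",
}

def scores_per_heuristic(responses: list[dict[str, int]]) -> dict[str, float]:
    """Average each item's ratings and attribute them to its heuristic."""
    totals: dict[str, list[int]] = {}
    for response in responses:
        for item, rating in response.items():
            totals.setdefault(ITEM_TO_HEURISTIC[item], []).append(rating)
    return {h: sum(r) / len(r) for h, r in totals.items()}

responses = [
    {"I always knew what the system was doing.": 2,
     "I was notified when collaborators changed the workspace.": 1,
     "I could phrase my position as an argument easily.": 4},
]
# Low H1.1/H2.3 scores would corroborate the corresponding expert findings.
print(scores_per_heuristic(responses))
```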
6 Conclusion

This paper presents an innovative usability evaluation technique that is based on the combination of existing evaluation methods and takes into account the particularities of contemporary Web-based collaboration support systems. For such systems, which are difficult to classify under a predefined software application class, the application of a traditional evaluation procedure may not be effective. The usage of heuristics that concern both the user interface and the application's particularities appears suitable for applications with complicated functionality. This is due to the flexible determination of heuristics, which makes the entire evaluation process adaptable and can also help the evaluators (both experts and users) focus on critical application problems.
References

1. Avouris, N.: Human Computer Interaction. Hellenic Open University Publications (2003) (in Greek)
2. Crosby, P.: Quality is Still Free. McGraw-Hill, New York (1996)
3. Lindgaard, G.: Usability Testing and System Evaluation: A Guide for Designing Useful Computer Systems. Chapman and Hall, London (1994)
4. CoPe_it!, http://copeit.cti.gr
5. ISO 9241-11: Ergonomic Requirements for Office Work with Visual Display Terminals – Part 11: Guidance on Usability (1998)
6. Nielsen, J.: Usability Engineering. Academic Press, London (1993)
7. Sharp, H., Rogers, Y., Preece, J.: Interaction Design: Beyond Human-Computer Interaction, 2nd edn. Wiley, Chichester (2007)
8. Nielsen, J., Mack, R.L.: Usability Inspection Methods. John Wiley & Sons, New York (1994)
9. Shen, H.H., Dewan, P.: Access control for collaborative environments. In: Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work, pp. 51–58. ACM Press, New York (1992), http://portal.acm.org/citation.cfm?id=143461
10. Tang, J.C., Isaacs, E.A., Rua, M.: Supporting distributed groups with a Montage of lightweight interactions. In: Proceedings of the 1994 ACM Conference on Computer-Supported Cooperative Work, pp. 23–34. ACM Press, New York (1994), http://portal.acm.org/citation.cfm?id=192861&dl=GUIDE
11. Neuwirth, C.M., Kaufer, D.S., Chandhok, R., Morris, J.H.: Issues in the design of computer support for co-authoring and commenting. In: Proceedings of the 1990 ACM Conference on Computer-Supported Cooperative Work, pp. 183–195. ACM Press, New York (1990), http://portal.acm.org/citation.cfm?id=99354
12. Patterson, J.F., Hill, R.D., Rohall, S.L., Meeks, S.W.: Rendezvous: an architecture for synchronous multi-user applications. In: Proceedings of the 1990 ACM Conference on Computer-Supported Cooperative Work, pp. 317–328. ACM Press, New York (1990)
13. Karacapilidis, N., Tzagarakis, M., Karousos, N., Gkotsis, G., Kallistros, V., Christodoulou, S., Mettouris, C., Nousia, D.: Tackling cognitively-complex collaboration with CoPe_it! International Journal of Web-Based Learning and Teaching Technologies 4(3), 22–38 (2009)
14. Hornbæk, K.: Current practice in measuring usability: Challenges to usability studies and research. International Journal of Human-Computer Studies 64(2), 79–102 (2006)
15. Papaloukas, S., Xenos, M.: Usability and Education of Games through Combined Assessment Methods. In: Proceedings of the 1st ACM International Conference on Pervasive Technologies Related to Assistive Environments (PETRA 2008), Athens, Greece, July 15–19 (2008)
16. Papaloukas, S., Xenos, M.: Enhanced socializing through the usability of a videogame virtual environment: a case study on Second Life and SimSafety. Technical Report 30-0110, Hellenic Open University (2010)
17. Software Quality Research Group, Hellenic Open University (2009), http://quality.eap.gr
18. Xenos, M., Papaloukas, S., Kostaras, N.: Games' Usability and Learning – The Civilization IV Paradigm. In: Proceedings of the IADIS Game and Entertainment Technologies Conference (GET 2009), Algarve, Portugal, June 17–19, pp. 3–10 (2009)
19. Papaloukas, S., Patriarcheas, K., Xenos, M.: Usability Assessment Heuristics in New Genre Videogames. In: Proceedings of the 13th Panhellenic Conference on Informatics (PCI 2009), Corfu, Greece, September 10–12, pp. 202–206. IEEE Press, Los Alamitos (2009)