IEEE MELECON 2004, May 12-15, 2004, Dubrovnik, Croatia
Evaluation of Page Design Concepts of a Web-based Authoring Shell

Andrina Granić*, Vlado Glavinić** and Lada Maleš*
* Faculty of Natural Sciences, Mathematics and Education, University of Split, Croatia
** Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
{andrina.granic, lada.males}@pmfst.hr, [email protected]

0-7803-8271-4/04/$20.00 ©2004 IEEE

Abstract — The inclusion of intelligence in computer-based learning and teaching systems, along with the employment of the Internet and the World Wide Web, has led to the development of Web-oriented educational systems like Web-based authoring shells. In order to redesign a shell's interface, usability evaluation, as a distinct validation process, is required. The methodology applied for the evaluation of a Web-based authoring shell's page design concepts, along with the achieved results, is elaborated in this paper.

I. BACKGROUND

Improving the usability of computer systems is perhaps the most important goal of current research in the field of human-computer interaction (HCI). However, in spite of the great progress in the field, usability restrictions still decrease the effectiveness and efficiency of the majority of user interfaces, as well as of whole systems. Namely, neither comprehensive functional/task analysis and specification nor usability guidelines by themselves can guarantee that a usable system will be developed. Usability evaluation is required as a validation process by which the HCI characteristics of a system are measured and weaknesses are identified for correction, e.g. [1]. On the other hand, the last grand milestone in educational technology was the introduction of the Internet and the World Wide Web, which seems to radically alter the way humans teach and learn, e.g. [2]. The educational community cannot disregard the fact that the Web represents information that can be disseminated worldwide and accessed in minimum time. As Web-based distance education technologies may improve education and support totally new educational systems, Web-based education is at present a hot research and development area [3]. This goes along with contemporary efforts to include an increasing level of intelligence in computer-supported learning and teaching systems, which has led to the development of intelligent tutoring systems (ITSs), whose principal operating paradigm is the imitation of human tutor capabilities [4]. In order to further ease and automate the preparation of specialized ITSs, and hence to cover a variety of different domains of interest, ITS generators were developed, usually denoted as authoring shells (ASs) [5]. These shells can be interpreted as supporting "programming" by being parameterized with particular domain knowledge.

It should be noted, however, that starting already from the early days of development, inadequate consideration has been given to the ITSs/ASs' interface, which still remains the Achilles heel of most systems [6]. Our research is focused on user-centered, usable user interfaces for intelligent computerized educational systems like authoring shells. In order to develop an advanced Web-based authoring shell (Web-based AS) that can offer some amount of adaptivity and intelligence, we enhance the operating capabilities of an authoring shell [7] and its distributed version [8], already used in the process of learning and teaching, with adaptive and "Web-based" features. The Web-based AS's usability issues, in particular those concerning usability evaluation of the shell's Web-based design, are elaborated in this paper. The methodology applied for the evaluation of page design concepts, along with the results thus obtained, is described in the following.
II. EVALUATING DESIGN CONCEPTS

Methodologies for building usable computer systems have been introduced and refined over the past fifteen or so years within the discipline of HCI. These methodologies are intended to provide usable and functional computer systems [9], making a system almost transparent and enabling end users to fully concentrate on their work. Hence, HCI principles include an early and consistent focus on end users and their tasks, empirical measurement of system usage, as well as iterative development. As one of the key system features [1], usability is mainly concerned with making an interactive system easy to learn and easy to use. In the particular area of ITSs/ASs this means providing harmony among the four components of any work situation: user, task, environment, and the ITS or the shell itself. A major reason for the poor usability of most interfaces in general, and of the majority of present-day ITS/AS user interfaces in particular, is the lack of understanding of the process by which a usable interaction between users and machines is developed. Studies show that redesigning user interfaces on the basis of user testing (measurement of the interaction between users and computer systems) and iterating can substantially improve usability [10], because usability can only be meaningfully measured during task performance [11]. Therefore, the most promising approach to the realization of usable systems is to iterate design and usability evaluation until a satisfactory solution is achieved [10], [12], [13].
A. Evaluating Web-based Design Concepts

Although usability engineering has come to play an increasingly important role in conventional software development, it is rarely part of Web-based system development, e.g. [14]. While some Web style guidelines have appeared to aid system designers, e.g. [15], [16], [17], employing usability guidelines by themselves does not guarantee the development of a usable system. Usability evaluation, as a distinct validation process, is required. It is important to point out that Web-based systems have both significant similarities to and significant divergences from other software systems, which must be taken into account when performing usability testing. The particular challenges of Web development include, on the one hand, a highly diverse user population that is non-trivial to predict, as well as a highly diverse set of end-user computer configurations (including hardware, systems software and browsers), and on the other hand a wide disparity in connectivity speed and bandwidth [14], [18]. Conversely, there are also similarities between designing for the Web and traditional user interface design – both are interactive software designs. Once Web-based system creation is seen as software development, its design life cycle is identical to that of traditional software and includes the common phases of requirements gathering, analysis, design, implementation, evaluation and deployment. Consequently, the fundamental challenge remains – how to identify usability shortcomings before releasing a new system, or in the early stages of a redesign, when changes can still be made relatively easily and cheaply. There are two different aspects of Web design in general, and of a Web-based authoring shell as well, along with corresponding usability evaluation methods to elicit information on these aspects, e.g. [17].

Usability evaluation methods that inspect a site as a whole define site-level usability: they evaluate the overall site structure and the navigation flow between pages (how well everything works together), including site navigation, page layout, overall writing, as well as graphical style. On the other hand, usability methods that evaluate the design of each individual page define page-level usability, because they deal with the ease of understanding links and headlines, the relevance of graphics and icons used, the inclusion of useful information and the exclusion of useless one. In order to elicit information on the above design aspects, there are many ways to evaluate the usability of a Web-based system design [1], [15], [17], [19], [20], [21]: (i) heuristics, i.e. design principles, can be used by experts to judge usability, (ii) benchmarking can be used to compare one Web site with another or against a set of standards, (iii) prototyping can be used to quickly and cheaply develop a mock site that can be shown to users before the real site is launched, (iv) a Web site can be evaluated against a checklist of usability items, or (v) users can participate in focus groups or in controlled laboratory sessions in order to provide feedback on the usability of the site.

B. Our Experience with Usability Evaluation

Our experience at the Faculty of Natural Sciences, Mathematics and Education, University of Split indicates that useful usability evaluation can be performed quite easily and quickly, at virtually no cost other than the employees' time. The point is that any kind of usability evaluation, like most methodological process improvements, will improve the final version of the system, as long as its results provide designers with appropriate feedback on which further improvements can be based. Firstly, we conducted usability evaluations of the on-site version of the authoring shell [7] already used in the process of learning and teaching at our Faculty. Considering different methods of usability evaluation, having in mind that usability can only be meaningfully measured during task performance [11] and that it is better to perform any kind of usability measurement than no testing at all, e.g. [1], we selected an approach comprising formal user testing during the users' walkthrough of an AS interface, guided by a set of predefined steps, e.g. [22]. Test users were given actual tasks under conditions as close as possible to actual AS usage. Such an approach to the authoring shell's usability evaluation was based on criteria expressed in terms of objective performance measures of system use, as well as in terms of the users' subjective assessment. Scenario-based usability evaluation enabled not only setting quantitative goals of execution before the evaluation was performed, but also specifying operationally defined criteria for success [ibid.]. The results obtained through such usability evaluation were subsequently used for determining the interface's strengths and weaknesses, hence furnishing a direction for AS interface design improvement. Presently, in order to develop an advanced Web-based AS, we enhance the operating capabilities of an existing authoring shell with Web-based features. Consequently, because designing for the Web is different from designing traditional computer system interfaces, we need to determine an appropriate methodology for Web-based design testing.

The evaluation of Web page design concepts elaborated in this paper is just one part of a comprehensive approach to usability evaluation of the advanced Web-based AS.

III. EVALUATING WEB PAGE DESIGN CONCEPTS

It is a well-known fact that users evaluate the usability of any interactive computer system in terms of the quality of its user interface. Specifically, in the field of computer-supported learning and teaching, a literature review reveals that poor interface design can prevent students from learning [23], and also admits that communication among student, teacher and knowledge usually remains the Achilles heel of most ITSs/ASs [6]. Another reason for this emphasis on user interface design is the interpretation of a collection of Web pages as a software system. Namely, a Web-based AS can be considered a software system and a Web site at the same time, because "the Web is delivery medium, content provider and subject matter all in one" [24]. As for any well-designed software system or Web site, a good, usable user interface design is crucial. Accordingly, considering the fact that redesigning interfaces on the basis of user testing and iterating can substantially improve usability, and being aware at the same time that the visual design of the current page is one of the primary factors in the AS's poor usability, we applied a method for testing page-level usability.
In order to evaluate the page design concepts of a Web-based AS and to identify the extent to which visual design assists users in locating particular elements that might be contained on a Web page, we used a usability test method described by Tullis [19]. To test whether the layout itself helped find the page elements, all text on the page was "greeked", i.e. all words were replaced with meaningless nonsense (sometimes called mumble text). This meant that the users had to rely on the communicative aspects of each design in order to perform their test tasks.

A. Methodology

When working on the redesign of our Web-based AS we wanted to evaluate each aspect of the design effort. Screenshots of each design were tested to ensure that the layout and the overall page design would help users to identify the diverse elements of the page (e.g. intranet identifier/logo, navigation labels, last updated) and to "interpret" them properly. Because good design should "guide the user's eye" [ibid.], designs providing basic page layouts were developed and subjected to usability evaluation. Although we were aware of a range of design problems that needed to be addressed (there was too much text on the current AS's home page, with a small font size; local navigation was not highly visible and often overlooked; there was also a distracting moving image), we envisaged how the problems might be solved and detailed this in the visual design of the respective pages. However, we wanted to verify that the proposed visual page design concepts would in fact resolve the issues without introducing new usability problems. Therefore we decided to explicitly test to what extent users could identify various more or less "standard" elements represented in each "greeked" visual design submitted to them. It is important to point out that the element intranet identifier/logo was not altered. Moreover, in order to obtain complete feedback, we asked the test users to subjectively assess each of the designs on a rating scale. When performing the usability evaluation we took into account the fact that the best results come from testing no more than 5 test users, because they can find 85% of the usability problems [25]. Seven employees of our Faculty participated in the evaluation thus conducted. They hold quite diverse positions, from teaching to technical staff, and most of them use the Web at least once a day.

Two sets of usability tests with three design concepts (see Figure 1 and Figure 2) were conducted – one for each of the page levels for which a design concept was requested: (i) the home page, i.e. the login page of the Web-based AS, and (ii) the page with a main menu for each of the AS's users (expert, teacher, student and administrator). Each set comprised three designs offered in a random order to the test users. After a brief explanation of the nature and purpose of the evaluation, test users were asked to perform the following tasks for each design:
• measure of performance: given a list of seven standard page elements (see next section), test users were asked to identify and draw labeled blocks around the parts of the color printout that matched each one of the listed elements; blocks could not overlap and could not be nested; if a test user thought that an element was missing from the page, it had to be marked as "not there", and
• subjective judgement: test users were asked to subjectively rate the designs in order of their preference, giving three points to their first choice, two points to the second and one point to the design they liked the least.

B. Results

As in the Tullis study, we were quite generous in determining whether test users had successfully completed a task. Namely, if their selection included the correct page element, it was considered correct even if it also included incorrect elements. Moreover, in some designs a specific page element was placed more than once (e.g. help), and a test user's selection was considered correct even if she/he identified only one appearance of a particular element.
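The generous correctness criteria just described (a selection counts if it includes the correct element, one instance of a repeated element suffices, and an absent element must be marked "not there") can be sketched as follows. The data layout below is a hypothetical illustration, not the authors' actual scoring instrument:

```python
# Seven standard elements of the home page designs under test.
ELEMENTS = ["logo", "page title", "system login", "main content",
            "person responsible", "last update", "help"]

def element_correct(marked_regions, actual_regions):
    """True if a user's selection for one element counts as correct:
    - element absent from the design: the user must have marked it
      "not there" (represented here as an empty set of marked regions);
    - element present (possibly more than once): marking any one of its
      instances suffices, even if wrong regions were marked as well."""
    if not actual_regions:
        return not marked_regions
    return bool(set(marked_regions) & set(actual_regions))

def percent_correct(user_selections, actual_regions):
    """Fraction of test users who correctly identified the element."""
    hits = sum(element_correct(sel, actual_regions)
               for sel in user_selections)
    return hits / len(user_selections)

# Example: 'help' appears twice (regions 3 and 9); marking either counts.
selections = [{3}, {9}, {3, 5}, set(), {7}, {9}, {3}]   # 7 test users
print(f"help: {percent_correct(selections, {3, 9}):.0%} correct")
```

Under these criteria the third user's selection {3, 5} still counts, because it includes one genuine instance of the element.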
Figure 1. Three different concepts for the design of the Web-based AS home page: a) Design A, b) Design B, c) Design C
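The five-test-user claim cited in the methodology [25] follows from Nielsen's cumulative problem-discovery model; a quick check, assuming the commonly reported per-user detection rate of about 31% (an assumption from the cited literature, not a figure measured in this study):

```python
def problems_found(n_users, detection_rate=0.31):
    """Expected fraction of usability problems uncovered by n_users,
    assuming each user independently finds any given problem with the
    stated per-user detection rate (Nielsen's discovery model)."""
    return 1 - (1 - detection_rate) ** n_users

for n in (1, 3, 5, 7):
    print(f"{n} test users: {problems_found(n):.0%} of problems found")
```

With five users the expression evaluates to roughly 0.84, i.e. the "85% of the usability problems" rule of thumb the methodology relies on.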
1) Results for the Design of a Home Page

In order to obtain the measure of performance, test users were asked to identify and draw labeled blocks around the parts of the color printout of the home page that corresponded to each one of the listed elements. The list of seven standard page elements for the evaluation of the home page included: intranet identifier/logo, page title, system login, main content for page, person responsible for page, last update and help. The percentage of correct identification of the seven standard page elements for each of the three home page designs is depicted in Figure 3. The design elements intranet identifier/logo (intentionally placed in the same location in each design), page title and system's login were identified correctly by all users in each design. As a Web-based AS home page should show the information on the shell itself in a noticeable place, due to its purpose of informing potential users about the reached site, significantly better results for the element main content for page were achieved in designs B and C. Because of the quite high percentage of incorrect identification of the element person responsible for page in designs B and C, we concluded that this element should be placed at the bottom of the page, where users expect it. The same conclusion can be drawn for the element last update: since in design B this element was not centered at the bottom of the page, its identification results were the worst. As the page element help had multiple instances in each design, the high percentage of correct answers shown in Figure 3 indicates the correct identification, per page design, of at least one of them. In order to obtain a subjective judgement of the three proposed designs, test users were asked to rate them in order of their preference. Overall, design C performed best and was the first choice of 71 percent of the test users. Taking into account the above results, design C appears to be the best in terms of performance as well as in terms of preference.

Figure 2. Three different concepts for the design of the Web-based AS main menu page: a) Design A, b) Design B, c) Design C

Figure 3. Percentage of correct identification of the seven standard page elements (lg: logo; pt: page title; sl: system's login; mc: main content for page; prp: person responsible for page; lu: last update; h: help) for each of the three home page designs (A, B, C)

2) Results for the Design of a Main Menu Page

Results obtained when measuring the performance of page elements' identification in the main menu page were
analyzed in the same way as above. The list of seven standard page elements for the evaluation of the main menu page was slightly different from the previous one and comprised: intranet identifier/logo, page title, main content selection, site local navigation, person responsible for page, last update and help. The percentage of correct identification of the seven standard page elements for each of the three main menu page designs is shown in Figure 4. The page element intranet identifier/logo was successfully identified in each design by all test users; it is important to point out that this element was intentionally placed at the same location in each design. A high percentage of correct recognition of the elements page title and main content selection was also obtained. Site local navigation identification was relatively high, with the same percentage for each design regardless of its placement. According to the obtained performance results, as well as those for the location of the elements person responsible for page and last update, we reached the same conclusion as for the home page. On the other hand, significantly poor results were obtained for the page element help, although it was present more than once in each design and the selection was considered correct even if test users identified only one appearance. Such poor results were, however, the consequence of a possible mistake in the design, as this element was generally confused with an additional page element otherwise not included in the list of standard ones. Test users' subjective judgement of the three proposed designs for the AS's main menu page was obtained through preference rating. As with the previous set of designs, design C once again performed best, with the same percentage, as the test users' first choice. Considering the above results, it can be concluded that design C once more came out best, both in terms of measure of performance and in terms of subjective judgement.
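The preference rating used in both test sets (three points for the first choice, two for the second, one for the least liked design) aggregates straightforwardly. The rankings below are hypothetical, chosen only to reproduce the reported 71 percent (5 of 7 test users) first-choice share for design C:

```python
from collections import Counter

def aggregate_preferences(rankings):
    """rankings: one list per test user, designs ordered best to worst.
    Returns total points (3/2/1 per rank) and first-choice counts."""
    points, firsts = Counter(), Counter()
    for order in rankings:
        for pts, design in zip((3, 2, 1), order):
            points[design] += pts
        firsts[order[0]] += 1
    return points, firsts

# Hypothetical rankings for the seven test users (5/7 ~ 71% prefer C).
rankings = [["C", "A", "B"]] * 5 + [["A", "C", "B"], ["B", "C", "A"]]
points, firsts = aggregate_preferences(rankings)
print(points.most_common())    # design C accumulates the most points
print(f"design C first choice: {firsts['C'] / len(rankings):.0%}")
```

Note that the point totals and the first-choice share can in principle disagree; here, as in the study, both favour design C.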
C. Final Design

The final designs of both the Web-based authoring shell's home page and its main menu page are the result of analyzing what seemed to have worked well in each proposed design, in which all text had been replaced with nonsense, so that in order to perform their test tasks the test users had to rely on the communicative aspects of the design itself. According to the evaluation results obtained in terms of test users' performance measurements, as well as in terms of subjective judgement, in the two sets of usability tests, we designed the shell's home page and main menu page as shown in Figure 5. Such final page designs of the Web-based authoring shell should resolve the addressed design problems without introducing new usability issues.
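The "greeking" step referred to above, replacing every word with same-length nonsense so that only the visual design communicates, can be sketched as a simple text transform. This is a minimal illustration; the actual mumble text used in the tests is not specified in the paper:

```python
import random
import re
import string

def greek(text, seed=42):
    """Replace each word with random same-length lowercase nonsense
    ('mumble text'), preserving digits, punctuation and layout."""
    rng = random.Random(seed)  # fixed seed keeps the output reproducible
    def mumble(match):
        return "".join(rng.choice(string.ascii_lowercase)
                       for _ in match.group())
    return re.sub(r"[A-Za-z]+", mumble, text)

print(greek("Login to the Authoring Shell -- last update: May 2004"))
```

Because word lengths, spacing and punctuation survive the transform, the overall "shape" of each page element remains, while its meaning does not.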
Figure 4. Percentage of correct identification of the seven standard page elements (lg: logo; pt: page title; mcs: main content selection; sln: site local navigation; prp: person responsible for page; lu: last update; h: help) for each of the three main menu page designs (A, B, C)

Figure 5. Final page designs of the Web-based authoring shell: a) Home page, b) Main menu page

IV. CONCLUSION

Over the past decade the field of education has witnessed the introduction of a new and revolutionary technology – the Internet and the World Wide Web – which seems to radically alter the way humans teach and learn. Web-oriented education constitutes a major direction in
current research in the field of educational technology, with the challenging goal of developing advanced educational systems like Web-based ASs, which enable the automated generation of emulators of a human teacher in the process of learning and teaching. As for any well-designed software system or Web site, a good, usable user interface is crucial for a Web-based AS, too. Therefore usability evaluation is required as a validation process by which the human-computer interaction characteristics of a system are measured and weaknesses are identified for correction. The Web-based AS's usability issues, in particular those concerning usability evaluation of the shell's Web-based design, along with the methodology applied, are elaborated in this paper. In order to evaluate the page design concepts of the Web-based AS and to test whether the layout itself helped find the page elements, all text on the pages was "greeked". This meant that the test users had to rely on the communicative aspects of each design in order to perform their test tasks. The achieved results matched our initial predictions about design elements that might lead to usability problems. According to the evaluation results obtained in terms of test users' performance measurements, along with the results acquired in terms of subjective judgement, we designed the Web-based authoring shell's final home page, as well as its main menu page.
ACKNOWLEDGMENT

This work has been carried out within the project 0036033 Semantic Web as Information Infrastructure Enabler and the project TP-02/0177-01 Web-based Intelligent Authoring Shell, both funded by the Ministry of Science and Technology of the Republic of Croatia.

REFERENCES

[1] J. Nielsen, Usability Engineering, Academic Press, Boston, MA, 1993.
[2] P. Avgeriou and I. Kassios, "Advanced learning technologies in the new instructional paradigm", Proceedings of IES-2000 – The 2nd International Scientific and Methodic Conference "Internet-Education-Science", Vinnitsa, Ukraine, October 2000.
[3] P. Brusilovsky, "Adaptive educational systems on the World Wide Web: a review of available technologies", in G. Ayala (Ed.): Proceedings of Workshop "Current Trends and Applications of Artificial Intelligence in Education" at the 4th World Congress on Expert Systems, Mexico City, Mexico, ITESM, pp. 9-16, 1998.
[4] A. Fleischmann, "The electronic teacher: the social impact of Intelligent Tutoring Systems in education", 2000, http://www.student.informatik.tu-darmstadt.de/~andreasf/inhalte/its.html
[5] M. Barton, "Authoring shells for intelligent tutoring system", 7th World Conference on Artificial Intelligence in Education, Washington, USA, August 16-19, 1995, http://www.pitt.edu/~al/aied/barton.html
[6] J. Rickel, "Intelligent computer-aided instruction: a survey organized around system components", IEEE Transactions on Systems, Man and Cybernetics, vol. 19, no. 1, pp. 40-57, 1989.
[7] S. Stankov, Isomorphic Model of the System as the Basis of Teaching Control Principles in the Intelligent Tutoring System, Ph.D. Thesis, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, 1997 (in Croatian).
[8] M. Rosić, Establishing of Distance Education Systems within the Information Infrastructure, M.Sc. Thesis, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia, 2000 (in Croatian).
[9] J. Preece, Y. Rogers, H. Sharp, D. Benyon, S. Holland and T. Carey, Human-Computer Interaction, Addison-Wesley, Wokingham, England, 1994.
[10] J. Nielsen, "Iterative user interface design", IEEE Computer, November, pp. 32-41, 1993.
[11] J. Bennett, "Managing to meet usability requirements: establishing and meeting software development goals", in J. Bennett, D. Case, J. Sandelin and M. Smith (Eds.): Visual Display Terminals, Prentice Hall, Englewood Cliffs, NJ, pp. 161-184, 1984.
[12] B. Shackel, "Usability – context, framework, design and evaluation", in B. Shackel and S. Richardson (Eds.): Human Factors for Informatics Usability, Cambridge University Press, Cambridge, pp. 21-38, 1991.
[13] J. Whiteside, J. Bennett and K. Holtzblatt, "Usability engineering: our experience and evolution", in M. Helander (Ed.): Handbook of Human-Computer Interaction, Elsevier Science B.V. Publishers (North-Holland), pp. 791-817, 1988.
[14] M. Levi and F. Conrad, "Usability testing of World Wide Web sites: a CHI 97 Workshop", SIGCHI Bulletin, vol. 29, no. 4, October 1997, http://www.acm.org/sighchi/bulletin/1997.4/levi.html
[15] J. Borges, I. Morales and N. Rodriguez, "Guidelines for designing usable World Wide Web pages", Conference Companion of the CHI'96 Computer-Human Interaction Conference, Vancouver, Canada: ACM, pp. 277-278, 1996.
[16] B. Tognazzini, "First principles", Nielsen Norman Group, 2003, http://www.asktog.com/basics/firstPrinciples.html
[17] M. Levi and F. Conrad, "Usability testing of World Wide Web sites", Research Papers of the Bureau of Labor Statistics, 1998, http://stats.bls.gov/ore/htm_papers/st960150.htm
[18] J. Nielsen, "The difference between Web design and GUI design", Jakob Nielsen's Alertbox, May 1, 1997, http://www.useit.com/alertbox/9705a.html
[19] T. Tullis, "A method for evaluating Web page design concepts", Proceedings of the ACM CHI'98 Conference on Human Factors in Computing Systems, Los Angeles, CA, pp. 323-324, April 1998.
[20] B. Keevil, "Measuring the usability index of your Web site", Proceedings of the 16th Annual International Conference on Computer Documentation, Quebec, Canada, pp. 271-277, 1998.
[21] J. Morkes and J. Nielsen, "Applying writing guidelines to Web pages", Proceedings of the ACM CHI'98 Conference on Human Factors in Computing Systems, pp. 321-322, 1998, http://www.useit.com/papers/webwriting/rewriting.html
[22] A. Granić and V. Glavinić, "An approach to usability evaluation of an intelligent tutoring system", in N. Mastorakis and V. Kluev (Eds.): Advances in Multimedia, Video and Signal Processing Systems, Athens, Greece: WSEAS Press, pp. 77-82, 2002.
[23] H. Henke, "Evaluating Web-based instructional design", 2001, http://www.chartula.com/evalwbi.pdf
[24] Th. McManus, "Delivering instruction on the World Wide Web", 1996, http://www.svsu.edu/~mcmanus/papers/wbi.html
[25] J. Nielsen, "Why You Only Need to Test With 5 Users", Alertbox, March 2000, http://www.useit.com/alertbox/20000319.html