Using Quality Dimensions in the Evaluation of Websites

Rosemary Stockdale and Michael Borovicka
Innsbruck University School of Management, Information Systems, e-tourism
Innsbruck, Austria
[email protected]

Abstract

Tourism companies do not always understand the attributes of a quality website. An effective method of evaluating a website can contribute to the development of better-quality websites. In this paper a website evaluation instrument is designed that is flexible enough to serve a variety of e-tourism players. The instrument is developed from quality dimensions derived from an existing IS e-commerce success model. It was tested against typical tourism websites and showed that it can support travel players in the identification of website quality.

Keywords: Website, Evaluation, Quality, Model, Success Measures

1 Introduction

Concerns have been raised that there is a lack of understanding of the importance of websites among some tourism companies. Law & Leung argue that this, together with the tendency to outsource, leads to organisations having websites that 'contain a lot of information, but with a large portion being poorly organised, outdated or inaccurate' (2002, p. 25). Attitudes towards websites often remain locked in the advertising domain (Corigliano & Baggio, 2004) and tourism companies fail to realise benefits such as increased sales volume and improved reputation (Law & Leung, 2002). Gianforte (2003) argues that improving customer experiences of website use could lift sales by at least 33%. Good website design must fulfil customers' needs for information or transaction capabilities (Heldal, Sjovold, & Heldal, 2004). Effective websites require continuous assessment, careful management, frequent updates (Albert, Goes, & Gupta, 2004) and ongoing innovation (Reichheld, Markey Jr, & Hopton, 2000). An effective method of evaluating a website, such as an easy-to-use questionnaire, can support an understanding of the attributes of a good website.

The contribution of this paper is the development of a website evaluation instrument that is flexible enough to serve a variety of players and evaluation scenarios within the online travel industry. The instrument is developed from dimensions of quality from an existing success model and supported by constructs and factors that influence evaluators' perceptions. A pilot study of restaurant websites was conducted to test the evaluation questionnaire and to examine its flexibility.

2 Website Evaluation

Recent developments in evaluation theory are based on the concept of evaluation projects leading to consensus and understanding rather than judgement (Guba & Lincoln, 1989). However, mechanistic methods of evaluation are often used to achieve definitive outcomes such as a pass-or-fail or yes-or-no judgement (Irani & Love, 2001; Smithson & Hirschheim, 1998). Where an understanding of how users perceive the quality of a website is required, holistic methods are needed to reflect the subjectivity the user brings to the website. The subjectivity inherent in such evaluations should not be seen as a weakness of the evaluation but rather as a strength. Although these methods reduce the ability to find 'generalisable truths', they do allow a local solution or local meaning to be identified (House, 1980). In the context of websites this is an important step towards achieving real benefit from an evaluation: understanding of the local solution, in this case the website, is preferable to a generalisable judgement. Effective websites are usually dynamic, subject to constant update, innovation and management (Albert et al., 2004). To evaluate a website as a static object loses meaning and sets the evaluation into the category of ritualistic measurement that reinforces existing judgements (Walsham, 1993).

Website evaluation has developed in an ad hoc way using a variety of criteria and methods. A variety of studies have emerged based on existing underlying models such as the Technology Acceptance Model (Scharl, Wöber, & Bauer, 2003). Mich et al. (2003) developed a model based on Cicero's rhetoric to gain complete coverage of evaluation. This model takes the criteria of who, what, when, where and how that are familiar from the Content, Context & Process evaluation framework originally developed by Symons (1991). In contrast, Zhang & von Dran (2002) develop their arguments from Kano's model of customer expectations for feature categorisation and also consider how the nature of quality changes over time. The underlying concept of these different models arises from the consideration of what is being evaluated and for what purpose the evaluation is being carried out. This affects the way that website elements are considered in evaluations, taking in factors such as domain, the passage of time and even cultural differences (Aladwani & Palvia, 2002; Mich et al., 2003; Zhang & von Dran, 2002).

In the tourism domain, automated evaluation of website criteria has been developed by Scharl et al. (2003) to capture information about design factors such as internal structure, links, interactive features and content. The advantages of evaluating multiple websites regularly are significant when assessing these features. In contrast, an in-depth user perspective and a more complete understanding of user behaviour require an individual approach (Ivory, Sinha, & Hearst, 2001). Mich et al. (2003) contribute by calling for consideration of the stakeholders' views within the evaluation. The stakeholders vary according to the reason for the evaluation. For example, users hold a central stake when user satisfaction is under consideration, but developers may have a greater influence on evaluations of website design. In both cases, users and designers are stakeholders in the website. User satisfaction has long been a significant measure of information systems success (DeLone & McLean, 1992) and this is echoed in the many evaluations that take this perspective. An evaluation instrument that is adaptable to a variety of uses must be easy to use, parsimonious, and flexible enough to allow evaluator insights to be recorded (Barnes & Vigden, 2002; Mich et al., 2003; Smithson & Hirschheim, 1998). This research uses quality dimensions to develop an evaluation instrument that can support travel industry players in gaining a comprehensive understanding of what constitutes a quality website.

3 Developing the website evaluation instrument

A focal point for the development of a website evaluation instrument is the work of DeLone and McLean on success measures in IS (1992). Their work was based on a literature review of over 100 papers from the 1980s. The original model was proposed as a comprehensive framework that integrated IS research findings and provided some continuity within the discipline. The 300 papers subsequently published on refining and developing the various constructs of the model support its authority. An updated version of the DeLone and McLean model (the D&M Success Model) tests the original constructs against changes brought about by e-commerce (2003). Incorporating a review of e-commerce literature, the model was adjusted to include the constructs of e-commerce success measures, as seen in Figure 1. The quality constructs are well founded as critical success factors in website evaluation together with system use (Law & Leung, 2002; Liu & Arnett, 2000). DeLone and McLean argue that the three quality dimensions affect use and user satisfaction (2003). The authors of this paper argue that consideration of quality must precede any measurement of use.

[Figure 1: Updated DeLone & McLean IS Success Model (2003). Information quality, system quality and service quality influence intention to use/use and user satisfaction, which in turn lead to net benefits.]

This evaluation questionnaire supports travel players in assessing their website quality. Measuring use or user satisfaction is a later stage in evaluation, as a lack of quality constructs will hinder the use of the website. Use is a key success measure of the entire system (Liu & Arnett, 2000) but 'too frequently, simple usage variables are used to measure this complex construct' (DeLone & McLean, 2003, p. 21). Counting the number of hits on a site, measuring the length of stay or monitoring clickstream data does not contribute to assessing the success of a site (Ivory et al., 2001). This paper does not address the arguments for use or user satisfaction, or the attendant

concerns as to the process or causal nature of use as a success variable. Evaluation of a website using the three quality dimensions in the proposed instrument will contribute to understanding where the website can be improved, as a prerequisite for any assessment of use to be made. This in turn affects assessment of the net benefits of the website, which constitute the final success variable (DeLone & McLean, 2003). In this way, a holistic evaluation of a company's e-commerce success can be developed. The metrics used by DeLone and McLean (2004) are not intended to be exhaustive, but rather illustrative. Many of the metrics used are grounded in the quantitative methods of objective evaluations that result in a judgement of success or failure. The initial structure for the instrument in this paper is developed using the main constructs of the model within the three quality dimensions. It takes advantage of the flexibility of the D&M Success Model to avoid definitive metrics and to develop more qualitative questions from the identified constructs. These are further supported by recent literature on website evaluation to both test and expand the existing constructs.

3.1 Quality Dimensions

Nielsen (1999) argues that quality is a pervasive set of attributes, while Aladwani & Palvia (2002) consider quality to be a complex construct whose measurement is multidimensional in nature. Quality dimensions are hard to define and are influenced by culture, participants and even time (Zhang & von Dran, 2002). DeLone & McLean's quality dimensions are informed by constructs that enable the influences on any evaluation plan to be considered. Tables 1 to 3 give details of the constructs and the sources used to support the building of the instrument.

System Quality - refers to the elements of a system that affect the end user in the way they interact with and use an e-commerce system. This is a basic dimension of any e-commerce evaluation.

Table 1: System Quality Constructs (construct: additional references)

Accessibility: (DeLone & McLean, 2003; Mich et al., 2003; Smith, 2001)
Responsiveness: (DeLone & McLean, 2003)
Usability: (Aladwani & Palvia, 2002; DeLone & McLean, 2003; Mich et al., 2003; Smith, 2001)
Functionality: (DeLone & McLean, 2003; Mich et al., 2003)
Reliability: (Aladwani & Palvia, 2002; DeLone & McLean, 2003; Limayem, Vogel, & Hillier, 2003)
Flexibility: (DeLone & McLean, 2003)
Security: (Aladwani & Palvia, 2002; Barnes & Vigden, 2002; DeLone & McLean, 2003; Limayem et al., 2003; Mich et al., 2003; Smith, 2001)
Communication: (DeLone & McLean, 2003; Smith, 2001)

Information Quality - content is considered to be the most important element of websites (Turban & Gehrke, 2000) and is seen to be directly related to website success (Liu & Arnett, 2000). To encourage repeat visits, visitors need to be provided with appropriate, complete and clear information (DeLone & McLean, 2003).

Table 2: Information Quality Constructs (construct: additional references)

Relevance: (Barnes & Vigden, 2002; DeLone & McLean, 2003; Mich et al., 2003; Smith, 2001)
Accuracy: (Aladwani & Palvia, 2002; Barnes & Vigden, 2002; DeLone & McLean, 2003; Mich et al., 2003; Smith, 2001)
Understandable: (Barnes & Vigden, 2002; DeLone & McLean, 2003; Mich et al., 2003)
Complete: (Aladwani & Palvia, 2002; Barnes & Vigden, 2002; DeLone & McLean, 2003; Smith, 2001)
Current: (Aladwani & Palvia, 2002; Barnes & Vigden, 2002; DeLone & McLean, 2003; Limayem et al., 2003; Mich et al., 2003; Smith, 2001)
Dynamic: (Albert et al., 2004; DeLone & McLean, 2003; Tierney, 2000)
Personalised: (Barnes & Vigden, 2002; DeLone & McLean, 2003; Mich et al., 2003; Smith, 2001)

Service Quality - was added to the updated D&M Success Model to acknowledge e-commerce use. The dimension allows for examination of the role of the service provider within organisations. This is particularly important in the context of e-commerce, where the end user is the customer and not the employee (DeLone & McLean, 2004). Consumers demand more service quality in the online environment (Werthner & Klein, 1999), although the service quality dimension is not well recognised in the website evaluation literature.

Table 3: Service Quality Constructs (construct: references)

Perception of service: (DeLone & McLean, 2003)
Trust building: (DeLone & McLean, 2003; Mich et al., 2003; Smith, 2001)
Empathy: (DeLone & McLean, 2003; Liu & Arnett, 2000)
After sales service: (DeLone & McLean, 2003; Liu & Arnett, 2000)
Customisation: (DeLone & McLean, 2003)

3.2 The Context of the Evaluation

Context is an important consideration within an evaluation study, and technical innovations must be set within an organisational environment for effective evaluation (Avgerou, 2001). The type of system studied and the stakeholders concerned also influence the success measures (Seddon, Staples, Patnayakuni, & Bowtell, 1999). Within the social dimensions of the tourism industry, websites are socio-technical constructions that need to be evaluated from a stakeholder perspective. Evaluations should consider the basic questions of what is being evaluated, for whom, how and for what reasons (Mich et al., 2003; Symons, 1991). The evaluation instrument has therefore been developed for use with a pre-set scenario that informs the context of the evaluation being carried out.

3.3 The Instrument

Previous experience with evaluation instruments has shown that misunderstandings occur when questions are too broad or endeavour to cover too many points in one sentence. Simplicity is therefore a prerequisite for the instrument. This also provides the instrument with sufficient flexibility to be useful in different e-tourism contexts. Factors that affect each of the identified constructs were gathered from the literature that informed the construct development. These factors include facilities offered on websites, design points and supplementary points. It is envisaged that these factors will carry different levels of importance according to the scenario presented for the evaluation. For example, in a transactional booking website the constructs related to service quality will figure more prominently than in an informational website. In accordance with the need to gain understanding from the evaluation, space was given for the evaluators to record their comments against each construct. A Likert scale was also added, as this gives an overview of how different evaluators assess the different constructs. The questionnaire is given in Table 4. This dual approach gives the instrument flexibility: it does not rely on specific metrics, yet enables outcomes to be questioned and discussed.
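As a minimal illustration of this structure (ours, not the authors'), the instrument can be thought of as a small data model: constructs grouped into quality dimensions, each carrying its factors to consider, a scenario-dependent weight, a Likert score and a free-text comment. All class and field names below are hypothetical; the paper does not state the Likert range (the reported means suggest at least a seven-point scale), and the weight field is one possible way to realise the varying importance of factors across scenarios.

```python
# Illustrative sketch only: a possible data model for the instrument.
# Construct names and factors are taken from Table 4; everything else is assumed.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Construct:
    name: str                    # e.g. "Usability"
    statement: str               # e.g. "The website is easy to use"
    factors: list[str]           # prompts the evaluator considers
    weight: float = 1.0          # assumed: adjusted per evaluation scenario
    score: Optional[int] = None  # Likert rating, e.g. 1 (poor) to 7 (excellent)
    comment: str = ""            # free-text evaluator insight

@dataclass
class Dimension:
    name: str                    # "System Quality", "Information Quality" or "Service Quality"
    constructs: list[Construct] = field(default_factory=list)

# Example entry drawn from Table 4; a transactional booking scenario might
# instead raise the weights of the service quality constructs.
usability = Construct(
    name="Usability",
    statement="The website is easy to use",
    factors=["Navigation is clear and simply laid out",
             "Links are easy to follow and embedded in appropriate places"],
)
system_quality = Dimension(name="System Quality", constructs=[usability])
```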

Table 4: Quality Instrument

SYSTEM QUALITY

Accessible - The website is easy to find.
Factors to consider: Site is easily found via search engines; URL is appropriate to firm.

Usability - The website is easy to use.
Factors to consider: Navigation is clear and simply laid out; Links are easy to follow and embedded in appropriate places; Breadcrumbs/icons/menus are clearly evident; Site offers text-only options or appropriate settings for different browsers; Download speed is acceptable; Content is well-ordered and readable; Design is aesthetically appropriate, e.g. colour combinations, text size etc.

Functionality - The website offers the relevant mechanisms to meet the purpose of the website.
Factors to consider: Is it clear what the purpose and intended audience of the website is?; Does it work well, i.e. does the site do all the things you want it to do to achieve the purpose of the visit?; Provides all the functions needed by the visitor to transact/find information, e.g. shopping basket for goods, forms for online bookings, print versions of information.

Responsiveness - The website responds well.
Factors to consider: Pages load quickly; Graphics are easily downloadable.

Reliability - The website is reliable.
Factors to consider: Website is accessible at all times; Links are working; Online forms work; Email works; Date of site creation or update is recent.

Flexibility - The website supports different types of users.
Factors to consider: Allows for extended search; Enables sophisticated users to skip stages or novice users to access help, e.g. flight search offers variable date options.

Security - The system feels secure for transaction purposes.
Factors to consider: Declarations of security policy; Third-party approval, e.g. VeriSign; Secure transaction software.

Communication - It is easy to communicate with the firm.
Factors to consider: Calls for feedback; Contact details.

INFORMATION QUALITY

Relevance of website content - The website meets information needs.
Factors to consider: Does the information refer to the subject matter sought?; Is additional information supportive of the subject matter?

Accuracy of website content - Content is judged to be accurate and valid.
Factors to consider: Does the information appear credible and authoritative?; Does the information appear to be correct, exact and without fault?

Website content is understandable - The information presented is easy to understand.
Factors to consider: Well written; Plain language used rather than jargon; Grammar and spelling are correct.

Website content is complete - The information presented covers all information needs.
Factors to consider: All questions answered; All details available to support the intended transaction, e.g. contact details are given, company details/background, list of products, prices where appropriate.

Website content is current - Content is judged to be up to date.
Factors to consider: References to current information; Date of last update given; No evidence of outdated content, e.g. last year's ski lift prices; Date is appropriate for information presented.

Website displays dynamic content - Content is varied and changing.
Factors to consider: Evidence of services such as weather reports, flight arrivals, news, webcams etc.

Personalisation of website content - The website creates a sense of individuality.
Factors to consider: Welcomes visitor with name or message to first-time visitor; Offers record of your previous searches.

SERVICE QUALITY

Perception of service quality - Website shows evidence that the firm considers service quality.
Factors to consider: Helpdesk; Hotlines; Service centres; FAQs; Printer-friendly files; Resource links; Sitemap.

Evidence of trust building - Website shows evidence that the firm is using services that will engender trust in the visitor.
Factors to consider: Privacy statements; Security attributes; Credibility that website has authoritative sources of information; Regard for copyright/IP evident; Level and type of any advertising is acceptable.

Website projects a feeling of empathy - Visitors identify with the firm.
Factors to consider: Evidence of brand building; Use of recognisable logo; Use of standard firm colour scheme; Use of language, e.g. we/you; The site is enjoyable to use.

Post-visit services - The firm offers follow-up services.
Factors to consider: Email is requested from visitor on purchase; Order tracking, order status etc.; Promise of confirmatory email.

Customisation - The site caters for special requirements.
Factors to consider: Does the site offer opportunities to specify any special requirements? e.g. south-facing room in a hotel, vegetarian meals on aircraft etc.

4 The Pilot Study

A pilot study was developed to test the evaluation instrument (see Table 4) against one tourism industry domain. Using empirical evidence, it was possible to assess the value of the evaluation instrument in identifying the quality of the selected websites. A scenario was set for the evaluators to ensure that they were clear as to what they were to evaluate (Limayem et al., 2003; Mich et al., 2003). The context for the pilot study was set in Austria, where fifteen restaurants in a central tourism area were selected. The sites were individually evaluated by four people using the evaluation instrument. Previous research has shown that four people is an optimal number in evaluation situations where usability is not the sole criterion (Nielsen & Landauer, 1993).

The scenario: the prospective user wished to find a restaurant in the area. The task was to find out as much as possible, with special reference to the location, owner, ambience, food and prices, and if possible to make an online reservation for dinner.

5 Pilot Study Outcomes

Results from the questionnaire were very even across the four reviewers, with agreement on the majority of points. Where a disagreement of two or more points on the Likert scale was identified, the evaluator comments were used to elicit the reason for the differences. Table 5 gives the mean results from the Likert scale.
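As an illustration of the aggregation just described (our sketch, not the authors' procedure), the snippet below computes the mean and variance of each construct's scores across the four evaluators and flags any construct where scores diverge by two or more Likert points, so that the accompanying comments can be revisited. Population variance is used here; the paper does not state which variance was reported. The scores shown are hypothetical.

```python
# Minimal sketch of aggregating four evaluators' Likert scores per construct.
from statistics import mean, pvariance

def summarise(scores_by_construct: dict[str, list[int]], gap: int = 2) -> None:
    """scores_by_construct maps a construct name to its four evaluator scores."""
    for construct, scores in scores_by_construct.items():
        # Flag constructs where any two evaluators differ by `gap` or more points.
        flagged = (max(scores) - min(scores)) >= gap
        note = "  <- review comments" if flagged else ""
        print(f"{construct:15s} mean={mean(scores):.1f} "
              f"var={pvariance(scores):.1f}{note}")

# Hypothetical scores for illustration; Table 5 below gives the actual means.
summarise({"Accessible": [6, 7, 6, 5], "Dynamic": [1, 4, 3, 3]})
```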

Table 5: Evaluation Scores. Each entry gives the mean Likert score across the four evaluators, with the variance in parentheses.

System quality (overall average 4.3, variance 1.0):
  Accessible: 6.0 (1.1)
  Usability: 5.3 (1.6)
  Function: 5.1 (1.2)
  Responsiveness: 4.7 (1.3)
  Reliable: 5.4 (1.2)
  Flexible: 1.4 (0.6)
  Secure: 1.0 (0.0)
  Communication: 5.5 (0.8)

Information quality (overall average 4.6, variance 1.3):
  Relevance: 5.7 (1.1)
  Accuracy: 5.9 (0.9)
  Understandable: 5.9 (0.7)
  Complete: 5.2 (1.2)
  Current: 5.4 (1.5)
  Dynamic: 2.7 (2.9)
  Personalisation: 1.3 (1.0)

Service quality (overall average 2.5, variance 1.3):
  Perception: 2.1 (1.6)
  Trust: 2.6 (1.8)
  Empathy: 4.8 (1.6)
  Post visit: 1.6 (0.7)
  Customisation: 1.3 (0.9)
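As a consistency check (our arithmetic, not the paper's), each dimension's overall average is the mean of its construct averages. For the system quality dimension:

\[
\bar{x}_{\text{System}} = \frac{6.0 + 5.3 + 5.1 + 4.7 + 5.4 + 1.4 + 1.0 + 5.5}{8} = \frac{34.4}{8} = 4.3
\]

The information and service quality dimensions agree in the same way (32.1/7 ≈ 4.6 and 12.4/5 ≈ 2.5).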

Under the dimension of system quality the highest scoring construct was that of accessibility. The restaurants were easily found through search engines and via their URLs, which all related to the restaurant name. As expected, flexibility and security were not rated in any of the restaurants due to the lack of online transactions and the purely informational nature of the sites. The next lowest scores in the system quality dimension related to usability. Qualitative input from the reviewers showed that several aspects of design failed to meet their requirements as users, thereby supporting Law & Leung's contention that website design is often inadequate (2002).

Information quality was seen as a key dimension in the study. Evaluators were seeking information on which to base a decision as to whether to dine in the restaurant. Apart from two constructs, only two restaurants failed to score well on information quality. One, although advertised as a restaurant, devoted its website information predominantly to its hotel rooms. The second presented information in a generic form. For example, the website gave details of different types of fish rather than how fish was served in the restaurant, and discussed Italian wine rather than its own wine list. The constructs of dynamic content and personalisation presented a range of responses from the evaluators. Evidence of personalisation was found on only one website: a pizza restaurant that offered discounts for loyal customers through a log-in facility. Dynamic content was found on four of the fifteen sites examined. The content ranged from a weekly events list that was regularly updated to competitions to win a dessert and photographs of local events. Despite this, evaluation scores for dynamic content were low. Evaluators commented that where dynamic content was present, it tended to be simplistic and unexciting.

The third quality dimension, that of service, is the least reported in the literature. The need for value-add through exceptional service is recognised in the e-commerce literature (Werthner & Klein, 1999), but has yet to be fully appreciated in website design. The constructs of service quality were very poorly rated in the evaluation, with many restaurants failing to provide the facilities that indicate recognition of the construct. There was a general lack of help desks, site maps or printer-friendly files. The construct of empathy was better served, and attention had been paid to developing brand recognition through colour schemes, the use of appropriate but informal language, and photographs. The level of design complexity was at times high, and devotion to design flair often overcame considerations that would indicate a customer-centric approach to the sites. This supported the view that a firm's desire for artistry often supersedes the users' needs (Heldal et al., 2004). Since the designer viewpoint is rarely the same as the users', the constructs of perception of service, empathy and trust were insufficiently supported.

During the evaluation it was found that many sites were not maintained by their owners. In one case, new and old versions of a website were simultaneously available. In several cases complex technology was used for functions that could be provided more simply, highlighting the tension between design and responsiveness. Again, design took precedence over the users' needs, in this instance responsiveness. On one site the content of a page was displayed as a single image; although the website was accessible, it was poorly rated by evaluators, who considered it an example of design hindering usability. The issue of design led the evaluators to re-examine the websites to establish who was responsible for the design. Six of the websites appeared to have been created by the owners and the remaining nine carried links to website consultancies. No two restaurants had used the same company, and all were locally based with the exception of one company that originated in another part of Austria. No correlation was found between design origin and overall ratings of the websites, and evaluators could not distinguish quality differences between commercially and privately designed sites.

6 Re-evaluating the evaluation instrument

The restaurant study provided empirical evidence with which to validate the evaluation instrument. Comparison of the websites was not the purpose of the pilot, except to show that the questionnaire worked across the three quality dimensions in a number of websites. The evaluation highlighted the weaknesses and strengths of each website and enabled an overall assessment to be made. After the evaluation, both the qualitative comments and the quantitative measures were collected and discussed. One outcome was to improve the evaluation instrument by redefining the constructs and factors that were identified as incomplete or confusing. The questionnaire was adjusted and the changes were discussed with the evaluators to ensure that clarity had been achieved.

Three constructs were changed. First, accessibility was confused with responsiveness by the evaluators, who sought more clarity. Visibility was therefore added to the System Quality dimension to address the ease with which websites are located. Accessibility and responsiveness were amalgamated under the former term, which was preferred by the evaluators and taken to refer to connection speeds, access to multimedia content and download speeds. The second construct was communication. Evaluators argued that this referred more to service quality than system quality, and its associated factors were expanded to include timeliness of replies, considered an essential element of service quality. Finally, usability was considered to contain too many factors. Although the construct was retained, its factors were divided into navigational and design factors and given separate scores and comment boxes.

Further comments from the evaluators highlighted the interconnection of the constructs and the factors that identify them. The factors to consider within the evaluation questionnaire were also adjusted to provide more clarity for the evaluators. It is anticipated that these factors would be adjusted in accordance with the reason for, and context of, a planned evaluation. In the pilot study the factors were not adjusted to the context but considered as a whole; they were therefore not always appropriate to the scenario. This highlighted the need for planning within the context of an evaluation.

7 Conclusions

The empirical evidence gathered using the evaluation instrument identified the constructs of quality that were evident in the websites. It was possible to pinpoint areas of concern where websites did not meet the quality standards identified. System quality was found to be generally adequate, while the highest rated constructs were from the dimension of information quality. However, it was evident that the important construct of dynamic content (Albert et al., 2004) was not well understood by designers. As expected, service quality was found to be the weakest area of development. Overall the pilot study confirmed Law & Leung's contention that many tourism websites are of inadequate quality (2002). The flexibility of the instrument was shown in the ability of the evaluators to adjust the factors to take account of the scenario they were given. This flexibility will enable the evaluation instrument to be used in a variety of scenarios and enable tourism players to assess the quality of their websites from multiple perspectives. Further empirical validation is required to fully test this argument, and research will be extended to apply the instrument in further tourism domains.

8 References

Aladwani, A. M., & Palvia, P. C. (2002). Developing and validating an instrument for measuring user-perceived web quality. Information and Management, 39(6), 467-476.

Albert, T. C., Goes, P. B., & Gupta, A. (2004). GIST: A model for design and management of content and interactivity of customer-centric web sites. MIS Quarterly, 28(2), 161-182.

Avgerou, C. (2001). The significance of context in information systems and organizational change. Information Systems Journal, 11, 43-63.

Barnes, S., & Vigden, R. (2002). An integrative approach to the assessment of e-commerce quality. Journal of Electronic Commerce Research, 3(3), 114-127.

Corigliano, M. A., & Baggio, R. (2004). Tourism, technology, information and the relationship with customers. Paper presented at the International Conference on Leisure Futures, Bolzano, 10-12 November.

DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95.

DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9-30.

DeLone, W. H., & McLean, E. R. (2004). Measuring e-commerce success: Applying the DeLone and McLean information system success model. International Journal of Electronic Commerce, 9(1), 31-47.

Gianforte, G. (2003). The world at our fingertips - How online travel companies can turn clicks into bookings. Journal of Vacation Marketing, 10(1), 79-86.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. London: Sage.

Heldal, F., Sjovold, E., & Heldal, A. F. (2004). Success on the Internet - optimizing relationships through the corporate site. International Journal of Information Management, 24, 115-129.

House, E. R. (1980). Evaluating with validity. London: Sage.

Irani, Z., & Love, P. E. D. (2001). Information systems evaluation: past, present and future. European Journal of Information Systems, 10, 183-188.

Ivory, M. Y., Sinha, R. R., & Hearst, M. A. (2001). Empirically validated web page design metrics. Paper presented at ACM SIGCHI '01, Seattle, WA, USA.

Law, R., & Leung, K. (2002). Online airfare reservation services: A study of Asian-based and North American-based travel web sites. Information Technology & Tourism, 5(1), 25-33.

Limayem, A., Vogel, D., & Hillier, M. (2003). Sophistication of online tourism web sites in Hong Kong: An exploratory study. Paper presented at the Americas Conference on Information Systems, Tampa, USA.

Liu, C., & Arnett, K. (2000). Exploring the factors associated with web site success in the context of electronic commerce. Information and Management, 38, 23-33.

Mich, L., Franch, M., & Gaio, L. (2003). Evaluating and designing the quality of Web sites: the 2QCV3Q metamodel. IEEE Multimedia, 10(1), 34-43.

Nielsen, J. (1999). User interface directions for the Web. Communications of the ACM, 42(1), 65-72.

Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Paper presented at the ACM/IFIP INTERCHI '93 Conference, Amsterdam, The Netherlands, 24-29 April.

Reichheld, F., Markey Jr, R., & Hopton, C. (2000). E-customer loyalty - applying the traditional rules of business for online success. European Business Journal, 12(4), 173-179.

Scharl, A., Wöber, K., & Bauer, C. (2003). An integrated approach to measure web site effectiveness in the European hotel industry. Information Technology & Tourism, 6(4), 257-271.

Seddon, P. B., Staples, S., Patnayakuni, R., & Bowtell, M. (1999). Dimensions of information systems success. Communications of the Association for Information Systems, 2(20).

Smith, A. G. (2001). Applying evaluation criteria to New Zealand government websites. International Journal of Information Management, 21, 137-149.

Smithson, S., & Hirschheim, R. (1998). Analysing information systems evaluation: another look at an old problem. European Journal of Information Systems, 7, 158-174.

Symons, V. J. (1991). A review of information systems evaluation: content, context and process. European Journal of Information Systems, 1(3), 205-212.

Tierney, P. (2000). Internet-based evaluation of tourism web site effectiveness: Methodological issues and survey results. Journal of Travel Research, 39(2), 212-219.

Turban, E., & Gehrke, D. (2000). Determinants of e-commerce website. Human Systems Management, 19, 111-120.

Walsham, G. (1993). Interpreting information systems in organizations. Chichester: John Wiley.

Werthner, H., & Klein, S. (1999). Information technology and tourism - A challenging relationship. Wien: Springer Verlag.

Zhang, P., & von Dran, G. (2002). User expectations and rankings of quality factors in different web site domains. International Journal of Electronic Commerce, 6(2), 9-33.