Library & Information Science Research 34 (2012) 184–196


Modeling Web-based library service quality

K. Kiran ⁎, S. Diljit

Department of Information Science, Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur, Malaysia

Available online 6 May 2012

Abstract

Studies of e-service quality have consistently used adaptations of service-quality measurement tools that have been adopted and extended from traditional service-quality frameworks. However, a fresh insight into the investigation of key determinants of Web-based library service quality, with an emphasis on how library customers perceive service quality, has much to offer. Key determinants were identified, and contributed to the development and empirical testing of a proposed conceptual model of service quality that encompasses environment, delivery, and outcome quality. Unlike the disconfirmation approach, a performance-only measure was used. Participants included postgraduates and academic staff from four research-intensive universities in Malaysia. Exploratory factor analysis and confirmatory factor analysis using structural equation modeling were carried out in order to develop and validate a measurement model for Web-based service quality, which included three second-order dimensions and eight first-order dimensions. Insights into the conceptualization of Web-based library-service quality as a multidimensional hierarchical construct are provided. The emergence of determinants specific to Web services supports the notion that the measurement of electronic-service quality differs from that of traditional services, though they may share some common factors. © 2012 Elsevier Inc. All rights reserved.

1. Introduction

In academic libraries, the quality of delivery of services to library users, mainly students and researchers, is a key factor affecting the performance of libraries. Current indicators of higher education quality evaluations include the assessment of the quality of library collections and services. The evolution of library services, along with technological innovation, has, however, impacted the evaluation and measurement of these services. Today's academic library provides access to digital and electronic collections and other services to complement the information search process. These services, including digital reference services, online document delivery, interlibrary loan, online help, and information skills tutorials, among others, are Web-based services that have been developed to improve the library's professional image and social status (Hong & Bassham, 2007). The assessment of how well a library succeeds depends on the user as judge of quality (Nitecki, 1996), and Hernon's (2001) reference to library users as customers indicates an acceptance of libraries as service organizations with business principles. Not only have libraries adopted the measurement tools of service quality from the business and marketing fields, but they have also become highly dependent on the conceptualization of service quality and electronic-service quality, as denoted by the increasing research contribution to the phenomenon of service quality.

The library literature reveals a heavy reliance on library-service quality assessment using SERVQUAL and adaptations of this tool. Even the most widely used library-service quality measurement tool, LibQUAL+™, was originally developed using the SERVQUAL methodology. Currently, adaptations of SERVQUAL are being used to evaluate electronic or digital library services (Hernon & Calvert, 2005; Yu, Hong, Gu, & Wang, 2008). Libraries function differently from business entities, and the adaptation of traditional services to electronic services goes beyond the application of advanced technology to deliver services. Web-based services provide users with a different experience, and so should be assessed using criteria specific to the characteristics of Web services. As a consequence of the rapid development of services available through the Internet, preserving the library's relevance is increasingly important because students and researchers prefer to use non-library Web-based service providers (Griffiths & Brophy, 2005; Ross & Sennyey, 2008). Users' changing needs and experiences with other service providers on the Web may gradually shift loyal users from the library to the public Internet. Building and retaining the loyalty of library customers in the Web environment poses new challenges for libraries. Thus, it is important to understand and conceptualize library-service quality in the Web-based service environment, and how it can be assessed.

2. Problem statement

⁎ Corresponding author. E-mail address: [email protected] (K. Kiran).
0740-8188/$ – see front matter © 2012 Elsevier Inc. All rights reserved. doi:10.1016/j.lisr.2012.02.005

There are various service quality assessment tools available, but the mere transfer of traditional services to the self-service electronic environment does not necessarily mean that traditional service measures will adequately capture the quality of electronic service (Fassnacht & Koese, 2006). This is especially true since online services have unique characteristics that can affect the perception of service quality (Collier & Bienstock, 2006). Most research studies adopt service quality tools from the business and marketing fields, specifically SERVQUAL, SERVPERF, and e-SERVQUAL, which may not be directly applicable to a non-profit library service environment in higher education (Quinn, 1997). In library and information services, the most commonly used measurement tool for service quality is LibQUAL+™, developed by the Association of Research Libraries (ARL) in collaboration with several faculty members at Texas A&M University. LibQUAL+™ was developed along the same conceptual and methodological framework as SERVQUAL. Cook (2001) found the SERVQUAL dimensions to be inadequate, and included new dimensions in LibQUAL+™ specific to library services. Another effort, specifically in electronic services, was by Hernon and Calvert (2005) with the development of e-SERVQUAL, also based on relevant dimensions from SERVQUAL and E-S-QUAL. Hernon and Calvert (2005) was the first published study investigating the application of service-quality models and related tools from marketing research, specifically SERVQUAL, to measure library e-service quality. That study did not report a conclusive, empirically tested e-service quality measurement instrument; however, it was suggested that researchers should examine the various dimensions reported in the study and seek a reconceptualization of the e-service construct (Calvert, 2008). In addition, there has been much criticism in the literature of the theoretical and operational issues in the use of disconfirmation theory and its corresponding measurement scale, SERVQUAL, and its variants.
Parasuraman, Zeithaml, and Berry (1988) adopted disconfirmation theory to posit that service quality is a measure of how well the service level delivered matches customers' expectations. Some major objections relate to the predictive power of the instrument, the validity of the five-dimension structure, and the length of the questionnaire (Babakus & Boller, 1992; Badri, Mohamed, & Abdelwahab, 2005; Cronin & Taylor, 1992; Dabholkar, Shephard, & Thorpe, 2000; Van Dyke, Prybutok, & Kappelman, 1999; Wilkins, Merrilees, & Herington, 2007). The importance of assessing Web-based service quality is especially significant because of the competition from information-service providers on the Web. Despite continual efforts by academic libraries to adopt new technologies in providing services, students and researchers appear to prefer other non-library Internet services (Griffiths & Brophy, 2005; Ross & Sennyey, 2008). According to Imrie, Cadogan, and McNaughton (2002), another concern is that perceptions and experiences are quite likely rooted in a country's culture: how well do theories or tools such as SERVQUAL, LibQUAL+™, and E-S-QUAL, developed in the United States and the United Kingdom, work in other cultures, such as those of Asia, where different cultural values could influence customer perceptions and experiences of service? Currently, the lack of available tools to evaluate Web-based library-service quality (Hernon & Calvert, 2005) makes it difficult to assess the extent to which library services meet user information needs and requirements. Little research has examined the conceptualization of service quality in this nonprofit environment, or explored whether an integrated perspective of service quality can lead to increased use of library Web-based services.
Study objectives were focused on:

• identifying key attributes underlying the dimensions of perceived Web-based library-service quality;
• identifying the relative importance of the dimensions of perceived Web-based library-service quality on overall service quality; and
• developing a model of Web-based library-service quality.


3. Literature review

3.1. Web-based service quality

A common term used to differentiate between traditional face-to-face services and newer ones is electronic services. Rust and Lemon (2001) referred to this broadly as services provided over the Internet, whereas Fassnacht and Koese (2006) defined it as services delivered via information and communication technology where customers interact solely with an appropriate user interface in order to retrieve desired benefits. In the library and information science (LIS) literature, similar definitions are evident. Examples include network-based services (Bertot, 2003), services through the Internet (Henderson, 2005; Hernon & Calvert, 2005), Web-based services (Li, 2006), and technology-mediated interaction (Shachaf & Oltmann, 2007). The conceptualization of Web-based service quality hinges on an understanding of the concept of service quality. The inclusion of the terms "electronic" or "Web-based" is an indication of the service being delivered and consumed in a networked environment. A universal definition of service quality has not yet been achieved (Hu, Brown, Thong, Chan, & Kar, 2009), and it might never be. This is mainly because the definition of service quality depends on the context of the service being provided—marketing, operations, industrial, education, health, and so forth. Since service itself is a complex phenomenon, efforts to define service quality and its dimensions have been subject to academic debate. One of the most cited and applied concepts of service quality comes from Parasuraman, Zeithaml, and Berry (1985), who stated simply that service quality, as perceived by consumers of a specific service firm, results from comparing the firm's performance with the customer's general expectations of how the firm should perform. This definition somewhat conflates the concepts of service quality and satisfaction.
Earlier, Lancaster (1993) had equated satisfaction with the difference between service expectations and perceived performance. Furthermore, Parasuraman et al. (1985) noted that the lack of a definition of the "expectations" construct has caused problems with the operationalization of service quality as the difference between expectation and performance scores. This led them to develop the SERVQUAL (Parasuraman et al., 1988) tool to measure service quality, and subsequently E-S-QUAL (Parasuraman, Zeithaml, & Malhotra, 2005) to measure e-service quality. After all these years of examining SERVQUAL, however, even the developers of the instrument have agreed that there is no clear consensus on the number of dimensions measuring service quality or the interrelationships among these dimensions (Chowdhary & Prakash, 2007).
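The contrast between the disconfirmation (gap-score) operationalization and a performance-only score can be made concrete with a toy calculation; the three item ratings below are invented for illustration and are not drawn from the study:

```python
import numpy as np

# Hypothetical 7-point ratings for three service items from one respondent.
expectations = np.array([6, 7, 5])   # E: expected service level
perceptions = np.array([5, 6, 6])    # P: perceived (delivered) service level

# Disconfirmation (SERVQUAL-style) gap score: quality is P - E per item.
gap_scores = perceptions - expectations
print(gap_scores.tolist())  # [-1, -1, 1]

# Performance-only (SERVPERF-style) measure: quality is P alone.
performance_only = perceptions
print(float(performance_only.mean()))
```

Note how the gap score depends on how "expectation" is interpreted by each respondent, which is precisely the measurement-validity concern raised against the disconfirmation approach.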

3.2. Web-based library service quality

Since the focus was on services provided by academic libraries via the library's Web site, "Web-based services" is the preferred terminology. Since Web-based library services are the reference object of the quality construct, it is important to first provide a clear understanding of them. Library services are described as services that facilitate the use of materials and information made available at a library, and which normally involve interaction between the user and the librarian (Edwards & Browne, 1995). Historically, library services typically meant reference and information-desk services, reader education programs, interlibrary loan, and bibliographic search services. Over the last two decades, however, technology has been used to introduce many new services, either by delivering existing services via electronic media, or by developing and implementing entirely new services for the search, delivery, and use of information (Poll, 2005). Some examples of these modern library services include: access to electronic or digital collections such as online databases, electronic journals, e-books, and digitized collections; and other services including Web portals, personalized services, online library instruction, online reference and helpdesk services, online document delivery, and electronic publishing. A common term used to differentiate these services from traditional library services is electronic services. Most studies in digital library research also use the term electronic services to denote digital library services (Bertot, 2004; Gonçalves, Fox, Watson, & Moreira, 2007). Typically, institutional libraries deliver these services through a Web site accessible on the Internet—thus the description "Web-based services." In this study, the term Web-based library services is used to refer to services accessible via an academic library's Web site, so as to differentiate them from purely digital library services that may be delivered by means of a digital library. Because of the wide use of SERVQUAL in library service assessment, Green (2006) tested the validity and reliability of SERVQUAL in the public library context and concluded that the SERVQUAL model did not fit the data. Furthermore, he cautioned against the use of SERVQUAL in a library service setting without empirical testing. Moreover, adopting an off-the-shelf measurement tool runs the risk of yielding inaccurate data, since each service industry might have its own unique dimensions (Carman, 1990). Thus, developing a measurement instrument specific to the service is pertinent to an accurate representation of the service quality concept. Various service industries are adopting and using information technologies either to provide information to their customers through a Web site, or to innovatively develop new services enabling customers to interact and perform complex transactions with the online systems. Parallel to this, research is focusing on Web site quality.
This includes research on WebQUAL (Loiacono, Watson, & Goodhue, 2002; Yang, Cai, Zhou, & Zhou, 2005); online retailing, including SiteQUAL (Yoo & Donthu, 2001) and eTailQ (Wolfinbarger & Gilly, 2003); E-S-QUAL (Parasuraman et al., 2005; Zeithaml, Bitner, & Gremler, 2006); electronic banking (Jun & Cai, 2001; Waite, 2006); travel agencies (Ho, 2007; Yen, 2005); and eTax (Hu et al., 2009). In LIS research, the focus is on library Web site quality (Chao, 2002), digital library quality (Bertot, 2004; Gonçalves et al., 2007; Kyrillidou & Giersch, 2005), and library e-service quality (Hernon & Calvert, 2005; Li, 2006). There has been much criticism in the literature of the theoretical and operational issues in the use of disconfirmation theory and its corresponding measurement scales, SERVQUAL and its variants. Major objections relate to the predictive power of the instrument, the validity of the five-dimension structure, and the length of the questionnaire (Babakus & Boller, 1992; Badri et al., 2005; Cronin & Taylor, 1992; Dabholkar et al., 2000; Van Dyke et al., 1999; Wilkins et al., 2007). To begin with, there is no definitive definition of "expectation"; it is open to multiple interpretations, which can result in measurement validity problems (Buttle, 1996; Cronin & Taylor, 1992). Iacobucci et al. (cited in Dabholkar et al., 2000) warned that expectations might not even exist, or might not be formed clearly enough to serve as a standard for evaluating a service experience, because that evaluation may be formed simultaneously with service consumption. Parasuraman, Berry, and Zeithaml (1991) define the discrepancy between customers' expectations for excellence and their perceptions of the actual service delivered as a judgment of service performance.
Thus service quality is measured as the perception score minus the expectation score, more often referred to as the gap score. Green (2006) warned that it is imperative that each of the perception and expectation scores be subjected to factor analysis to determine whether the same factors emerge in the analysis, and whether the measures are unidimensional. Failure to do this before subtracting the expectation scores from the perception scores may explain the failure to replicate the original five-factor structure of SERVQUAL. Some researchers have subsequently suggested that a performance-only measure (or direct-effect model) is superior to the gap score (Cronin & Taylor, 1992; Page & Spreng, 2002; Roszkowski, Baky, & Jones, 2005; Wilkins et al., 2007) because it is more reliable and explains more variance than the disconfirmation model (Babakus & Boller, 1992; Cronin & Taylor, 1992, 1994; Dabholkar et al., 2000; Landrum & Prybutok, 2004; Parasuraman, Zeithaml, & Berry, 1994). Page and Spreng (2002) have further argued that performance is a much stronger indicator of service quality than expectations. There is a need for more research examining what constitutes Web-based service quality and its constituent dimensions, to contribute to the empirical measurement of library service quality.

Opposing the disconfirmation approach is the performance-only measure of service quality advocated by Cronin and Taylor (1992). They proposed that service quality is a form of customer attitude, and concluded that perception-only scores are a sound measure of service quality. Brady, Cronin, and Brand (2002) replicated Cronin and Taylor's study, and their results supported the argument for the superiority of the performance-only approach to the measurement of service quality. Numerous studies have since successfully adopted the performance-only measure (Gounaris, 2005; Landrum & Prybutok, 2004; Parasuraman et al., 2005; Wilkins et al., 2007) in various service settings. Hence, the electronic service quality concept may also be an attitudinal response towards electronic services, focusing on the interaction between the customers and the Web site offering these services (Collier & Bienstock, 2006), as is evident in the development of WebQual (Loiacono et al., 2000) and SiteQUAL (Yoo & Donthu, 2001).

4. Method

A mixed-method approach for scale development was employed, similar to what was used in the development of major service quality measurement tools such as SERVQUAL (Parasuraman et al., 1988), LibQUAL+™ (Cook & Heath, 2001), and E-S-QUAL (Parasuraman et al., 2005).
The scale development process was based on the sequence suggested by Churchill (1979) and extended by Anderson and Gerbing (1988) and DeVellis (2003). The study was conducted sequentially in two phases, as shown in Fig. 1. In the first phase, the priority was to explore the service quality phenomenon in relation to Web-based library services. Beginning with qualitative data gathering by means of focus group interviews, the research moved on to quantitative data gathering and factor analysis to determine the factor structure among the proposed dimensions of service quality. The second phase focused on quantitative methods involving confirmatory factor analysis using structural equation modeling to support and refine the findings of the first phase, and to examine the theorized service quality conceptual model through model testing. This exploratory mixed-method design is popularly used to explore a phenomenon, identify themes, design an instrument, and subsequently test it. Researchers use this design when existing instruments, variables, and measures may not be known or available for the population under study (Creswell, 2008). In response to the repeated criticism of the performance/expectation approach used by the SERVQUAL developers, this study used the performance-only approach.

Phase 1: Model development
Step 1: Articulate the meaning and domain of Web-based library e-service quality based on insights from the extant literature.
Step 2: Conduct focus-group discussions to conceptualize and revise the key domains of Web-based library e-service quality.
Step 3: Formulate a preliminary scale based on Steps 1 and 2 and present it to LIS experts for comments. Revise the scale if necessary.
Step 4: Administer the revised scale.
Step 5: Develop a scale through an iterative process.
Phase 2: Model verification
Step 6: Administer the final scale—Survey II.
Step 7: Scale purification and model testing.

Fig. 1. Research sequence.

4.1. Participants

Participants were postgraduate students and academic staff from four research-intensive universities in Malaysia. Patton (cited in Creswell, 2008) suggested that the standard used in choosing participants for a study is to ensure that they are "information rich." Since the intent was to explore the perspectives of library users on library service quality using a qualitative data collection method, users who had longer experience with using the service were deemed able to contribute rich information. According to Hill (1995), the maturation of university students influences their perception of service quality over time, implying that a more mature group of users may have well-formed perceptions. Thus, the population under study was users who are actively involved in research activities, specifically academic staff and postgraduate students. Participants for focus group discussions were invited from four research universities using snowball sampling. Ten focus groups were held, with a total of 71 participants.

5. Model development

5.1. Conceptual understanding of Web-based library service quality

When attempting to develop a scale for measuring a construct, it is important to begin by articulating the meaning and domain of the construct under examination (Churchill, 1979). A review of the literature on service quality, especially electronic-service quality, suggested that service-quality assessment is a judgment of the extent to which the Web site facilitates the intended transaction, whether it is an online purchase or an information service. Since there was little literature on Web-based library-service quality, focus groups were used to collect grounded data from library users in order to identify desirable characteristics of Web-based library services. These interviews were conducted between January and March 2008. All interviews were audio-recorded, and initially coded in vivo—that is, using the participants' own words. The frequency of the codes in the discussion transcripts was used as an indication of the level of interest in, and relevance of, each item. The unit of analysis for the coding was a participant's sentence. A list was generated for all codes, and emerging themes were identified and renamed, based on the researcher's knowledge about the phenomenon. Creswell (2008) recommends looking for existing instruments that could be modified to fit the themes and statements found in the qualitative exploratory phase of the study. Some existing scales (SERVQUAL, E-S-QUAL, Library E-SERVQUAL, and LibQUAL+™) were referred to when wording the items, but the themes were based solely on data from participants. The preliminary scale had 14 themes representing all facets of Web-based library-service quality. As recommended by Moore and Benbasat (1991), the scale was first reviewed by experts for validity. At this point, all four library directors or their representatives examined the scale to see if the description of the themes was fair and representative. Also involved were two LIS professors from the International Islamic University Malaysia and Universiti Teknologi Mara Malaysia, with research interests in quality issues in library management. Generally, the experts were in agreement with most of the themes identified and their relevance to the construct. This resulted in an instrument of 95 items covering 14 constructs. Details of this qualitative study were given in Kaur and Diljit (2008). Each of the key variables is operationalized as depicted in Table 1. Evidence from the focus groups suggested that Web-based service quality could be viewed as a multidimensional construct. The 14 themes that emerged were:

• site design/links;
• site accessibility/technical/security;
• organization of information;
• personalization;
• flexibility;
• content quality;
• communication;
• customer relationship;
• customer service;
• customer feedback;
• reliability;
• self-reliance;
• functional benefit; and
• emotional benefits.

Initially, the site design/links, site accessibility/technical/security, organization of information, and content quality themes suggested that the input to the service before the service interaction happens influences the perception of quality. Furthermore, themes such as personalization, flexibility, communication, customer relationship, customer service, and customer feedback gave an indication of the interaction process during service delivery. The reliability, self-reliance, functional benefit, and emotional benefits themes pointed to the outcome of the service. At this point, the multidimensional service quality model proposed by Fassnacht and Koese (2006) was referred to. According to their framework, service quality is a hierarchical multidimensional construct, with three second-order dimensions: 1) environment quality, relating to the appearance of the user interface; 2) delivery quality, pertaining to customer–Web site interaction during service usage; and 3) outcome quality, viewed as what the customer is left with after the service delivery. Though this hierarchy was evident in the qualitative data, no model specifications were made at this stage.

5.2. Preliminary scale

The set of 95 items derived from the focus groups formed the initial scale. From that, a questionnaire was designed that respondents used to indicate which items were important to them in assessing the quality of Web-based library services. It began with a brief description of the purpose of the study, and included a confidentiality note. Following that was a definition of Web-based library services, and instructions to respondents about how to identify services that they had used. In sequencing the items, due consideration was given to keeping respondents focused on a particular area of service. The variables were measured using 7-point Likert-type statements anchored by one (strongly disagree) and seven (strongly agree). The aim was to maintain consistency in scale measurement (Caruana, Ewing, & Ramaseshan, 2000), and to allow for correlations and factor analysis.

Table 1
Operationalization of key constructs.

Key construct | Operational definition | Items | Variables
Site design/links | The ability to navigate the Web site with a sense of ease | 6 | Ease of use (2); layout; navigation; visually appealing; working links
Site accessibility/technical/security | The availability of tools to access the Web-based services in a convenient way | 11 | Convenience; login; remote access; server speed; equipment (2); Wi-Fi (2); security (3)
Organization of information | Well-organized information that is easy to find | 7 | Information content; easy to find; content arrangement (2); links to other resources (2); single login
Personalization | The ability to manipulate the system to suit personal needs | 5 | Specify personal preference (3); save search; alert service
Flexibility | The ability to search and retrieve easily | 4 | Search (3); download
Content quality | Access to library information and electronic information resources | 13 | Relevant (2); comprehensive (3); up-to-date; trusted; free; accurate (2); link to e-resources (2)
Communication | Able to contact library staff and other library users | 5 | Contact staff; contact users; assistance
Customer relationship | Being kept informed and dealt with in a good manner | 14 | Awareness; keep informed (3); technical help (2); responsive (2); prompt response; courteous; willing to help; understand needs; knowledgeable; assurance
Customer service | Getting assistance in searching and using information | 10 | Help use resources; help search (2); instruction (3); reference; instill confidence
Customer feedback | The ability to convey problems | 3 | Query; prompt response; satisfaction
Reliability | Dependable service that is on time and error-free | 5 | At promised time; error free; solve problems; dependable
Self-reliance | Gaining independence and control using the service | 4 | Independent use (2); control (2)
Functional benefit | Feeling of worth during and after service interaction | 5 | Easy (2); save time; extra information; useful
Emotional benefits | Feeling good during and after service interaction | 3 | Positive feeling (2); innovative

5.3. Data collection (Survey I)

A total of 1000 preliminary questionnaires, along with self-addressed stamped return envelopes, were sent out via mail to academic staff of the four universities from July to August 2008. Research assistants were employed to distribute another 1000 questionnaires to postgraduates. A total of 535 responses were received, indicating a return rate of 26.75%.

5.4. Scale reduction

Data from Survey I were subjected to scale-reduction analyses consistent with those suggested by Churchill (1979), DeVellis (2003), and Parasuraman et al. (2005). Each item was grouped according to the 14 a priori conceptual dimensions from which it was derived. Reliability analysis was conducted by computing Cronbach's alpha coefficients and examining corrected item-to-total correlations. The cutoff value for item retention in the scale was set at a Cronbach's alpha value of 0.70 (Nunnally, 1978).

5.5. Reliability

The first survey, using the 95-item instrument, was subjected to an internal consistency test using Cronbach's alpha. Cronbach's alpha values ranged from 0.835 (site design) to 0.952 (customer service), as seen in Table 2. High Cronbach's alpha values suggested a strong relationship of each item with what its subdimension scale was measuring in general. All inter-item and item–total correlations were fairly high, with item–total correlations typically greater than 0.5.

5.6. Exploratory factor analysis

Exploratory factor analysis (EFA) was conducted to interpret the factor structures. Tabachnick and Fidell (2007) suggested using the principal component analysis (PCA) technique for reducing a large number of variables to a smaller number of components, and as an initial step to reveal the maximum number and nature of factors. Since there was no a priori empirical evidence for the factor structure derived in the previous phase, EFA using PCA as the extraction method was carried out on responses

Table 2
Scale reliability.

Service quality indicators             Range of item–total correlations   Cronbach's alpha   Mean inter-item correlation
Site design/links (6 items)            .510–.716                          .835               .469
Site accessibility (11 items)          .632–.728                          .919               .513
Organization of information (7 items)  .667–.759                          .912               .601
Personalization (5 items)              .693–.793                          .900               .644
Search options (4 items)               .713–.786                          .889               .667
Content quality (13 items)             .664–.800                          .946               .578
Communication (5 items)                .640–.730                          .854               .544
Customer relationship (14 items)       .553–.804                          .948               .589
Customer service (10 items)            .692–.839                          .952               .667
Customer feedback (3 items)            .763–.842                          .894               .738
Reliability (5 items)                  .784–.849                          .927               .721
Self-reliance (4 items)                .731–.787                          .890               .672
Functional benefit (5 items)           .787–.875                          .935               .742
Emotional benefit (3 items)            .782–.797                          .894               .737
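The internal-consistency statistics reported in Table 2 follow the standard formulas. As a minimal sketch (toy 7-point data, numpy assumed; not the study's actual responses), Cronbach's alpha and corrected item–total correlations can be computed as:

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the total of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, i], total - items[:, i])[0, 1]
                     for i in range(items.shape[1])])

# Toy 7-point responses: five respondents, three items
X = np.array([[7, 6, 7],
              [5, 5, 6],
              [3, 4, 3],
              [6, 6, 5],
              [2, 3, 2]])
print(round(cronbach_alpha(X), 2))   # 0.96
print(corrected_item_total(X))
```

For this toy data, all corrected item–total correlations exceed the 0.5 level the study treats as acceptable.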

K. Kiran, S. Diljit / Library & Information Science Research 34 (2012) 184–196


Table 3
Eigenvalues and factor loadings (EFA loadings, oblique rotation).

Factor 1: Access and collections (coefficient alpha = .903); eigenvalue = 13.925
  The Web site is easy to use                                                      .749
  The Web site has links that are all working                                      .771
  The Web site is convenient to access                                             .787
  The Web site is always available from outside the campus                         .755
  There is a menu that helps me quickly understand how content is arranged         .644
  Online information resources are arranged by subject/discipline                  .594
  Provides access to a wide range of electronic resources in my subject area,
    in particular full-text e-journals and online databases                        .515
  Provides trusted information compared to the Internet/the Web                    .499
  The online catalog (OPAC) records are accurate and match the actual collection   .531

Factor 2: Service benefits (coefficient alpha = .897); eigenvalue = 2.278
  Using Web-based services, I can easily get what I am looking for most of the time            .797
  Using Web-based services, I can get the information I am looking for in minimal time and effort  .825
  Using Web-based services, I can get the exact information I'm looking for                    .825
  I feel very happy when I get what I want from the Web-based services                         .745
  The Web-based services have innovative features that are interesting to use                  .752
  Using Web-based services makes me feel the library is truly dedicated to fulfilling my needs .764

Factor 3: Customer relationship (coefficient alpha = .900); eigenvalue = 2.074
  Online librarians interact with me in a courteous manner                .881
  Online librarians are always willing to help me                         .906
  Online librarians understand my specific information needs              .876
  The site allows me the convenience of sending a query/comment online    .687
  The service promptly responds to my online complaints/suggestions       .717

Factor 4: Personalization (coefficient alpha = .877); eigenvalue = 1.582
  I am able to save my searches and display my search history                     .857
  I am able to set up an alert for new materials in my discipline                 .865
  The library system stores all my preferences to offer me extra information      .854

Factor 5: Customer support (coefficient alpha = .911); eigenvalue = 1.350
  The service enables me to determine which electronic resources are most
    relevant to my course needs/research interests                                .857
  There are clear, precise instructions at the point of use                       .870
  The instructions on remote access are easy to follow                            .878

Factor 6: Reliability (coefficient alpha = .894); eigenvalue = 1.180
  Online document delivery requests are dealt with in promised time       .758
  Online interlibrary loan requests are dealt with in promised time       .776
  Materials listed in OPAC can surely be found at the library             .747

Factor 7: Equipment (coefficient alpha = .772); eigenvalue = 1.037
  There are enough working computers to access Web-based services         .840
  There are enough ports for laptop use to access Web-based services      .826

(n = 441). The Monte Carlo PCA for parallel analysis program developed by Marley Watkins (as cited in Pallant, 2007) was used to determine the number of factors to extract. The data were first subjected to Oblimin (oblique) rotation, which provided information about the degree of correlation between factors, followed by Varimax (orthogonal) rotation in order to minimize the complexity of factors (Tabachnick & Fidell, 2007). Oblique rotation was repeated, specifying a seven-factor extraction with a cut-off point of 0.50 for factor loadings (Hair, Black, Babin, Anderson, & Tatham, 2006). Twenty-two items were removed because they loaded below 0.50, or because they loaded highly on two or more factors. The remaining 73 items were rotated again in an iterative process, removing items with low loadings. This resulted in retaining 46 items with sufficient loadings, greater than .50 (Hair et al., 2006). Since the aim was to create a scale with a manageable number of items while including all possible dimensions, the factors were examined to further reduce items within each factor. Highly intercorrelated items were replaced with the next highest-loading item. Items with the highest loadings within each factor were retained, with some exceptions where an item had no counterpart measuring a similar quality indicator and its removal would have caused the loss of valuable information. There are no fixed rules on factor extraction and item retention, however;

the researcher's experience and knowledge determine the content of the scale. A final 31 scale items were retained to measure Web-based library service quality across seven dimensions.

All 31 items were subjected to PCA using oblique rotation. The Kaiser–Meyer–Olkin value was 0.930, and Bartlett's test of sphericity reached statistical significance, supporting the factorability of the correlation matrix. Total variance explained was 73.3%. The access and collections factor showed the highest explanatory power, with the highest eigenvalue (13.9) among the seven factors retained. Table 3 shows the eigenvalues and factor loadings for each factor.

Based on the multilevel model of Dabholkar, Thorpe, and Rentz (1996), Fassnacht and Koese's (2006) hierarchical model of service quality, and the high correlation values among items, the presence of higher-order dimensions was examined. The seven dimensions revolved around three main areas, as was evident during the qualitative data analysis: service quality may be assessed by the environment in which it occurs, the delivery process, and the outcome of the service interaction. Three higher-order factors were proposed, and EFA was carried out by specifying a three-factor extraction for the 31 items. The results corresponded with the proposed hierarchical conceptualization of Web-based service quality. Three factors were extracted, with a total variance of 55.787% and KMO = 0.930 (p = 0.00). The results are summarized in Table 4.
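The factor-retention decision above relied on a Monte Carlo PCA parallel-analysis program. Its logic — retain only components whose eigenvalues exceed those obtained from random data of the same dimensions — can be sketched in a simplified form (synthetic single-factor data, numpy assumed; the cited program may use percentile rather than mean thresholds):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic responses: 441 "respondents", 6 items driven by one latent factor
n, k = 441, 6
latent = rng.normal(size=(n, 1))
X = latent + 0.1 * rng.normal(size=(n, k))  # every item loads on the single factor

def eigenvalues(data):
    """Eigenvalues of the item correlation matrix, sorted descending."""
    corr = np.corrcoef(data, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def parallel_analysis(data, n_sims=50, seed=0):
    """Horn's parallel analysis (mean criterion): retain components whose
    observed eigenvalue exceeds the mean eigenvalue from random data."""
    obs = eigenvalues(data)
    sim_rng = np.random.default_rng(seed)
    sims = np.array([eigenvalues(sim_rng.normal(size=data.shape))
                     for _ in range(n_sims)])
    return int(np.sum(obs > sims.mean(axis=0)))

print(parallel_analysis(X))  # 1 — one dominant factor is retained
```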


Table 4
A three-factor solution.

Factor                   Coefficient alpha   Variance explained (%)
E-Service environment    .900                42.666
E-Service delivery       .916                6.372
E-Service outcome        .904                6.749

1. Factor 1 (access and collections) and Factor 7 (equipment) loaded on the same factor, which was termed "service environment."
2. Factor 3 (customer relationship), Factor 4 (personalization), and Factor 5 (customer support) loaded on the same factor, which was termed "service delivery."
3. Factor 2 (service benefits) and Factor 6 (reliability) loaded on the same factor, which was termed "service outcome."

Building on the results from the focus groups and the EFA of the survey data, a measurement model for Web-based library service quality was proposed; the model is shown in Fig. 2. The final output from the analysis of Survey I data was an instrument consisting of 31 service-quality measures with seven first-order dimensions. The three-factor solution was tested in the next phase.

6. Model verification

6.1. Survey II

A second survey using the reduced 31-item scale was administered at all four universities in April 2009. The sample size was determined by reference to Hinkins (as cited in Mayasuki, 2009), who suggested an item-to-response ratio (the ratio of scale items to the number of subjects) of between 1:4 and 1:10. Since the second scale contained 31 items, the required sample would range between 124 and 310. Therefore, the sample size was planned at 2000, with an expected minimum response rate of 10%. Although confirmatory factor analysis (CFA) calls for random sampling, this was not possible because (1) the exact number of postgraduates was difficult to ascertain, and (2) it was difficult to reach potential respondents individually. Nonrandom convenience sampling was used instead, as it was cost-efficient and time-saving. A total of 441 questionnaires were returned, a 22% response rate. This method affects external validity; therefore, generalization to the target population should be done with caution.

6.2. Confirmatory factor analysis

The second phase of the study attempted to confirm this measurement model through CFA, using data obtained from a second survey at all four participating universities. Prior to CFA and structural equation modeling (SEM), several preliminary analyses ascertained whether the data violated the assumptions of inferential statistics, including checks for missing data, outliers, and violation of the normal-distribution assumption associated with maximum likelihood estimation in CFA. All variables were initially screened for frequencies, means, standard deviations, skewness, and kurtosis.

The means of the service-quality measurement variables ranged between 4.61 and 5.70, indicating a higher than average level of perceived service quality among the respondents. The majority of the measurement variables were negatively skewed towards the high end of "strongly agree," with skewness ranging from −0.276 to −0.742. The kurtosis values for all indicators of service quality were between −0.01 and 0.621. Scores in many self-reported social-services contexts are uniformly negatively skewed, so this skewness does not necessarily indicate a problem with the scale; rather, it reflects the underlying nature of the construct being measured (Pallant, 2007). The clustering of scores on the favorable side reflects the nature of the variables, as most users perceived service quality as above average. Moreover, with a large sample size, skewness, kurtosis, and non-normality will not produce substantive differences in the analysis, which suggests that further analysis is possible with careful interpretation of the results (Hair, Anderson, & Tatham, 1998).

SEM is also sensitive to multicollinearity. According to Pallant (2007), multicollinearity exists when independent variables are highly correlated (r ≥ 0.9). An examination of the correlation matrix showed no correlation above 0.9 between measures; thus, the multicollinearity assumption was not violated.
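The skewness and kurtosis screening described above uses the standard moment formulas. A minimal sketch on a toy symmetric response pattern (numpy assumed; SPSS-style adjusted estimators would differ slightly for small samples):

```python
import numpy as np

def skewness(x):
    """Population skewness: m3 / m2^1.5 (zero for symmetric data)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

def excess_kurtosis(x):
    """Population excess kurtosis: m4 / m2^2 - 3 (zero for a normal curve)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d ** 4) / np.mean(d ** 2) ** 2 - 3.0

likert = [1, 2, 3, 4, 5]        # a symmetric toy response pattern
print(skewness(likert))          # 0.0 (perfectly symmetric)
print(excess_kurtosis(likert))   # -1.3 (flatter than normal)
```

Negative skewness values, like the −0.276 to −0.742 reported here, indicate a tail toward the low end with most scores clustered high.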
The model was empirically tested using the SEM capabilities of AMOS 17.0. First, the paths from the latent constructs to the observed variables were estimated, equivalent to a CFA model. Construct reliability was assessed against three criteria: Cronbach's alpha for all constructs exceeded 0.7 (Nunnally, 1978), composite reliability (CR) values were above 0.7, and average variance extracted (AVE) exceeded 0.5, as shown in Table 5.

Fig. 2. Proposed measurement model for Web-based library service quality.

6.3. Service quality measurement model fit

The measurement model of Web-based library-service quality contained three dimensions and eight sub-dimensions. Table 6 presents the standardized factor loadings, construct reliability, and fit indices used to assess the measurement properties of the model. First, each dimension was examined; several items were eliminated based on the parameter estimates and modification indices to achieve good model fit. A total of 25 items were retained. The outcome quality dimension emerged with three sub-dimensions: reliability, functional benefits, and emotional benefits. The CFA model was used to assess convergent and discriminant validity.

The proposed model was evaluated against "goodness-of-fit" criteria. There is no single statistical test of significance that identifies a correct model given the sample data (Schumacker & Lomax, 2004); according to Hair et al. (1998), no absolute test is available, so the researcher must ultimately decide whether the fit is acceptable. Using the overall goodness-of-fit criteria, a single-factor model and the proposed measurement model were tested.

Table 5
Cronbach's alpha, composite reliability, and average variance extracted.

Dimensions                  Cronbach's alpha, α   CR      AVE
Environment quality         0.899
  Access and collection     0.900                 0.892   0.578
  Equipment                 0.756                 0.756   0.608
Delivery quality            0.941
  Customer relationship     0.915                 0.914   0.781
  Personalization           0.918                 0.918   0.788
  Customer service          0.899                 0.928   0.738
Outcome quality             0.930
  Reliability               0.861                 0.911   0.837
  Functional benefit        0.875                 0.899   0.749
  Emotional benefit         0.891                 0.892   0.734

191

Table 6
Measurement model estimates.

Dimension 1: Environment quality
Items (retained)                                                          Standardized factor loading
Access and collection
  The service provides trusted information compared to the Internet                    .749
  The service provides access to a wide range of electronic resources
    in my subject area                                                                 .797
  Online information resources are clearly arranged by subject                         .781
  The Web site has links that are all working                                          .740
  The Web site is easy to use                                                          .768
  The Web site is convenient to access                                                 .726
Equipment
  There are enough ports for laptop use to access Web-based services                   .762
  There are enough working computers to access Web-based services                      .797
Cronbach's alpha, α = 0.899
Fit indices: χ2/df = 2.261, TLI = .966, AGFI = .913, GFI = .959, CFI = .979, RMSEA = .074

Dimension 2: Delivery quality
Items (retained)                                                          Standardized factor loading
Customer relationship
  Online librarians interact with me in a courteous manner                             .862
  Online librarians are always willing to help me                                      .886
  Online librarians understand my specific information needs                           .902
Personalization
  The library system stores all my preferences to offer me extra information           .865
  I am able to set up an alert for new materials in my discipline                      .901
  I am able to save my searches and display my search history                          .897
Customer support
  The instructions on remote access are easy to follow                                 .875
  There are clear, precise instructions at the point of use                            .849
  The site allows me the convenience of sending a query/comment online                 .869
Cronbach's alpha, α = .941
Fit indices: χ2/df = 1.671, TLI = .986, AGFI = .927, GFI = .737, CFI = .991, RMSEA = .054

Dimension 3: Outcome quality
Items (retained)                                                          Standardized factor loading
Reliability
  Online document delivery requests are dealt with in promised time                    .909
  Online interlibrary loan requests are dealt with in promised time                    .921
Functional benefits
  Using Web-based services, I can easily get what I am looking for most of the time    .88
  Using Web-based services, I can get the exact information I'm looking for            .88
  Using Web-based services, I can get the information I am looking for in
    minimal time and effort                                                            .836
Emotional benefits
  I feel very happy when I get what I want from the Web-based services                 .785
  The Web-based services have innovative features that are interesting to use          .894
  Using Web-based services makes me feel the library is truly dedicated to
    fulfilling my needs                                                                .887
Cronbach's alpha, α = .930
Fit indices: χ2/df = 2.709, TLI = .967, AGFI = .909, GFI = .957, CFI = .980, RMSEA = .086



Table 7
Model comparison.

Models                 χ2/df   TLI    AGFI   GFI    CFI    RMSEA
Single-factor model    6.050   .687   .496   .573   .713   0.148
Second-order factor    1.795   .951   .827   .860   .957   0.059

As shown in Table 7, in the single-factor model the ratio of chi-square to degrees of freedom was 6.05, the Tucker–Lewis index (TLI) = 0.687, the adjusted goodness-of-fit index (AGFI) = 0.496, and the comparative fit index (CFI) = 0.713, all below the minimum acceptable 0.90 level (Hair et al., 1998). Thus, the single-factor model was rejected. The measurement model, the second-order factor model, showed a good fit to the data: χ2 = 470.310, p < 0.001; χ2/df = 1.795; CFI = 0.957; TLI = 0.951; and root mean square error of approximation (RMSEA) = 0.059. The CFI and TLI were well above the recommended values, and the RMSEA was below 0.06. The values of AGFI (0.827) and GFI (0.860), however, were lower than 0.90 (Hair et al., 2006). These indices are sensitive to degrees of freedom and penalize complex models; since the measurement model is complex, and because of the high correlation among variables, the lower AGFI value was deemed acceptable.

6.4. Convergent validity

Convergent validity was assessed by examining the AVE values for the eight sub-dimensions and three dimensions. The values ranged from 0.578 to 0.837, all above 0.5, providing evidence of convergent validity (Fornell & Larcker, 1981).

6.5. Discriminant validity

Fornell and Larcker (1981) suggested assessing discriminant validity by comparing the AVE estimate for each construct with its squared correlations with every other construct; the AVE values should be greater than any squared correlation between pairs of constructs. As shown in Table 8, this criterion was fulfilled for all first-order factors, so each first-order sub-dimension was conceptually distinct. Fig. 3 shows the standardized parameter estimates for the second-order measurement model.
Combining the results of the convergent and discriminant validity assessments with the overall assessment of the psychometric properties provided evidence that the measurement model had sound measures. The squared multiple correlations for the second-order variables in Fig. 3 indicate the proportion of variance explained in the measurement model: the model explains 95% of the variance for the outcome quality dimension (making it the best predictor of service quality), 87% for environment quality, and 78% for delivery quality.

Table 8
Assessment of discriminant validity.

Constructs                  1     2     3     4     5     6     7     8
1. Emotional benefit        0.73
2. Functional benefit       0.66  0.75
3. Reliability              0.49  0.53  0.84
4. Cust. support            0.56  0.60  0.44  0.75
5. Personalization          0.42  0.45  0.34  0.66  0.79
6. Cust. relationship       0.41  0.44  0.33  0.65  0.49  0.78
7. Equipment                0.36  0.39  0.29  0.36  0.28  0.27  0.61
8. Access and collection    0.56  0.60  0.44  0.56  0.43  0.42  0.50  0.58

Note: AVE values are shown on the diagonal; off-diagonal values are squared correlations.
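The Fornell–Larcker check in Table 8 can be automated: each construct's AVE (the diagonal) must exceed every squared correlation involving that construct. A sketch using three of the tabled constructs (emotional benefit, functional benefit, reliability), numpy assumed:

```python
import numpy as np

# AVE values and squared inter-construct correlations from Table 8
ave = np.array([0.73, 0.75, 0.84])
sq_corr = np.array([
    [0.00, 0.66, 0.49],
    [0.66, 0.00, 0.53],
    [0.49, 0.53, 0.00],
])

def fornell_larcker_ok(ave, sq_corr):
    """True if each construct's AVE exceeds its squared correlation
    with every other construct (discriminant validity criterion)."""
    k = len(ave)
    for i in range(k):
        for j in range(k):
            if i != j and ave[i] <= sq_corr[i, j]:
                return False
    return True

print(fornell_larcker_ok(ave, sq_corr))  # True
```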

6.6. Nomological validity

To assess nomological validity, the model was subjected to model-fit testing. Numerous studies have shown a direct positive effect of service quality on customer satisfaction (Cronin, Brady, & Hult, 2000; Fassnacht & Koese, 2006; Ladhari, 2009; Landrum, Prybutok, & Zhang, 2007), which in academic library Web site usage may lead to intention to use (Heinrichs, Lim, Lim, & Spangenberg, 2007; Kiran & Diljit, 2011). Overall customer satisfaction was represented by a single-item measure. Nomological validity was supported: the standardized coefficient for the path from service quality to overall customer satisfaction was 0.75 (significant at p < 0.01), with 0.57 of the variance explained.
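The reported path coefficient of 0.75 and variance explained of 0.57 are mutually consistent: in a simple standardized regression, variance explained equals the squared standardized coefficient (0.75² ≈ 0.56). A toy simulation (hypothetical data, numpy assumed) illustrates the identity:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated "service quality" and "satisfaction" scores
x = rng.normal(size=300)
y = 0.75 * x + rng.normal(size=300) * 0.66

def standardize(v):
    return (v - v.mean()) / v.std()

xs, ys = standardize(x), standardize(y)
slope = (xs @ ys) / (xs @ xs)          # standardized path coefficient
resid = ys - slope * xs
r2 = 1 - (resid @ resid) / (ys @ ys)   # variance explained (R squared)

print(round(slope, 2), round(r2, 2))   # R squared equals slope squared
```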

7. Discussion

The results of this study provide two important contributions to understanding and assessing the service quality of Web-based services in academic libraries. First, Web-based library-service quality has been identified as a multilevel hierarchical construct that is more than customers' judgment of the differences between their expectations and actual service performance. The use of grounded data from library customers provides fresh insight into how library customers view the quality of Web-based services, although, because only postgraduates participated in the focus groups, the themes drawn from them to represent service-quality dimensions may be correspondingly limited. Service quality is not only about the technology involved in delivering the service, but also about the process and outcome of the service.

Since library users are customers with specific information needs, and they access the Web-based service to locate and access information materials, the outcome dimension is the key attribute in judging service quality. What matters most is that library customers perceive the service to be functional: it got them the right information with minimal time and effort, and it had an emotionally positive impact on them during their interaction with the Web service. This attribute is also evident in the delivery process, where users perceived customer-support services to be important. Online instructions, help with searching for needed information, and the ability to communicate with the library are all parts of the information-search process that matter to users. Another important attribute is the ability of the Web service to provide easy access to a well-organized collection of information resources. When customers access the Web service, they judge how easy it is to gain access, in terms of the availability of the right equipment and the convenience of using the Web site.
The results suggest that during interaction with the service, the customer looks for online support to assist in the information search; this assistance minimizes the customer's time and effort and makes the search process more efficient. As the customer leaves the service, there are two main concerns: whether the customer got what he or she came for, and whether the customer leaves with a pleasant feeling, regardless of whether the information need has been fulfilled.

Second, Web-based service quality is conceptualized as an attitude built on experience during service consumption, and the performance-only battery is a sound measure of service quality. Academic libraries should rely on reliable, validated assessment tools to help them make informed decisions about how their services are perceived. Suzinor and Kiran (2009) found that among Malaysian academic libraries, including those surveyed here, no reliable and validated assessment tool was used to measure library service quality, even though a quality management system was being implemented at these institutions. Library managers must adopt empirically tested tools in order to make informed decisions concerning service measurement and improvement.


Fig. 3. Modified measurement model.

The practical implications of these findings address academic library managers. First, library managers must realize that although Web services are sometimes defined as free from direct human interaction, in library services the relationship between the librarian and the user cannot be ignored. Contrary to Hernon and Altman (1998), not all users in the Web environment are self-reliant. Quinn (1997), writing about adapting commercially oriented service-quality models to the library environment, argued that a hybrid of the commercial demand-oriented model and the library's didactic service model can benefit library services. Quinn found that librarians feel their role is not to give users directly what they want, but to educate them and so enhance their intellectual development; users, on the other hand, may be dissatisfied because they did not immediately get what they wanted. This is where good customer relationships may help reduce user frustration and prevent customers from going away dissatisfied (Quinn, 1997). Partridge, Menzies, Lee, and Munro (2010) found that in the 21st-century library, librarians are still expected to have strong interpersonal-communication and customer-service skills. A librarian's ability to develop new relationships with his or her users will help build communities in which the librarian's authoritative role evolves into a more synergistic partnership with library users (Partridge et al., 2010).

Second, library managers cognizant of the competition from commercial information providers must find ways to encourage users to commit their loyalty to library resources. Clearly, library customers consider customer support an important quality attribute. In the electronic-service literature, support has been treated as "recovery service," that is, service provided only when the customer faces a problem; Parasuraman et al. (2005) even developed a separate scale to measure service-recovery quality. In the library service context, however, this support may be equated with reference service or library instruction. Reference services and bibliographic instruction have always been key library services that help library users efficiently and effectively search, retrieve, and use the rich resources of the library. The results suggest that users consider online instruction, and likewise help in selecting relevant resources, to be important. Therefore, library managers should focus on training service staff in communication and interpersonal skills in an online environment, so that they can communicate clearly with customers. This enhances the role of the library customer and builds a sense of belonging, which influences behavioral intentions such as loyalty. The importance that customers place on functional and emotional benefits shows that library users want to feel important, and that they appreciate innovative features that make an interaction more interesting. There


has to be a conscious effort to make interaction with Web-based services a useful and pleasant experience.

It is strongly recommended that in any scale-development exercise, the statistical analysis be carried out on properly screened data, as required by each test. To avoid flawed assumptions, enough time must be allocated for the purification of the measures. Strong theoretical knowledge about the phenomenon under study becomes the base upon which the researcher interprets the intensive analysis of raw qualitative data and builds the relationships in the conceptual model. Another important recommendation is that this scale be empirically tested in Web services beyond the six services in this study; that limit was set because of the nature of the services provided at the university libraries surveyed. Although these four libraries generally represent the other university libraries in Malaysia, that may not be the case for academic libraries outside Malaysia. More recent library services that rely on social Web technologies may share many dimensions of the Web-based service-quality conceptualization, but the use of this scale must be empirically tested in each case. It is also recommended that the results of using this scale be seen not as indicators of good or bad services, but as aids to understanding institutional and user differences and similarities (Kyrillidou, 2001). The results of such an assessment can be used to review and improve services (Hernon, 2002), signaling to library users that their feedback has benefited service delivery, which may further encourage greater response from library users in future assessment exercises.

The primary benefits of this study are twofold. In terms of research, representative dimensions of Web-based service quality, and the relevant indicators that measure these dimensions, have been established.
The cognitive and affective components of this conceptualization of service quality offer a more comprehensive understanding of Web-based library services than earlier assumptions that the evaluation of service quality is mostly cognitive (Parasuraman et al., 1988). Furthermore, by adhering to the scale-construction methodology suggested by Churchill (1979) and DeVellis (2003), a service-quality model that fits the data with acceptable internal consistency and validity has been constructed. The empirical validation of the measurement instrument enriches theory building in service quality. In terms of the theoretical conceptualization of service quality, it has been shown that "performance-only" is a sound measure of perceived library-service quality.

For library practitioners, service environment, delivery, and outcome are nearly equally important facets of Web-based service quality. All three dimensions exert a strong influence on overall service-quality perceptions, leading to customer satisfaction. The provision of any Web service should involve not just offering a technologically advanced service, but also going one step further to ensure that the service serves its actual purpose. Dabholkar and Overby (2005) cautioned that if the outcome of the service is not what customers want, they will remember it, and this negative experience will influence further interactions with the service.

8. Limitations

In terms of research methodology, the focus group samples were largely drawn from postgraduate students and are thus not representative of all researchers at the selected universities. The measurement instrument developed is limited to assessing the service quality of the eight Web-based services selected; at the point of data collection, only these eight Web services were actively offered at all four universities.
Application of the scale to other Web-based library services is not recommended without testing its reliability and validity in each case. Finally, although only 25 of the original 95 items remained in the scale, compared to SERVQUAL and LibQUAL®, this scale's dimensions are built on grounded data from Web-based library-service users, and they are therefore specific to Web-based library services.

9. Conclusion

Library-service quality assessments have depended mainly on two measurement tools: the SERVQUAL gap model of Parasuraman et al. (1988) and the LibQUAL® tool of Thompson and Cook (2002). Both measures were developed based on the conceptualization of service quality in traditional library services. Abundant research replicating these measurement tools in various service settings has reached no conclusive decision on the conceptual definition of electronic service quality or its dimensions, prompting further research into the problem. The findings here contribute fresh insight into the conceptualization of Web-based library-service quality, specifically in academic libraries. The results provide empirical evidence supporting the multidimensional hierarchical nature of a Web-based library-service quality construct consisting of three second-order dimensions (environment quality, delivery quality, and outcome quality) and their corresponding eight first-order sub-dimensions.

The Web-based library service quality survey tool offers a basic user-centered approach to service assessment in academic libraries. It is built upon the performance-only measure, and the service-quality construct is conceptualized as a multilevel hierarchical construct with second-order factors. The findings clearly support the idea that, in addition to concentrating on the technical development of new services, libraries should not neglect basic library services, such as reference and bibliographic instruction, in the Web environment. The primary role of the professional librarian as an instructor facilitating the intellectual growth of the library user towards greater self-reliance (Quinn, 1997) is still pertinent in the Web environment.
Because of the nature of Web services, with their reduced face-to-face interaction, there is a need for increased understanding of customer service and customer relations concepts. In the quest for quality services, the academic library should continually take steps to earn a reputation for quality and value, which will result in loyal customers in this highly competitive Internet age.

Acknowledgments

This project was funded by a grant from the University of Malaya (FR187/2007A).

References

Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423.
Babakus, E., & Boller, G. (1992). An empirical assessment of the SERVQUAL scale. Journal of Business Research, 24(3), 253–268.
Badri, M. A., Mohamed, A., & Abdelwahab, A. (2005). Information technology center service quality: Assessment and applications of SERVQUAL. International Journal of Quality & Reliability Management, 22(8), 819–848.
Bertot, J. C. (2003). World libraries on the information superhighway: Internet-based library services. Library Trends, 52(2), 209–228.
Bertot, J. C. (2004, March). Assessing digital library services: Approaches, issues and considerations. Paper presented at the International Symposium on Digital Libraries and Knowledge Communities in Networked Information Society (DLKC'04), University of Tsukuba, Japan. Retrieved from http://www.kc.tsukuba.ac.jp/dlkc/
Brady, M. K., Cronin, J. J., & Brand, R. R. (2002). Performance-only measurement of service quality: A replication and extension. Journal of Business Research, 55(1), 17–31.
Buttle, F. (1996). SERVQUAL: Review, critique, research agenda. European Journal of Marketing, 30(1), 8–32.
Calvert, P. J. (2008). Assessing the effectiveness and quality of libraries (Unpublished doctoral dissertation). Victoria University of Wellington, New Zealand. Retrieved from http://researcharchive.vuw.ac.nz/handle/10063/1045
Carman, J. M. (1990). Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions. Journal of Retailing, 66(1), 33–55.
Caruana, A., Ewing, M. T., & Ramaseshan, B. (2000). Assessment of the three-column format SERVQUAL: An experimental approach. Journal of Business Research, 49(1), 57–65.

K. Kiran, S. Diljit / Library & Information Science Research 34 (2012) 184–196

Chao, H. Y. (2002). Assessing the quality of academic libraries on the Web: The development and testing of criteria. Library and Information Science Research, 24, 169–194.
Chowdhary, N., & Prakash, M. (2007). Prioritizing service quality dimensions. Managing Service Quality, 17(5), 493–509.
Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16(1), 64–73.
Collier, J. E., & Bienstock, C. C. (2006). Measuring service quality in e-retailing. Journal of Service Research, 8(3), 260–275.
Cook, C. (2001). A mixed-method approach to the identification and measurement of academic library service quality constructs: LibQUAL+™ (Doctoral dissertation). Available from ProQuest Dissertation Abstracts International. (UMI No. 3020024)
Cook, C., & Heath, F. M. (2001). Users' perceptions of library service quality: A LibQUAL+ qualitative study. Library Trends, 49(4), 548–584.
Creswell, J. W. (2008). Educational research: Planning, conducting and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Pearson Education.
Cronin, J. J., Brady, M. K., & Hult, G. T. M. (2000). Assessing the effects of quality, value and customer satisfaction on consumer behavioral intentions in service environments. Journal of Retailing, 76(2), 193–218.
Cronin, J. J., & Taylor, S. A. (1992). Measuring service quality: A re-examination and extension. Journal of Marketing, 56(3), 55–68.
Cronin, J. J., & Taylor, S. A. (1994). SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality. Journal of Marketing, 58(1), 125–131.
Dabholkar, P. A., & Overby, J. W. (2005). Linking process and outcome to service quality and customer satisfaction evaluations: An investigation of real estate agent service. International Journal of Service Industry Management, 16(1), 10–27.
Dabholkar, P. A., Shephard, D. C., & Thorpe, D. I. (2000). A comprehensive framework for service quality: An investigation of critical conceptual and measurement issues through a longitudinal study. Journal of Retailing, 76(2), 139–173.
Dabholkar, P. A., Thorpe, D. I., & Rentz, J. O. (1996). A measure of service quality for retail stores: Scale development and validation. Journal of the Academy of Marketing Science, 24(1), 3–16.
DeVellis, R. F. (2003). Scale development: Theory and applications (Applied Social Research Methods Series, 26). Thousand Oaks, CA: Sage.
Edwards, S., & Browne, M. (1995). Quality in information services: Do users and librarians differ in their expectations? Library and Information Science Research, 17, 163–182.
Fassnacht, M., & Koese, I. (2006). Quality of electronic services: Conceptualizing and testing a hierarchical model. Journal of Service Research, 9(1), 19–37.
Fornell, C., & Larcker, D. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 61–67.
Gonçalves, M. A., Fox, E. A., Watson, L. T., & Moreira, B. L. (2007). What is a good digital library? A quality model for digital libraries. Information Processing and Management, 43, 1416–1437.
Gounaris, S. (2005). Measuring service quality in b2b services: An evaluation of the SERVQUAL scale vis-à-vis the INDSERV scale. Journal of Services Marketing, 19(6), 421–435.
Green, J. P. (2006). Determining the reliability and validity of service quality scores in a public library context: A confirmatory approach (Doctoral dissertation). Available from ProQuest Dissertation Abstracts International. (UMI No. 3241793)
Griffiths, J. R., & Brophy, P. (2005). Student searching behavior and the Web: Use of academic resources and Google. Library Trends, 53(4), 539–554.
Hair, J. F., Jr., Anderson, R. E., & Tatham, R. L. (1998). Multivariate data analysis (5th ed.). Upper Saddle River, NJ: Prentice-Hall.
Hair, J. F., Jr., Black, B., Babin, B., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Prentice-Hall.
Heinrichs, J. H., Lim, K. S., Lim, J. S., & Spangenberg, M. A. (2007). Determining factors of academic library Web site usage. Journal of the American Society for Information Science and Technology, 58, 2325–2334.
Henderson, K. (2005). Marketing strategies for digital library services. Library Review, 54(6), 342–345.
Hernon, P. (2002). Outcomes are key but not the whole story. Journal of Academic Librarianship, 28, 54–55.
Hernon, P., & Altman, E. (1998). Assessing service quality: Satisfying the expectations of library customers. Chicago, IL: American Library Association.
Hernon, P., & Calvert, P. (2005). E-service quality in libraries: Exploring its features and dimensions. Library and Information Science Research, 27, 377–404.
Hill, F. M. (1995). Managing service quality in higher education: The role of the student as primary consumer. Quality Assurance in Education, 3(3), 10–21.
Ho, C.-I. (2007). The development of an e-travel service quality scale. Tourism Management, 28(6), 1434–1449.
Hong, M., & Bassham, M. W. (2007). Embracing customer service in libraries. Library Management, 28(1/2), 53–61.
Hu, P. J. H., Brown, S. A., Thong, J. Y., Chan, F. K. Y., & Kar, Y. T. (2009). Determinants of service quality and continuance intention of online services: The case of eTax. Journal of the American Society for Information Science and Technology, 60, 292–306.
Imrie, B. C., Cadogan, J. W., & McNaughton, R. (2002). The service quality construct on a global stage. Managing Service Quality, 12(1), 10–18.
Jun, M., & Cai, S. (2001). The key determinants of Internet banking service quality: A content analysis. International Journal of Bank Marketing, 19(7), 276–291.
Kaur, K., & Diljit, S. (2008). Exploring user experiences with digital library services: A focus group approach. In G. Buchanan, M. Masoodian, & S. J. Cunningham (Eds.), ICADL '08: Proceedings of the 11th International Conference on Asian Digital Libraries: Universal and ubiquitous access to information (pp. 285–293). Berlin, Germany: Springer-Verlag.
Kiran, K., & Diljit, S. (2011). Antecedents of customer loyalty: Does service quality suffice? Malaysian Journal of Library and Information Science, 16(2), 95–113.


Kyrillidou, M., & Giersch, S. (2005, July). Developing the DigiQUAL protocol for digital library evaluation. Paper presented at the MERLOT International Conference, Nashville, TN. Retrieved from http://www.arl.org/ststs/nemmeas/emetrics/
Ladhari, R. (2009). Service quality, emotional satisfaction, and behavioral intentions: A study in the hotel industry. Managing Service Quality, 19(3), 308–331.
Lancaster, F. W. (1993). If you want to evaluate your library (2nd ed.). London: Library Association Publishing.
Landrum, H., & Prybutok, V. R. (2004). A service quality and success model for the information service industry. European Journal of Operational Research, 156(3), 628–642.
Landrum, H., Prybutok, V. R., & Zhang, X. (2007). A comparison of Magal's service quality instrument with SERVPERF. Information & Management, 44(1), 104–113.
Li, L. L. (2006). Leveraging quality Web-based library user services in the digital age. Library Management, 27(6/7), 390–400.
Loiacono, E. T., Watson, R. T., & Goodhue, D. L. (2002). WebQual: A measure of Web site quality. In K. R. Evans & L. K. Scheer (Eds.), 2002 AMA Winter Educator's Conference: Marketing theory and applications, February 22–25, 2002 (pp. 432–437). Chicago, IL: American Marketing Association. Retrieved from http://users.wpi.edu/~eloiacon/WebQual/AMAPaper.pdf
Mayasuki, Y. (2009). Engaging consumers through innovation: Measuring event innovativeness in spectator sports (Doctoral dissertation). Available from ProQuest Dissertation Abstracts International. (UMI No. 3374055)
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222.
Nitecki, D. A. (1996). Changing the concept and measure of service quality in academic libraries. Journal of Academic Librarianship, 22, 181–190.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
Page, T. J., & Spreng, R. A. (2002). Difference scores versus direct effects in service quality measurement. Journal of Service Research, 4(3), 184–192.
Pallant, J. F. (2007). SPSS survival manual: A step-by-step guide to data analysis using SPSS for Windows (3rd ed.). Sydney, Australia: Allen & Unwin.
Parasuraman, A., Berry, L. L., & Zeithaml, V. A. (1991). Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67, 420–450.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41–50.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1994). Reassessment of expectations as a comparison standard in measuring service quality: Implications for further research. Journal of Marketing, 58(1), 111–124.
Parasuraman, A., Zeithaml, V. A., & Malhotra, A. (2005). E-S-QUAL: A multiple-item scale for assessing electronic service quality. Journal of Service Research, 7(3), 213–233.
Partridge, H., Menzies, V., Lee, J., & Munro, C. (2010). The contemporary librarian: Skills, knowledge and attributes required in a world of emerging technologies. Library and Information Science Research, 32, 265–271.
Poll, R. (2005, August). Measuring the impact of new library services. Paper presented at the World Library and Information Congress: 71st IFLA General Conference and Council: Libraries: A voyage of discoveries, Oslo, Norway. Retrieved from http://archive.ifla.org/IV/ifla71/Programme.htm
Quinn, B. (1997). Adapting service quality concepts to academic libraries. Journal of Academic Librarianship, 23, 359–369.
Ross, L., & Sennyey, P. (2008). The library is dead, long live the library! The practice of academic librarianship and the digital revolution. Journal of Academic Librarianship, 34, 145–152.
Roszkowski, M., Baky, J., & Jones, D. (2005). So which score on the LibQUAL+™ tells me if library users are satisfied? Library and Information Science Research, 27, 424–439.
Rust, R., & Lemon, K. N. (2001). E-service and the consumer. International Journal of Electronic Commerce, 5(3), 83–99.
Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates.
Shachaf, P., & Oltmann, S. M. (2007). E-quality and e-service equality. In Proceedings of the Fortieth Hawaii International Conference on System Sciences (HICSS-40). Los Alamitos, CA: IEEE Press. Retrieved from https://scholarworks.iu.edu/dspace/handle/2022/3717
SPSS AMOS 17.0 (Version 17) [Computer software]. New York, NY: IBM.
Suzinor, K., & Kiran, K. (2009). Quality management of reference services in Malaysian public university libraries. International Journal of Libraries and Information Services, 5(2), 104–113.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston, MA: Pearson Education.
Thompson, B., & Cook, C. (2002). Stability of the reliability of LibQUAL+™ scores: A "reliability generalization" meta-analysis study. Educational and Psychological Measurement, 62(4), 735–743.
Van Dyke, T. P., Prybutok, V. R., & Kappelman, L. A. (1999). Cautions on the use of the SERVQUAL measure to assess the quality of information systems services. Decision Sciences, 30(3), 877–891.
Waite, K. (2006). Task scenario effects on bank web site expectations. Internet Research, 16(1), 7–22.
Wilkins, H., Merrilees, B., & Herington, C. (2007). Towards an understanding of total service quality in hotels. International Journal of Hospitality Management, 26(4), 840–853.
Wolfinbarger, M., & Gilly, M. C. (2003). eTailQ: Dimensionalizing, measuring and predicting etail quality. Journal of Retailing, 79(3), 183–198.
Yang, Z., Cai, S., Zhou, Z., & Zhou, N. (2005). Development and validation of an instrument to measure user-perceived service quality of information presenting Web portals. Information & Management, 42, 575–589.


Yen, H. R. (2005). An attribute-based model of quality satisfaction for Internet self-service technology. Service Industries Journal, 25(5), 641–659.
Yoo, B., & Donthu, N. (2001). Developing a scale to measure the perceived service quality of Internet shopping sites (SITEQUAL). Quarterly Journal of Electronic Commerce, 2(1), 31–47.
Yu, L., Hong, Q., Gu, S., & Wang, Y. (2008). An epistemological critique of gap theory based library assessment: The case of SERVQUAL. Journal of Documentation, 64, 511–551.
Zeithaml, V. A., Bitner, M. J., & Gremler, D. D. (2006). Services marketing: Integrating customer focus across the firm (4th ed.). New York, NY: McGraw-Hill Irwin.

Kiran K. holds a Ph.D. from the University of Malaya and is a senior lecturer in the Department of Library & Information Science, Faculty of Computer Science & Information Technology, University of Malaya. Her research publications have appeared in Collection Building, Electronic Library, Journal of Problem-based Learning, Libri, Library Review, Library Management, Malaysian Journal of Library & Information Science, Kekal Abadi, and Lecture Notes in Computer Science. Her research interests include information services, service quality, quality management, CRM, academic libraries, social networking, and Library 2.0. She is an executive editor of the Malaysian Journal of Library & Information Science.

Diljit S. received his doctorate from Florida State University and is a consultant at the Faculty of Computer Science & Information Technology, University of Malaya, where he was previously the deputy dean for postgraduate studies. He is a member of the International Federation of Library Associations and Institutions (IFLA) and the president of the International Association of School Librarianship (IASL). His research areas include library and information science education, management of information services, information literacy, and school libraries.
