
Int. J. Mobile Communications, Vol. 12, No. 2, 2014

Students' evaluation of learning management systems in the personal computer and smartphone computing environments

Wooje Cho, Yoonhyuk Jung* and Jin-Hyouk Im

School of Technology Management, Ulsan National Institute of Science and Technology (UNIST), UNIST-gil 50, Ulsan Metropolitan City, Republic of Korea
Fax: +82-52-217-3101
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Abstract: The objective of this study is twofold: 1) to investigate the impact of software quality attributes of a learning management system (LMS) on students' satisfaction in both the personal computer (PC) and smartphone settings; 2) to examine whether LMS quality factors have different impacts in the smartphone context than in the PC context. We explore five quality attributes of an LMS (capability, usability, performance, reliability and documentation). Data from a survey of 193 students were analysed using ordered logit regression. Findings showed that while only usability and reliability significantly affected user satisfaction in the PC context, all the quality attributes except documentation had a significant influence on user satisfaction in the smartphone setting. We also found that reliability was twice as important to user satisfaction in the smartphone context as in the PC context. The results imply that LMS quality attributes affect students' satisfaction differently in the smartphone context than in the PC context.

Keywords: LMS; learning management system; mobile learning; software quality; user satisfaction.

Reference to this paper should be made as follows: Cho, W., Jung, Y. and Im, J-H. (2014) 'Students' evaluation of learning management systems in the personal computer and smartphone computing environments', Int. J. Mobile Communications, Vol. 12, No. 2, pp.142–159.

Biographical notes: Wooje Cho is an Assistant Professor in the School of Technology Management at UNIST. He earned his doctoral degree from the University of Illinois at Urbana-Champaign and his Master's degree from Carnegie Mellon University. His research interests lie in IT strategy, information security, software quality and e-education.

Yoonhyuk Jung is an Assistant Professor in the School of Technology Management at UNIST. He holds a PhD in Business Administration (Information Systems & Decision Sciences) from Louisiana State University. His main research interest is users' sensemaking and adoption of emerging information technologies, with a focus on social technologies, wireless technology applications and health information systems.

Copyright © 2014 Inderscience Enterprises Ltd.


Jin-Hyouk Im is a Professor in the School of Technology Management at UNIST. He earned his doctoral degree from the University of Nebraska-Lincoln and his Master's degree from the University of Hawaii at Manoa. His research interests lie in e-education, e-government, e-community and software piracy.

1 Introduction

Currently, information technology (IT) is an essential component supporting teaching and learning. For example, a course website is now a vital way in which an instructor can communicate with students and distribute class materials. Furthermore, IT is regarded as a silver bullet that can fundamentally transform education, rather than merely a supplementary tool (Christensen et al., 2008). In an effort to reform educational systems, the Obama administration has established a new agency, called the Advanced Research Projects Agency-Education, which is modelled after the Defense Advanced Research Projects Agency. Included in the federal budget for 2012 is a plan to invest $90 million in developing cutting-edge educational technologies (Parry, 2011). This news suggests that IT will be used to enable a restructuring of the educational sphere and that traditional educational systems will be incorporated into newer IT-based educational environments.

One of the most common web-based educational technologies is the learning management system (LMS); examples include Blackboard, Moodle and Sakai. An LMS is defined as a web-based educational platform that "integrates a wide range of pedagogical and course administration tools" (Coates et al., 2005). Over 97% of universities in the USA have deployed LMSs (Green, 2009). LMSs not only allow students to have enriched learning experiences but also enable instructors to manage courses efficiently and effectively (Coates, 2005). Recently, as the number of open educational resources (e.g., MIT OpenCourseWare) on the internet has increased and textbook publishers have produced more online course content that can be delivered through LMSs, the role of LMSs in education has become more important than ever before. Furthermore, the advance of mobile technologies and networks makes LMSs more useful educational tools by enabling access to course materials and communication among users anytime, anywhere. Students increasingly access LMSs through advanced mobile devices, such as smartphones. More than two-thirds of universities in the USA plan to use mobile LMS applications in the future (Green, 2010).

The significant impact of LMSs on the educational field has been studied extensively (e.g., Landry et al., 2006; Saade and Bahli, 2005; Sanchez-Franco, 2010). Much of the prior research has investigated LMSs by using a technology adoption framework, in which the perceived usefulness and ease of use of the system have been identified as the main factors influencing users' intention to use it. This body of research contributes to the current understanding of the importance of user adoption and its determinants; however, previous findings have provided little information on how developers can improve the quality of LMSs to satisfy users' needs. Furthermore, despite the increasing use of mobile-version LMSs, there is a paucity of research on the quality of LMS software in the mobile environment. In an effort to provide more useful information to LMS developers as well as universities using LMSs, we examine quality dimensions of an LMS that may affect students' satisfaction with the LMS, in both the PC and the smartphone contexts. More specifically, we examine five quality dimensions of software (capability, usability, performance, reliability and documentation) based on the work of Kekre et al. (1995), a popular framework for evaluating software quality.

2 Theoretical foundations and hypotheses

2.1 Prior research on adoption of an LMS

Because this study focuses on users' assessment of the software itself, satisfaction is used as the outcome variable rather than use intention, which can be affected by non-software factors (e.g., subjective norm, task relevance). Our exploration of factors influencing users' satisfaction can still be placed within the adoption research, in that satisfaction is a strong determinant of intention and actual use (Roca et al., 2006). The relation between satisfaction and adoption is supported by expectation-confirmation theory (Oliver, 1980), which argues that consumer satisfaction leads to repurchase behaviour. Thus, we introduce prior research on the adoption of an LMS as relevant literature.

Although a wide range of research has examined LMSs or web-based learning environments in general, much of it has focused on users' adoption of the technology, building on the technology acceptance model (TAM) of Davis et al. (1989), which has been widely used to explain user adoption of new technologies. The TAM theorises that an individual's behavioural intention to accept a new technology depends on two factors: its perceived usefulness, defined as the extent to which a person believes that using the technology will enhance his or her job performance, and its perceived ease of use, defined as the extent to which a person believes that using the technology will be free of effort (Davis et al., 1989). Together with these two antecedents of intention, the research on adoption of web-based learning has explored other variables with the potential to influence users' adoption. Liaw (2002) found that enjoyment and self-efficacy play significant roles in students' intention to use web-based learning environments, and Saade and Bahli (2005) examined a research model combining the TAM and cognitive absorption. In addition, Roca et al. (2006) developed and examined a model integrating the TAM and expectation-confirmation theory (Oliver, 1980) and identified important factors affecting users' adoption, such as perceived quality, subjective norms and cognitive absorption.

More relevant to the context of this study, some researchers have investigated the role of system-related factors in the adoption of web-based learning technologies in addition to the TAM variables. For example, Pituch and Lee (2006) found that system characteristics, such as functionality, interactivity and responsiveness, played a significant role in students' acceptance of an LMS, and Ngai et al. (2007) reported that technical support for LMS users was an important factor in their adoption of the LMS. Liu et al. (2009) studied the adoption of an LMS by type of media (e.g., text-audio, audio-video and text-video presentation). Even though prior research has dealt partially with system-related factors in user adoption of web-based learning, almost no research has examined users' evaluations from a software development standpoint. In other words, the prior adoption research has provided LMS developers with little information on how they might improve the LMS software.


From a software development perspective, research is also lacking on mobile learning, which supports teaching/learning or education services with mobile computing (Trifonova and Ronchetti, 2006). As with research on LMS users, prior research on users of mobile learning has also relied on the TAM (e.g., Chong et al., 2011; Ho et al., 2010). A few studies have suggested a design requirements framework (Parsons et al., 2007), usability guidelines (Seong, 2006), or an evaluation framework for mobile learning environments (Parsons and Ryu, 2006). Although these studies have provided conceptual frameworks for mobile learning development, few have included an empirical analysis. In fact, some empirical studies that examined users' perceptions and assessment of mobile learning were conducted in pseudo settings using mobile technology (e.g., Motiwalla, 2007), or examined only one type of existing mobile service, such as a mobile data service (e.g., Stone et al., 2002; Uzunboylu et al., 2009), to assess the utility of mobile technologies in education. Hence, a gap still exists in the empirical research based on users' actual experience of using an LMS with a mobile device. Considering that people increasingly access the internet through mobile devices, it is meaningful to investigate users' actual experience with the mobile version of an LMS, which could offer valuable information to LMS developers.

2.2 Research model

We developed a research model to identify drivers of user satisfaction with an LMS. In particular, we investigated potential LMS software quality factors in both a PC environment and a smartphone environment. Applying the research framework of Kekre et al. (1995), we examined the effects of the capability, usability, performance, reliability and documentation of the system on user satisfaction. Though installability and maintainability were included in the original framework as well, we excluded them because those quality attributes are not applicable to student users of an LMS. Our research framework is shown in Figure 1.

Figure 1 Research framework


To identify the relationship between software quality attributes and user satisfaction in the contemporary setting of LMS use, we investigated the smartphone context as well as the PC context. With the advance of mobile devices and wireless networks, LMS vendors have provided mobile computing environments by releasing LMS mobile applications. Though mobile applications have some commonalities with PC software applications in terms of quality attributes, mobile computing environments differ from PC settings (Gafni, 2009). Our aim is to identify the drivers of user satisfaction with the mobile version of an LMS separately from the PC version, in light of the unique characteristics of mobile computing.

2.3 Software quality of an LMS

In the past, software quality meant that the execution of a program did not deviate from its intended behaviour (Osterweil, 1996), and the number of errors, or reliability, was the main criterion used to evaluate software quality. However, the concept of software quality has become broader. In addition to reliability, other characteristics, such as usability, capability, maintainability and documentation, determine overall software quality. Boehm (1978) and McCall et al. (1977) identified various dimensions of software quality, and ISO 9126 specifies a software quality model that establishes a standard for software quality measurement. Studies in the software engineering field have developed applied research frameworks based on the ISO software quality model (Cavano and McCall, 1978; Kekre et al., 1995; Krishnan and Subramanyam, 2004; Losavio et al., 2003).

The quality of a product or a service can be defined as the degree of conformance to customers' requirements (Crosby, 1979). Thus, customer satisfaction can be considered a key indicator of product or service quality. To gain insight into the quality of an LMS from the standpoint of end-users, we adopted the software quality model of Kekre et al. (1995), which has been used to explore users' assessment of software. To identify the determinants of LMS quality, we hypothesised that capability, usability, performance, reliability and documentation would each affect user satisfaction.

Capability refers to the number of key features offered, compared with the desired number of key features. It indicates the functionality of the product, that is, how well key features are supported by the software. Prior studies have found that the perceived usefulness of an LMS significantly determines students' acceptance of it (Saade and Bahli, 2005; Selim, 2003; Park, 2009). For an LMS to be fully utilised in a course, a range of functions should be provided. The basic functions of an LMS include the ability to send announcements, post course content, post assignments and give students access to their grades, and as IT has advanced, more diverse functions have been provided. If users cannot find the functions they expect, they are likely to be less satisfied. Thus, we hypothesise that a positive relationship exists between the capability of an LMS and overall user satisfaction.

For the same reason, capability can also affect users' satisfaction with mobile applications. Furthermore, capability may be an even more vital factor influencing user satisfaction in the mobile context. Because mobile devices have more limited capabilities than PCs, mobile applications usually provide fewer functions. Students who use both versions may not be able to find certain functions in the smartphone version that they can find in the PC version. Thus, capability will be a critical factor in user satisfaction with mobile LMS applications.

H1a: In the PC-based LMS context, capability will have a positive effect on user satisfaction.

H1b: In the smartphone-based LMS context, capability will have a positive effect on user satisfaction.

Even though there are various definitions of usability, in this study we use a narrow one: the ease of use of the software. Usability can be defined as the ease with which a system can be learned and used (Nielsen, 1993). Usability determines how quickly users can complete their intended work, and thus the efficiency of their work or computing. The more features the software offers, the more complex the user interfaces tend to become and the more important the usability of the system becomes. In the context of students' learning in higher education, advanced ITs enable instructors to include a variety of class activities, and for this reason, LMSs have provided new functions. Because an LMS should support students' learning without requiring additional effort to use it, ease of use will considerably affect user satisfaction with the system.

Among the software quality dimensions, usability has received the greatest attention in the mobile computing context, because the screens on mobile devices are much smaller than those on PCs and because mobile systems are used in very diverse contexts (Kjeldskov and Stage, 2004; Kjeldskov and Skov, 2003; Danesh et al., 2001). Excellent usability of a web commerce site does not mean that the corresponding mobile commerce site will have excellent usability (Venkatesh et al., 2003). In a learning environment, poor usability of mobile devices and applications will hinder the learning process (Chong et al., 2011). Thus, we hypothesised that the usability of an LMS mobile application will affect user satisfaction.

H2a: In the PC-based LMS context, usability will have a positive effect on user satisfaction.

H2b: In the smartphone-based LMS context, usability will have a positive effect on user satisfaction.

Reliability indicates how rarely the software fails in executing tasks or functions. In the past, this attribute alone was regarded as representing software quality, as mentioned earlier. As with most software programs, no student user would expect to encounter errors when working with an LMS. For students, the reliability of the program is directly related to their academic performance. Thus, their satisfaction with the reliability of an LMS is likely to affect their overall satisfaction with it.

In mobile computing, data are transmitted through either wireless local area networks (e.g., Wi-Fi) or mobile communication networks (e.g., 3G), which are relatively less reliable than wired networks (Mitchell, 2011; Ghaderi et al., 2008). Wireless networks can be disconnected more often than wired local area networks, and wireless signals are subject to interference from other appliances using radio waves. Users of an LMS smartphone application would be concerned about their ability to complete operations or transactions. Thus, satisfaction with the reliability of the LMS is likely to influence overall user satisfaction in the smartphone context as well.


H3a: In the PC-based LMS context, reliability will have a positive effect on user satisfaction.

H3b: In the smartphone-based LMS context, reliability will have a positive effect on user satisfaction.

Performance refers to the response time required for an operation to be executed or for outputs to be displayed. Technically, the performance of a system can be influenced by program code that determines the use of memory, storage space and other computing resources. It can also be influenced by the system infrastructure, such as network speed. Disutility resulting from a slow system would discourage students from adopting the LMS. In particular, performance is a critical factor in environments where the volume of users is large. On a large campus, performance is more likely to be an issue unless the software product has been developed well enough that its use of system resources is optimal. We therefore hypothesise that a relationship exists between performance and user satisfaction.

The speed of an operation in a smartphone application can be reduced by an element anywhere in the hardware, software or network. In particular, because the average speed of wireless networks is lower, and the variance in speed larger, than in wired networks, users will be concerned about the speed of smartphone applications. Thus, performance will be a critical factor in students' satisfaction with the smartphone application of an LMS.

H4a: In the PC-based LMS context, performance will have a positive effect on user satisfaction.

H4b: In the smartphone-based LMS context, performance will have a positive effect on user satisfaction.

Documentation refers to the quality of the documents provided with the software product, such as manuals, help documents and white papers. Because few schools have the resources to train students in using the LMS or to operate call service centres for them, students who have problems working with the LMS will, in most cases, need to solve those problems by themselves. Well-written help files and manuals allow students to learn about the system and solve problems. Thus, we hypothesise that documentation will influence user satisfaction with the LMS.

In the same manner, manuals and help documents can help students learn about the LMS and solve problems by themselves in the smartphone context. Because the screen of a smartphone may be too small for users to consult documents comfortably, documents for the mobile application need to be designed simply and intuitively. Thus, we hypothesise that a relationship exists between documentation and user satisfaction for smartphone users.

H5a: In the PC-based LMS context, documentation will have a positive effect on user satisfaction.

H5b: In the smartphone-based LMS context, documentation will have a positive effect on user satisfaction.

3 Method

3.1 Data

Survey data were used to identify drivers of user satisfaction with an LMS. Data were collected from undergraduate students at a university in South Korea in 2011. At the university, the Blackboard system (Blackboard Inc., Washington, DC) is used extensively by almost all instructors and students according to school policy. Students were asked to answer questions about their experience with the PC version and the smartphone version of Blackboard. At this university, the majority of students use the smartphone version as well as the PC version; since 2009, the university, in partnership with a leading Korean telecommunications company, has financially supported all faculty members and students in purchasing Apple iPhones.

The questionnaire asked students to rate their overall satisfaction with Blackboard, as well as each of the five potential determinants of user satisfaction. We used the questionnaire items of Krishnan and Subramanyam (2004). Additional information was collected on individual respondents, including age, gender and major. Participation in the survey was voluntary, not a compulsory assignment. We provided a link to the survey, with a description of the study, through a Blackboard announcement, in cooperation with the university's Center for Teaching and Learning.

Two hundred and thirteen students responded to the survey. After records containing missing values were removed, our analysis used 193 of the 213 responses. Among the 193 students, 167 reported using both the PC version and the smartphone application and provided responses for both versions; the remaining 26 students provided responses for the PC version only.

Overall user satisfaction with the LMS was the dependent variable in our research model. We measured it on a 5-level scale, which yields an ordered ordinal variable: because the integer values do not represent respondents' exact degrees of satisfaction, the variable is neither purely numeric nor nominal.

3.2 Model

An ordered logit regression was used to test our hypotheses. Ordinary least squares is not an appropriate tool when the dependent variable is discrete rather than continuous (Kennedy, 2003): the regression would be inefficient, and the estimates could fall outside the range of the dependent variable. Instead, an ordered logit or ordered probit regression can be used for ordered ordinal dependent variables (Kennedy, 2003; McKelvey and Zavoina, 1975). We used an ordered logit regression to examine the determinants of overall user satisfaction with the LMS; the logit and probit models can be used interchangeably in most cases (Agresti, 1996). The ordered logit model simultaneously estimates multiple equations based on categories of the dependent variable. Because the variables were measured on a 5-point scale, four equations were examined. Table 1 provides summary statistics for these variables.


Table 1 Summary statistics

| Variable | PC mean | PC SD | PC min. | PC max. | Smartphone mean | Smartphone SD | Smartphone min. | Smartphone max. |
|---|---|---|---|---|---|---|---|---|
| User satisfaction | 3.11 | 0.98 | 1 | 5 | 2.83 | 1.08 | 1 | 5 |
| Capability (CAP) | 3.18 | 0.94 | 1 | 5 | 2.94 | 1.05 | 1 | 5 |
| Usability (USA) | 3.25 | 1.04 | 1 | 5 | 3.23 | 1.00 | 1 | 5 |
| Reliability (REL) | 3.10 | 1.05 | 1 | 5 | 2.65 | 1.04 | 1 | 5 |
| Performance (PER) | 2.64 | 1.10 | 1 | 5 | 2.44 | 1.02 | 1 | 4 |
| Documentation (DOC) | 2.96 | 0.93 | 1 | 5 | 2.74 | 0.93 | 1 | 4 |

The ordered logit model had the following form:

$$\operatorname{logit}(p_j) = \log\frac{p_j}{1 - p_j} = \alpha_j + \boldsymbol{\beta}'\mathbf{X}, \qquad j = 1, 2, 3, 4, \qquad (1)$$

where $p_j = P(Y > j)$. Table 2 shows how the ordinal categories of the dependent variable are pooled. The variable $p_j$ refers to the probability of being in the set of upper categories rather than the set of lower categories in equation (j) of Table 2. The vector $\mathbf{X}$ is a vector of explanatory variables, $\boldsymbol{\beta}$ is the associated vector of parameters, and $\alpha_j$ is a constant. The ordered logit regression yields results similar to those obtained by running a series of binary logistic regressions (Williams, 2006), but it provides only one set of coefficients for each variable because it assumes that the coefficients do not differ significantly across the equations. We tested this assumption of constant effects across categories using the omodel command in Stata and found that it was not violated in the model for either the PC or the smartphone version. Had the assumption been violated, we would have used a generalised ordered logit model instead.

Table 2 Equations for the ordered logit model

| Equation (j) | Lower categories (Y ≤ j) | Upper categories (Y > j) |
|---|---|---|
| Equation (1) | 1 | 2+3+4+5 |
| Equation (2) | 1+2 | 3+4+5 |
| Equation (3) | 1+2+3 | 4+5 |
| Equation (4) | 1+2+3+4 | 5 |

We chose the four equations in Table 2, following the ordered logit method, to incorporate the discrete, ordinal nature of the dependent variable. This allows us to examine the effects of the quality factors at particular levels of user satisfaction. For example, the impact of usability on overall user satisfaction when a user's satisfaction score is 1 may differ from its impact when the score is 4; equation (1) captures the former case and equation (4) the latter.
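To make the pooling in Table 2 concrete, the sketch below fits one binary logistic regression per cut point and prints the slope estimates; roughly stable slopes across the four fits are what the constant-effects (proportional odds) assumption, tested in the paper with Stata's omodel command, requires. This is an illustrative Python approximation, not the authors' Stata code; the file and column names are hypothetical.

```python
# Informal check of the constant-effects (proportional odds) assumption:
# fit one binary logit per cut point j of Table 2, pooling categories
# {1..j} vs. {j+1..5}, and compare the slope estimates across the fits.
# File and column names are hypothetical illustrations.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("lms_survey_pc.csv")   # hypothetical: 5-point Likert columns
X = sm.add_constant(df[["CAP", "USA", "REL", "PER", "DOC"]])

for j in range(1, 5):                   # the four equations of Table 2
    y_upper = (df["SAT"] > j).astype(int)
    fit = sm.Logit(y_upper, X).fit(disp=False)
    print(f"Equation ({j}):")
    print(fit.params.round(3))          # roughly stable slopes support the assumption
```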


Using the form of equation (1), we specified two models to identify the drivers of user satisfaction with the LMS in the PC context and in the mobile context, respectively:

$$\operatorname{logit}(p_j) = \alpha_j + \boldsymbol{\beta}'\mathbf{X} = \alpha_j + \beta_1\,\mathrm{CAP}_p + \beta_2\,\mathrm{USA}_p + \beta_3\,\mathrm{REL}_p + \beta_4\,\mathrm{PER}_p + \beta_5\,\mathrm{DOC}_p, \qquad (2)$$

$$\operatorname{logit}(p_j) = \alpha_j + \boldsymbol{\beta}'\mathbf{X} = \alpha_j + \beta_1\,\mathrm{CAP}_m + \beta_2\,\mathrm{USA}_m + \beta_3\,\mathrm{REL}_m + \beta_4\,\mathrm{PER}_m + \beta_5\,\mathrm{DOC}_m, \qquad (3)$$

where the five explanatory variables, CAP, USA, REL, PER and DOC, refer to capability, usability, reliability, performance and documentation, respectively, and the subscripts p and m denote the PC and mobile contexts.
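For readers working in Python rather than Stata, models (2) and (3) could be estimated with statsmodels' OrderedModel along the following lines; again, the file and column names are assumptions for illustration, since the survey data are not public.

```python
# Ordered logit of overall satisfaction on the five quality attributes,
# estimated separately for the PC and smartphone contexts (models (2), (3)).
# File and column names are hypothetical; the survey data are not public.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

quality = ["CAP", "USA", "REL", "PER", "DOC"]

for context, path in [("PC", "lms_survey_pc.csv"),
                      ("smartphone", "lms_survey_mobile.csv")]:
    df = pd.read_csv(path)
    model = OrderedModel(df["SAT"], df[quality], distr="logit")
    res = model.fit(method="bfgs", disp=False)
    print(f"--- {context} context ---")
    print(res.summary())   # five slope estimates plus four threshold terms
```

Note that statsmodels parameterises the thresholds differently from equation (1), but positive slope estimates carry the same interpretation: higher quality ratings shift satisfaction towards higher categories.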

4 Results

4.1 Determinants of LMS quality in the PC-based context

Estimates of the parameters were all positive, but only those for usability and reliability were significant (p < 0.05), as shown in Figure 2. Capability and performance showed marginal effects, with p-values of 0.055 and 0.066, respectively. It is interesting to note that documentation did not have a significant effect. In terms of the magnitude of influence, the effect of usability on user satisfaction was the greatest; its estimated coefficient was much larger than that of any other variable. The model had a pseudo R² of 0.180. Correlations between the five explanatory variables are shown in Table 3. All the correlation coefficients were significant.

Figure 2 Coefficients of software quality attributes of the LMS for the PC environment

*p < 0.05, **p < 0.01, ***p < 0.001. The solid lines indicate a significant path, whereas the dashed lines indicate a non-significant path (N = 193).
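The paper does not state which pseudo R² is used; assuming McFadden's likelihood-ratio index, which Stata reports by default for ordered logit, the measure is

$$R^2_{\text{McFadden}} = 1 - \frac{\ln \hat{L}_{\text{model}}}{\ln \hat{L}_{\text{null}}},$$

where $\hat{L}_{\text{model}}$ is the maximised likelihood of the fitted model and $\hat{L}_{\text{null}}$ that of a thresholds-only null model. On this reading, the 0.180 for the PC model means the five quality attributes improve the log-likelihood by 18% over the null model.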

Table 3 Pearson correlation coefficients between software quality attributes in the PC-based context

| | CAP | USA | REL | PER | DOC |
|---|---|---|---|---|---|
| CAP | 1.000 | | | | |
| USA | 0.495 | 1.000 | | | |
| REL | 0.292 | 0.315 | 1.000 | | |
| PER | 0.251 | 0.267 | 0.405 | 1.000 | |
| DOC | 0.454 | 0.410 | 0.227 | 0.265 | 1.000 |

CAP = capability; USA = usability; REL = reliability; PER = performance; DOC = documentation. All the correlation coefficients are significant (p < 0.01).

4.2 Determinants of LMS quality in the smartphone-based context

In the smartphone environment, capability and performance, as well as usability and reliability, significantly affected user satisfaction at the 5% level, as shown in Figure 3. However, documentation again had no significant effect. Regarding the magnitude of the effects, the coefficients for usability and reliability were greater than those for any other quality attribute. The pseudo R² of the model was 0.251. Correlations between the five explanatory variables are shown in Table 4.

Figure 3 Coefficients of software quality attributes of the LMS for the smartphone environment

*p < 0.05, **p < 0.01, ***p < 0.001. The solid lines indicate a significant path, whereas the dashed lines indicate a non-significant path (N = 167).

Table 4 Pearson correlation coefficients between software quality attributes in the smartphone context

| | CAP | USA | REL | PER | DOC |
|---|---|---|---|---|---|
| CAP | 1.000 | | | | |
| USA | 0.470 | 1.000 | | | |
| REL | 0.401 | 0.529 | 1.000 | | |
| PER | 0.260 | 0.307 | 0.521 | 1.000 | |
| DOC | 0.324 | 0.410 | 0.364 | 0.441 | 1.000 |

CAP = capability; USA = usability; REL = reliability; PER = performance; DOC = documentation. All the correlation coefficients are significant (p < 0.01).

5 Discussion

5.1 Drivers of user satisfaction in the PC-based LMS version

It is interesting that capability had only a marginal effect on user satisfaction, with a p-value of 0.055, contradicting the findings of Kekre et al. (1995) and Krishnan and Subramanyam (2004), who showed that capability strongly affected overall user satisfaction. A possible explanation for this result is the routinisation of students' use of the LMS. Unlike instructors, who might need to explore various functions of the LMS for their own course design, students are more likely to use only the functions they are expected to use. For instance, students download lecture notes and upload their assignments when their instructor asks them to do so. Students might use a limited number of LMS functions repeatedly, allowing their usage to become routinised.

Usability had the greatest impact on user satisfaction. In the past, computers were mainly used by individuals familiar with computer technologies, and usability mattered less than it does today; in the study by Kekre et al. (1995), the estimated coefficient of usability was much smaller than that of capability. Today, however, most people use computer software for work or entertainment, even when they are not familiar with computer technologies. Thus, the importance of ease of learning and using software applications has increased. With the advance of user-friendly interfaces, users' expectations for intuitive, easy-to-use interfaces have also grown. This trend may be one reason for the strong impact of usability on user satisfaction in this study.

Another interesting finding is that performance did not have a significant effect on user satisfaction. Like capability, performance was identified in earlier studies as a significant quality attribute affecting user satisfaction (Kekre et al., 1995; Krishnan and Subramanyam, 2004). A plausible explanation is that the variation in internet speed today has become much smaller than in the past; our finding may reflect the fact that application speed was not critical in the PC-based LMS context.

Documentation was far from being a significant determinant of user satisfaction, although it was a significant factor according to earlier studies (Kekre et al., 1995; Krishnan and Subramanyam, 2004). This finding can be explained by the advanced user-friendly interfaces of present-day software applications. Today, fewer users refer to manuals before beginning to use an application. In particular, because the LMS is a tool college students use to support their learning, it is designed to minimise the effort required for students to learn how to use it.

5.2 Drivers of user satisfaction for the smartphone-based LMS version

Our results for LMS users in the smartphone environment differed from those in the PC environment. The effects of usability, reliability and performance were highly significant; capability's effect was significant but marginal (p = 0.047); and documentation had little effect on user satisfaction.

For capability, an explanation similar to that in the previous section can be offered for the mobile context: students use LMS functions based on instructors' guidelines, rather than at their own discretion, so capability may have had only a marginal effect on user satisfaction.

In view of the differences between smartphone and PC interfaces, such as small screens and touch-sensitive keypads, the importance of usability in mobile computing has been stressed in prior studies (Kiili, 2002; Kjeldskov and Skov, 2003). Because an operation in the smartphone context can be hindered by network problems, reliability and performance would also be important factors: wireless connections drop more frequently than wired local area network connections, and because of the small screen and sensitive keypads on smartphones, human errors in computing, such as typos, are more likely to occur.

5.3 Comparison between the PC and smartphone computing environments

Figure 4 compares the estimated coefficients of the five software quality attributes for the PC context and the smartphone context. The coefficients of capability and usability for the two contexts are similar. However, some differences were observed in the effects of reliability, performance and documentation.

Figure 4 Estimated coefficients of software quality attributes for the PC and smartphone computing environments

CAP = capability; USA = usability; REL = reliability; PER = performance; DOC = documentation.

The greater coefficient of reliability in the smartphone context can be explained in two ways. First, data transfer in smartphone computing is less reliable than in the PC environment. Smartphones use either mobile communication networks or wireless local area networks, and because users carrying a smartphone may move around, the wireless connection is more likely to drop or slow down. In addition, when many students try to access Wi-Fi simultaneously in a limited area, such as a classroom, connections can fail. Second, smartphone users may make data-entry errors more often than PC users because of the small screen; for instance, they might hit the wrong buttons or make typos.

The effect of performance on user satisfaction was not significant in the PC context but was highly significant in the smartphone context. There are two possible explanations for this result. First, multitasking is available in the PC environment but not in typical smartphone computing contexts. In the PC context, if a program or website responds slowly, a user can begin another task while waiting for the response; in the smartphone context, the user has to wait, so the speed of an operation is critical. Second, the variation in network speed in smartphone computing is much larger than in the PC environment. Thus, poor performance in the smartphone context may cause an operation or transaction to fail, whereas poor performance in the PC context may not be as critical to completing a task.

In both the PC and the smartphone contexts, the effect of documentation was not significant, but the estimated coefficient in the smartphone context was much smaller than that in the PC context. The small screen of a smartphone provides less favourable conditions for consulting manuals or help documents than a PC screen. Thus, compared with PC users, most smartphone users may be much less concerned about documentation.

6 Conclusion

In this study, we not only examined the impact of five quality attributes of an LMS on students' satisfaction but also explored how those attributes influence satisfaction differently across the two contexts (i.e., PC and smartphone). The overall finding is that usability and reliability are key determinants of students' satisfaction in both settings, whereas performance (i.e., system speed) is a significant factor only in the smartphone context. Considering the increasing adoption of LMSs, and mobile LMSs in particular, this study makes contributions that could benefit both academic researchers and practitioners.

This research makes three theoretical contributions to the field of LMS studies. First, our model extends the discussion of an LMS from the technology adoption framework to quality dimensions. Most prior studies on LMSs have investigated them using the TAM framework. These studies have demonstrated the importance of user adoption and suggested the need to examine its determinants; however, those findings have provided little information on how developers might improve LMS quality. By taking an alternative view in investigating an LMS, we offer vendors more information for quality management. Second, few empirical studies have analysed data on users' actual experiences with an LMS in the mobile environment; we analysed data reflecting students' actual experiences with mobile applications. Third, few studies have compared the quality of LMS software in a PC-based version vs. a smartphone-based version. Our analysis shows that the factors affecting user satisfaction differ between these two distinctive computing environments.

In the practice of LMS use, our study can help software vendors allocate their development resources more efficiently. Because every vendor has limited resources and timelines within which to develop software products, understanding the key drivers of user satisfaction can increase efficiency. By understanding the different priorities of software quality attributes in the PC and mobile versions, managers at vendors who face resource constraints can allocate their resources effectively across the versions. In addition, the results of this study may help schools adopt an LMS more successfully or boost adoption of the system. This study thus complements prior LMS studies that used the adoption framework for successful implementation of an LMS.

This study also has limitations. Although we examined five main attributes of software derived from Kekre et al.'s (1995) evaluation framework, other software attributes may affect users' satisfaction depending on the context. For example, given that diverse types of software can be used simultaneously, interoperability can be a critical attribute in both the PC and the smartphone contexts. The amount of battery power required by the software can be an important attribute affecting users' evaluation, particularly in the smartphone version of an LMS. Accordingly, future research needs to develop and examine a research model including other potential software attributes appropriate to the mobile environment. Another limitation is that we examined the overall functioning of an LMS rather than specific functions (e.g., online tests, discussion boards, display of course materials). Users may evaluate each LMS function differently, and those evaluations may also vary by context. Therefore, to provide a richer understanding of users' assessment of the mobile LMS, future research should scrutinise evaluations by mobile LMS function. Finally, the low explanatory power of our regression models is another limitation: the models explained 18% of users' satisfaction in the PC environment and 25% in the smartphone environment. We suspect that our small sample size could be one reason; a sample of 193 may be comparatively small for clarifying the influence of software attributes on satisfaction. Another possible reason is the uncontrolled diversity of use contexts. In the PC environment, students could access the LMS from different device environments (e.g., desktop or laptop, wired or Wi-Fi), and there might also be differences in the characteristics of their smartphone devices (e.g., screen size, CPU capacity, access network speed). Such differences may affect satisfaction with the software. Therefore, we expect that future research that develops and examines a model encompassing the diversity of use contexts will offer a better understanding of users' satisfaction with LMS software.

Acknowledgement

This work was supported by the research fund of UNIST.


References

Agresti, A. (1996) An Introduction to Categorical Data Analysis, John Wiley, New York.

Boehm, B.W. (1978) Characteristics of Software Quality, North-Holland, Amsterdam and New York.

Cavano, J.P. and McCall, J.A. (1978) 'A framework for the measurement of software quality', Proceedings of the Software Quality Assurance Workshop on Functional and Performance Issues, ACM, New York, NY.

Chong, J-L., Chong, A.Y-L., Ooi, K-B. and Lin, B. (2011) 'An empirical analysis of the adoption of m-learning in Malaysia', International Journal of Mobile Communications, Vol. 9, pp.1–18.

Christensen, C., Johnson, C.W. and Horn, M.B. (2008) Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns, McGraw-Hill, New York.

Coates, H. (2005) 'Leveraging LMSs to enhance campus-based student engagement', EDUCAUSE Quarterly, Vol. 28, pp.66–68.

Coates, H., James, R. and Baldwin, G. (2005) 'A critical examination of the effects of learning management systems on university teaching and learning', Tertiary Education and Management, Vol. 11, pp.19–36.

Crosby, P.B. (1979) Quality is Free, McGraw-Hill, New York.

Danesh, A., Inkpen, K., Lau, F., Shu, K. and Booth, K. (2001) 'Geney: designing a collaborative activity for the Palm handheld computer', Proceedings of CHI 2001, ACM, New York, pp.388–395.

Davis, F.D., Bagozzi, R.P. and Warshaw, P.R. (1989) 'User acceptance of computer technology: a comparison of two theoretical models', Management Science, Vol. 35, pp.982–1003.

Gafni, R. (2009) Measuring Quality of M-Learning Information Systems, Information Science Press, Santa Rosa, CA.

Ghaderi, M., Towsley, D. and Kurose, J. (2008) 'Reliability gain of network coding in lossy wireless networks', INFOCOM 2008: The 27th Conference on Computer Communications, IEEE, 13–18 April, pp.2171–2179.

Green, K. (2009) 'The 2009 campus computing survey', EDUCAUSE [Online], Available from: http://www.campuscomputing.net/survey

Green, K. (2010) 'The 2010 campus computing survey', EDUCAUSE [Online], Available from: http://www.campuscomputing.net/survey

Ho, C-T.B., Chou, Y-T. and O'Neill, P. (2010) 'Technology adoption of mobile learning: a study of podcasting', International Journal of Mobile Communications, Vol. 8, pp.468–485.

Kekre, S., Krishnan, M.S. and Srinivasan, K. (1995) 'Drivers of customer satisfaction for software products: implications for design and service support', Management Science, Vol. 41, pp.1456–1470.

Kennedy, P. (2003) A Guide to Econometrics, Blackwell Publishing, Malden, MA.

Kiili, K. (2002) 'Evaluating WAP usability: 'what usability?'', Proceedings of the IEEE International Workshop on Wireless and Mobile Technologies in Education 2002, pp.169–170.

Kjeldskov, J. and Skov, M.B. (2003) 'Evaluating the usability of a mobile collaborative system: exploring two different laboratory approaches', Proceedings of the 4th International Symposium on Collaborative Technologies and Systems 2003, SCS Press, Orlando, FL, pp.134–141.

Kjeldskov, J. and Stage, J. (2004) 'New techniques for usability evaluation of mobile systems', International Journal of Human-Computer Studies, Vol. 60, pp.599–620.

Krishnan, M.S. and Subramanyam, R. (2004) 'Quality dimensions in e-commerce software tools: an empirical analysis of North American and Japanese markets', Journal of Organizational Computing and Electronic Commerce, Vol. 14, pp.223–241.


Landry, B.J.L., Griffeth, R. and Hartman, S. (2006) 'Measuring student perceptions of Blackboard using the technology acceptance model', Decision Sciences Journal of Innovative Education, Vol. 4, pp.87–99.

Liaw, S.S. (2002) 'An internet survey for perceptions of computer and World Wide Web: relationship, prediction, and difference', Computers in Human Behavior, Vol. 18, pp.17–35.

Liu, S.H., Liao, H.L. and Pratt, J.A. (2009) 'Impact of media richness and flow on e-learning technology acceptance', Computers & Education, Vol. 52, pp.599–607.

Losavio, F., Chirinos, L., Lévy, N. and Ramdane-Cherif, A. (2003) 'Quality characteristics for software architecture', Journal of Object Technology, Vol. 2, pp.133–150.

McCall, J.A., Richards, P.K. and Walters, G.F. (1977) Factors in Software Quality, Volumes I–III, US Rome Air Development Center Reports, US Department of Commerce, USA.

McKelvey, R.D. and Zavoina, W. (1975) 'A statistical model for the analysis of ordinal level dependent variables', Journal of Mathematical Sociology, Vol. 4, pp.103–120.

Mitchell, B. (2011) Wired vs. Wireless Networking: Comparing Wireless LAN Technology with Wired, About.com [Online], Available from: http://compnetworking.about.com/cs/homenetworking/a/homewiredless_2.htm

Motiwalla, L.F. (2007) 'Mobile learning: a framework and evaluation', Computers & Education, Vol. 49, pp.581–596.

Ngai, E.W.T., Poon, J.K.L. and Chan, Y.H.C. (2007) 'Empirical examination of the adoption of WebCT using TAM', Computers & Education, Vol. 48, pp.250–267.

Nielsen, J. (1993) Usability Engineering, Academic Press, London.

Oliver, R.L. (1980) 'A cognitive model for the antecedents and consequences of satisfaction', Journal of Marketing Research, Vol. 17, pp.460–469.

Osterweil, L. (1996) 'Strategic directions in software quality', ACM Computing Surveys, Vol. 28, pp.738–750.

Park, S. (2009) 'An analysis of the technology acceptance model in understanding university students' behavioral intention to use e-learning', Educational Technology & Society, Vol. 12, pp.150–162.

Parry, M. (2011) 'Why the Obama administration wants a DARPA for education', The Chronicle of Higher Education [Online], Available from: http://chronicle.com/blogs/wiredcampus/why-the-obama-administration-wants-a-darpa-for-education/30179

Parsons, D. and Ryu, H. (2006) 'A framework for assessing the quality of mobile learning', The 6th IEEE International Conference on Advanced Learning Technologies, Kerkrade, Netherlands.

Parsons, D., Ryu, H. and Cranshaw, M. (2007) 'A design requirements framework for mobile learning environments', Journal of Computers, Vol. 2, pp.1–8.

Pituch, K.A. and Lee, Y-K. (2006) 'The influence of system characteristics on e-learning use', Computers & Education, Vol. 47, pp.222–244.

Roca, J.C., Chiu, C.M. and Martinez, F.J. (2006) 'Understanding e-learning continuance intention: an extension of the Technology Acceptance Model', International Journal of Human-Computer Studies, Vol. 64, pp.683–696.

Saade, R. and Bahli, B. (2005) 'The impact of cognitive absorption on perceived usefulness and perceived ease of use in on-line learning: an extension of the technology acceptance model', Information & Management, Vol. 42, pp.317–327.

Sanchez-Franco, M.J. (2010) 'WebCT – the quasimoderating effect of perceived affective quality on an extending Technology Acceptance Model', Computers & Education, Vol. 54, pp.37–46.

Selim, H.M. (2003) 'An empirical investigation of student acceptance of course websites', Computers & Education, Vol. 40, pp.343–360.

Seong, D.S.K. (2006) 'Usability guidelines for designing mobile learning portals', 3rd International Conference on Mobile Technology, Applications & Systems, Bangkok, Thailand.


Stone, A., Briggs, J. and Smith, C. (2002) 'SMS and interactivity – some results from the field, and its implications on effective uses of mobile technologies in education', IEEE International Workshop on Wireless and Mobile Technology in Education, Växjö, Sweden.

Trifonova, A. and Ronchetti, M. (2006) 'Hoarding content for mobile learning', International Journal of Mobile Communications, Vol. 4, pp.459–476.

Uzunboylu, H., Cavus, N. and Ercag, E. (2009) 'Using mobile learning to increase environmental awareness', Computers & Education, Vol. 52, pp.381–389.

Venkatesh, V., Ramesh, V. and Massey, A.P. (2003) 'Understanding usability in mobile commerce', Communications of the ACM, Vol. 46, pp.53–56.

Williams, R. (2006) 'Generalized ordered logit/partial proportional odds models for ordinal dependent variables', Stata Journal, Vol. 6, pp.58–82.