The Pennsylvania State University The Graduate School College of Engineering

HUMAN FACTORS CONSIDERATIONS IN QUALITY OF SERVICE METRICS FOR HEALTHCARE DELIVERY

A Thesis in Industrial Engineering by Lesley Strawderman

© 2005 Lesley Strawderman

Submitted in Partial Fulfillment of the Requirements for the Degree of

Doctor of Philosophy

December 2005

The thesis of Lesley Strawderman was reviewed and approved* by the following:

Richard J. Koubek
Professor of Industrial Engineering
Head of the Department of Industrial and Manufacturing Engineering
Thesis Advisor
Chair of Committee

M. Jeya Chandra Professor of Industrial Engineering

Andris Freivalds Professor of Industrial Engineering

Joel Haight Assistant Professor of Industrial Health and Safety

*Signatures are on file in the Graduate School

ABSTRACT

The purpose of this study was to examine the use of human factors techniques in the service sector and to develop a modified measurement instrument for service quality that includes human factors considerations. To model service quality, six dimensions were proposed. Tangibles, reliability, responsiveness, assurance, and empathy are dimensions commonly used to measure service quality; these five dimensions are measured through a survey instrument termed SERVQUAL. A sixth dimension, usability, was added in a modified survey instrument termed SERVUSE. To examine the predictive power of both instruments, 200 patients at an on-campus health clinic, University Health Services, were surveyed. The survey measured subject expectations and perceptions regarding the service system. Gap scores were calculated as the difference between these two measures: positive gap scores reflected exceeding customer expectations, while negative gap scores reflected failing to meet them.

Three response variables were examined: perceived quality, satisfaction, and behavioral intention. The analysis completed in the project allowed for the examination of service quality at University Health Services. The system obtained an overall gap score of -0.357, showing that customer expectations were not met. The largest problems included the convenience of operating hours and the empathy of the UHS staff. Ratings on the three dependent variables were high, showing that patients were relatively satisfied even though the system's performance fell short of their expectations.

Both measurement tools, SERVQUAL and SERVUSE, were found to be significant predictors of service quality, satisfaction, and behavioral intention in this healthcare setting. Results varied based on which of the three dependent variables was being measured, showing that they are indeed independent responses. Usability was found to be a significant predictor of service quality, satisfaction, and behavioral intention. It also adds significant predictive value to the regression models when the dependent variable is behavioral intention. Therefore, usability should be included as a factor when measuring service quality.

TABLE OF CONTENTS

LIST OF FIGURES .................................................................... vii
LIST OF TABLES ..................................................................... viii
ACKNOWLEDGEMENTS ................................................................. x
Chapter 1 Introduction .............................................................. 1
Chapter 2 Literature Review ......................................................... 3
  2.1 What are services? ............................................................ 3
  2.2 Human Factors ................................................................. 7
  2.3 Human Factors in Service Industries .......................................... 9
  2.4 Service Quality ............................................................... 11
    2.4.1 Dimensions of Service Quality ............................................ 12
    2.4.2 Measuring Service Quality ................................................ 17
    2.4.3 Service Quality Gap Models ............................................... 18
    2.4.4 Benefits and Limitations of SERVQUAL .................................... 21
  2.5 Human Factors and Service Quality ............................................ 24
  2.6 Usability ..................................................................... 27
    2.6.1 Measuring Usability ...................................................... 30
    2.6.2 Benefits and Limitations of Usability ................................... 32
  2.7 Usability and Service Quality ................................................ 32
  2.8 Service Quality in Healthcare ................................................ 34
Chapter 3 Model Development ......................................................... 37
Chapter 4 Hypotheses ................................................................ 40
Chapter 5 Methodology ............................................................... 42
  5.1 Variables ..................................................................... 42
  5.2 Survey Development ............................................................ 42
  5.3 Subjects ...................................................................... 44
  5.5 Statistical Analysis .......................................................... 45
Chapter 6 Results ................................................................... 47
  6.1 Survey Validation ............................................................. 47
  6.2 Descriptive Analysis .......................................................... 50
  6.3 Hypothesis Testing ............................................................ 61
    6.3.1 Hypothesis 1 .............................................................. 61
    6.3.2 Hypothesis 2 .............................................................. 63
    6.3.3 Hypothesis 3 .............................................................. 66
Chapter 7 Discussion ................................................................ 70
  7.1 University Health Services .................................................... 70
  7.2 Model Analysis ................................................................ 74
    7.2.1 Scoring Methods ........................................................... 74
    7.2.2 Dependent Variables ....................................................... 75
    7.2.3 Survey Comparison ......................................................... 77
  7.3 Research Limitations .......................................................... 78
  7.4 Future Research ............................................................... 79
Bibliography ........................................................................ 81
Appendix A Human Factors in Service Industry Examples ............................... 88
Appendix B Original Survey Questions ................................................ 91
Appendix C Subject Materials ........................................................ 93
Appendix D Regression Models ........................................................ 102
Appendix E Raw Data ................................................................. 114

LIST OF FIGURES

Figure 1: Bitner's Service Encounter Model (Edvardsson, Thomasson, & Øvretveit, 1994) ....... 17
Figure 2: GAP Model of Service Quality (Zeithaml, Parasuraman, & Berry, 1990) ....... 21
Figure 3: Attributes of System Acceptability (Nielsen, 1993) ....... 27
Figure 4: Graphical Combination of Human Factors and Service ....... 37
Figure 5: Graphical Combination of Usability and Service ....... 38
Figure 6: Proposed Service Quality Model ....... 39
Figure 7: Survey Factor Coverage ....... 39
Figure 8: Gap Scores Based on Gender ....... 56
Figure 9: Dependent Variables Based on Gender ....... 56
Figure 10: Gap Scores Based on Ethnicity ....... 57
Figure 11: Dependent Variables Based on Ethnicity ....... 58
Figure 12: Gap Scores Based on University Status ....... 59
Figure 13: Dependent Variables Based on University Status ....... 59
Figure 14: Gap Scores Based on Visit Frequency ....... 60
Figure 15: Dependent Variables Based on Visit Frequency ....... 60

LIST OF TABLES

Table 1: Service Sector Examples ....... 6
Table 2: SERVQUAL Dimensions ....... 13
Table 3: Grönroos's Dimensions ....... 14
Table 4: Gummesson's Dimensions ....... 15
Table 5: Human Factors and SERVQUAL ....... 26
Table 6: Usability and Human Factors ....... 29
Table 7: Summary of Usability Methods (Nielsen, 1993) ....... 31
Table 8: Usability and SERVQUAL ....... 33
Table 9: Dimension Classification of Survey Items ....... 43
Table 10: Demographics Overview ....... 44
Table 11: Cronbach Alpha Coefficients ....... 47
Table 12: Inter-Item Average Correlations ....... 48
Table 13: Item to Dimension Correlations ....... 49
Table 14: Overall Survey Scores ....... 50
Table 15: Overall Scores by Dimension ....... 51
Table 16: Overall Scores by Question ....... 52
Table 17: Dependent Variable Results ....... 52
Table 18: Dependent Variable Correlations ....... 53
Table 19: Dependent Variable to Score Correlations ....... 54
Table 20: Demographic Correlations ....... 55
Table 21: Hypothesis 1: Regression Model Results ....... 62
Table 22: SERVQUAL Regression Model Parameters ....... 63
Table 23: Usability to Outcome Correlations ....... 64
Table 24: Hypothesis 2: Regression Model Results ....... 65
Table 25: SERVUSE Regression Model Parameters ....... 66
Table 26: Regression Model Comparison ....... 67
Table 27: Dimension Correlations ....... 68
Table 28: Hypothesis 3: Regression Model Results ....... 69

ACKNOWLEDGEMENTS

First and foremost, I would like to thank my advisor, Dr. Koubek, for his guidance and support throughout my graduate studies. I am also grateful to my committee members, Dr. Chandra, Dr. Freivalds, and Dr. Haight, for their insight and contributions to this project.

I would like to thank my husband for always providing me with support and encouragement, and for knowing when I need it most. Finally, I would like to dedicate this dissertation to my mother, who has always been my biggest fan.

Chapter 1 Introduction

According to the Bureau of Labor Statistics, nearly 79 million Americans are employed in the service sector. This includes jobs that are derived from the performance of services rather than the production of products. In the United States, services represent nearly 74% of the gross domestic product (Albrecht & Zemke, 2002). These statistics demonstrate the prevalence of service industries in our society and the need to apply scientific knowledge to aid in the success of services.

Human factors has traditionally focused on product and process improvement in manufacturing, product design, and human-computer interaction. A logical extension of human factors would be into the service sector. There is a large human component in most service systems, spanning both the customer and company ends of the spectrum. Because of this large human involvement, the use of human factors in these systems appears justified. By utilizing human factors theories, tools, and techniques, service systems can be improved to be more productive, effective, safe, and comfortable for both employees and customers.

The overall goal of this project is to examine the use of human factors techniques in the service sector. Specific objectives are:

o to examine the role of human factors in service industries.
o to examine the dimensionality of service quality.

o to develop a modified measurement instrument for service quality that includes human factors considerations.

Chapter 2 Literature Review

2.1 What are services?

Examples of service systems include hotels, hospitals, and restaurants. Larger scale networks such as transportation are also service systems (Gautam, 2003). Services can be provided for either external or internal customers (Fisher & Schutta, 2003). Banks, hotels, and education systems all provide services for external customers. A company's accounting and human resources departments provide services for customers internal to their own company. A customer can be a single individual or a group of people. Similarly, a service system can provide a service through a number of service providers (Gautam, 2003). The supplier or the customer may be represented at the interface by personnel or equipment.

Services consist of two components: a technical outcome and a functional outcome. The technical outcome is often referred to as the "what" of service; it is that which is delivered to the customer. The functional outcome is often referred to as the "how" of service; it consists of the service delivery process (Brown, Gummesson, Edvardsson, & Gustavsson, 1991; Schneider & White, 2004). Consider a service system in a restaurant: the technical outcome is the meal that is eaten by the customer, while the functional outcome consists of being seated and placing an order. All service systems can be defined by these two components.

Many researchers have tried to develop a definition for service. Lovelock and Wirtz (2004) present the following definitions:

o A service is an act or process offered by one party to another. Although the process may be tied to a physical product, the performance is transitory, often intangible in nature, and does not normally result in ownership of any of the factors of production.
o A service is an economic activity that creates value and provides benefits for customers at specific times and places by bringing about a desired change in, or on behalf of, the recipient of the service.
o A service is something that can be bought and sold, but which cannot be dropped on your foot.

A similar definition was developed by Lashley (1997):

o Service is the production of essentially intangible benefit, either in its own right or as significant element of a tangible product, which through some form of exchange satisfies an identified consumer need.

Through these definitions, a number of important characteristics emerge. These characteristics are the definitive features of services. Intangibility, inseparability, and heterogeneity are what separate services from goods. Services cannot be seen, touched, or held. They are intangible in the sense that they have no physical manifestation (Schneider & White, 2004). Service industries supply the needs of the customer without producing tangible goods (Stebbing, 1990). Similarly, services are perishable. They cannot be stored, resold, or returned. Consumption occurs immediately following production (Zeithaml & Bitner, 2003).

Not all service systems produce "pure services." A service may be linked with a tangible product (Wetzels & de Ruyter, 2001). Many operations have varying degrees of both production and service content (Murphy, 1993), falling at different places along a continuum of intangibility (Schneider & White, 2004). At one end of the continuum, companies produce only goods. At the other end, companies produce only services. The majority of companies fall somewhere in between. A computer manufacturer is a good example of a company that falls in the middle of the continuum: it produces goods (the computer) as well as services (customer support).

The production and consumption of a service cannot be separated; services are inseparable (Schneider & White, 2004). There is no way to make a service, inspect it, fix any problems, and then deliver it to a customer. The customer is present while the service is being produced. Therefore, the customer views and often takes part in the production process (Zeithaml & Bitner, 2003). Customer activities at the time of service delivery are often essential to the completion of the service transaction (Wetzels & de Ruyter, 2001). Because production and consumption of a service occur simultaneously, companies strive to ensure that a maximum number of customers are available to consume the service as it is being produced (Schneider & White, 2004). Examples of this can be seen in entertainment venues, airline flights, and education systems.

Services are heterogeneous. The human element in the production and delivery of services results in no two service instances being identical (Schneider & White, 2004). Customers have different demands from one another. Additionally, different service personnel will deliver the same service in different manners. This high degree of person-to-person interaction lends to the heterogeneity of services. Services can also be different each time an individual experiences the service (Schwantz, 1996). Services are highly people and behavior dependent. Each individual has unique needs and expectations that they are seeking to satisfy (Sarkar, 1998).

Based on these definitions, a number of industries can be defined as service industries. The broad array of industries is shown in Table 1. Sample service sectors include entertainment, health, and financial institutions (Brown, Gummesson, Edvardsson, & Gustavsson, 1991; Rosander, 1985; Stebbing, 1990).

Table 1: Service Sector Examples

Service Sector                     Example Industry
Rentals                            Rental Car
Use of Facilities                  Parking Lots
Safety and Protection - Private    Insurance
Safety and Protection - Public     Police
Energy and Water                   Energy
Health                             Hospital
Keeping Products Fit for Use       Auto Repair
Personal Service                   Restaurant
Recreation and Entertainment       Movie Theatre
Business Services                  Duplicating (Kinko's)
Distribution of Goods              Retail Store
Financial                          Banks
Education                          Schools

2.2 Human Factors

Many industries have begun engineering service processes to make them more efficient. The area of "service engineering" is becoming prevalent. This field focuses on the design of customer interfaces and interaction as well as the design of service processes (Wetzels & de Ruyter, 2001). In service industries, immediate human needs and human performance dominate. The success of a service is dependent on human performance, and the quality of human decisions and performance is critical. Due to this high amount of human involvement, the exposure to human error is very large (Rosander, 1985). Service industries must strive to minimize these errors to maximize service quality.

Because of the high amount of human involvement in service industries, the field of human factors should be considered in service engineering. The International Ergonomics Association (http://www.iea.cc) has adopted the following definition for human factors:

"Human factors is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and methods to design in order to optimize human well-being and overall system performance."

The emphasis of human factors is on humans and their interactions. Unlike most engineering disciplines, human factors emphasizes the role of humans in a system. Human factors seeks to design or change systems to match the capabilities, limitations, and needs of people (Sanders & McCormick, 1993).

Human factors attempts to optimize the human and technical aspects of a system jointly (Drury, 2000). The goal is to enhance safety, health, comfort, effectiveness, and quality of life. Human factors is used to enhance the effectiveness and efficiency of systems; examples include reducing errors, increasing productivity, and increasing convenience. Human factors is also used to enhance human values such as safety, satisfaction, and quality of life (Sanders & McCormick, 1993). Throughout the remainder of this paper, the definition of human factors presented by Sanders and McCormick (1993) will be the operational definition:

"Human factors discovers and applies information about human behavior, abilities, limitations, and other characteristics to the design of tools, machines, systems, tasks, jobs, and environments for productive, safe, comfortable, and effective human use."

This definition encompasses all facets of the human factors field: the focus on the role of humans, the system being analyzed, and the goals of the field. The field of human factors is often divided into specialization areas. According to the International Ergonomics Association (http://www.iea.cc), the most common areas within the field are:

o Physical Ergonomics is concerned with human anatomical, anthropometric, physiological and biomechanical characteristics as they relate to physical activity.
o Cognitive Ergonomics is concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system.

o Organizational Ergonomics is concerned with the optimization of sociotechnical systems, including their organizational structures, policies, and processes.

The outcome of human factors work can include the design of products, systems, interfaces, and environments (Karwowski & Salvendy, 1998).

2.3 Human Factors in Service Industries

Human factors has been utilized in a number of areas. In product design, human factors can make the product user friendly and satisfying for the customer. In manufacturing, processes are designed to increase safety as well as productivity and quality (Karwowski & Salvendy, 1998). Human factors has also been used in many service industries, including hospitals and healthcare, package delivery, and air traffic control. These service applications have looked primarily at the employee's workplace environment; the internal operations of a service industry are often studied. However, human factors has not been used to improve the service delivery process. Customer relations and involvement in the service process could benefit from the use of human factors.

The potential use of human factors in service industries can be described using the definition of human factors presented previously:

"Human factors discovers and applies information about human behavior, abilities, limitations, and other characteristics to the design of tools, machines, systems, tasks, jobs, and environments for productive, safe, comfortable, and effective human use (Sanders & McCormick, 1993)."

By decomposing this definition into goals, systems, and attributes, we are able to see how human factors is well matched with service industries. Examples for service industries are shown in Appendix A. The elements of the table focus on the customer end of the service spectrum.

The goal of human factors, as described in the above definition, is to create systems that are productive, safe, comfortable, and effective. A productive service is one that provides the service requested by the customer. There is a high output of services provided when compared to the amount of input required. The customer accomplishes his or her goal with a productive service. A safe service is one that keeps customers and their assets protected. A comfortable service is one which is easy for customers to access and use; it also makes customers feel at home and place trust in the service organization. An effective service meets a customer's requirements and needs. It accomplishes its goals for service delivery.

The systems addressed by human factors include tools and machines, tasks and jobs, and environments. Tools and machines are items that are utilized in the service process. This may include computers, interfaces, or office equipment. Tasks and jobs are completed throughout the service process. These may be completed by the customer or the company. Environments are the surroundings of both the customer and the company. They may include an office setting, home, restaurant, or even a website.

Attributes of human factors include human behavior, limitations, and abilities. In a service system, it is important to understand the prevalence of human action in the system. Humans dominate the system, on both the customer and the company end of the transaction. Human behavior in a service system includes actions taken by the customer or company representative that affect the service transaction. Limitations include impairments in knowledge, communication, or resources that may restrict the success of the service. Abilities define the attributes the customer and company representative have that allow them to complete the service transaction. Common limitations and abilities could include computer and communication skills.

2.4 Service Quality

Quality is one's ability to achieve innate excellence (Schneider & White, 2004). In manufacturing, this is measured technically through product specifications, conformance measures, and objective standards. In services, however, quality is much more subjective. Service quality is the ability of an organization to meet the needs, wants, and expectations of the customer (Albrecht & Zemke, 2002; Edvardsson, Thomasson, & Øvretveit, 1994; Martin, 2003). The quality of a service is dependent on the individual perceptions of the customer. These perceptions are formed over time, with customers basing their opinion on past experience, the service process, and service delivery (Albrecht & Zemke, 2002; Zeithaml, Parasuraman, & Berry, 1990). The customer is the only person who can judge service quality (Zeithaml, Parasuraman, & Berry, 1990).

Quality indicators are often quantified to measure service excellence. For example, if a phone is answered within three rings at a call center, then quality is achieved. Using these measures, however, may limit quality of service in areas that cannot be objectively measured, such as customer satisfaction. An objective measure may be best for measuring the technical component of service, whereas user-based judgments are best for measuring the quality of service delivery. Services should be aimed at meeting customer requirements while preventing non-quality characteristics (wasted time, delays, unsafe conditions, unnecessary service) (Rosander, 1991).

Although they may seem identical at first inspection, there are differences between service quality and customer satisfaction. While service quality is a consumer's judgment about the service itself, customer satisfaction is a consumer's evaluation of specific experiences. A consumer can make quality judgments about a service system even if they have never used the system (Schneider & White, 2004). To measure the quality of a service system, a user compares their perceptions to preconceived expectations. To measure satisfaction, however, a user simply evaluates a single service encounter. The exact relationship between the two measurements has not yet been determined (Asubonteng, McCleary, & Swan, 1996).
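The perception-versus-expectation comparison described above is the basis of the gap scores used throughout this study. A minimal sketch of that arithmetic follows; the ratings are invented for illustration, and a 7-point rating scale is assumed (the thesis's actual survey items and scale are given in its appendices):

```python
# Hedged sketch of gap scoring: gap = perception rating - expectation rating
# for each survey item. A negative overall gap score means the service fell
# short of customer expectations; a positive one means it exceeded them.

def gap_scores(expectations, perceptions):
    """Return the per-item gaps and their mean (the overall gap score)."""
    gaps = [p - e for e, p in zip(expectations, perceptions)]
    return gaps, sum(gaps) / len(gaps)

# Hypothetical ratings for five survey items on an assumed 7-point scale:
expectations = [7, 6, 7, 5, 6]   # what the customer expected
perceptions  = [6, 6, 5, 5, 6]   # what the customer perceived

gaps, overall = gap_scores(expectations, perceptions)
print(gaps)     # per-item gaps
print(overall)  # negative here, i.e. expectations were not met
```

With these invented numbers the overall gap is negative, mirroring the interpretation given in the abstract for the UHS result of -0.357.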

2.4.1 Dimensions of Service Quality

As previously noted, the quality of a service is in the eye of the user, who considers personal preferences, expectations, and experiences to judge the quality of a service. Users often base their evaluations on multiple aspects, or even multiple occurrences, of a particular service experience. Many researchers have proposed characteristics of service that are essential in assessing the quality of service. These critical quality characteristics are the fundamental issues that impact service quality.

Parasuraman, Zeithaml, and Berry identified ten service quality dimensions: tangibles, reliability, responsiveness, competence, courtesy, credibility, security, access, communication, and understanding the customer. These dimensions were identified through focus groups with service systems executives (Zeithaml, Parasuraman, & Berry, 1990). They proceeded to create a 200-item survey that scored these dimensions for any given service experience. Through factor analysis, they narrowed the list to five dimensions (see Table 2). Finally, a 22-item survey, SERVQUAL, was created to measure these five dimensions (Parasuraman, Berry, & Zeithaml, 1991). The SERVQUAL tool has been used in many service industries. Some researchers have adapted the survey to fit an industry-specific need, such as DINESERV for restaurants and LODGSERV for lodging properties (Schneider & White, 2004).

Table 2: SERVQUAL Dimensions

Reliability: Delivering the promised performance dependably and accurately.
Tangibles: Appearance of the organization's facilities, employees, equipment, and communication materials.
Responsiveness: Willingness of the organization to provide prompt service and help customers.
Assurance (combination of competence, courtesy, credibility, security): Ability of an organization's employees to inspire trust and confidence in the organization through their knowledge and courtesy.
Empathy (combination of access, communication, understanding the customer): Personalized attention given to a customer.
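Because each SERVQUAL item belongs to exactly one dimension, dimension-level results are typically obtained by averaging the item gap scores within each dimension. A minimal sketch under that assumption follows; the item-to-dimension mapping and gap values below are hypothetical, not the thesis's actual Table 9 assignment of its 22 items:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical assignment of survey item numbers to the five SERVQUAL
# dimensions (the thesis's real mapping appears in its Table 9):
ITEM_DIMENSION = {
    1: "tangibles", 2: "tangibles",
    3: "reliability", 4: "reliability",
    5: "responsiveness",
    6: "assurance",
    7: "empathy",
}

def dimension_scores(item_gaps):
    """Average per-item gap scores within each dimension."""
    by_dim = defaultdict(list)
    for item, gap in item_gaps.items():
        by_dim[ITEM_DIMENSION[item]].append(gap)
    return {dim: mean(gaps) for dim, gaps in by_dim.items()}

# Invented per-item gaps (perception minus expectation) for illustration:
gaps = {1: -1.0, 2: 0.0, 3: -0.5, 4: -0.5, 5: 0.0, 6: -1.0, 7: -2.0}
print(dimension_scores(gaps))  # one mean gap per dimension
```

Dimension means like these are what allow statements such as "empathy showed the largest shortfall," the kind of per-dimension comparison reported later for University Health Services.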

Grönroos derived six criteria for experienced service quality. These dimensions (Table 3) focus on the interaction between the customer and the service provider during a service transaction. The characteristics are similar to those in SERVQUAL, with the addition of a dimension regarding recovery, which addresses the service system’s ability to recover after a poor service experience (Gummesson, 1991).

Table 3: Grönroos’s Dimensions

Professionalism and Skills: Do the employees, physical resources, and operational systems of the organization have the knowledge and skills to solve customer problems in a professional way?
Attitudes and Behaviors: Do the service employees (contact persons) show concern for customers and interest in solving their problems in a friendly and spontaneous way?
Accessibility and Flexibility: Is the service provider (location, operating hours, employees, operational systems) designed so that customers can access the service easily and so that the provider can adjust to the demands and wishes of a customer in a flexible way?
Reliability and Trustworthiness: Do the customers know that they can rely on the service provider, its employees, and its systems to keep promises and perform with the best interest of the customer at heart?
Recovery: Do the customers realize that whenever something goes wrong or something unpredictable happens, the service provider will immediately take steps to keep the customer in control and to find an acceptable new solution?
Reputation and Credibility: Do the customers believe that the operations of the service provider can be trusted and give adequate value for money, and that it stands for good performance and values which can be shared by customers and the service provider?

Gummesson presented the idea that service offerings can be evaluated in terms of three elements: the service element, the tangibles element, and the information technology (or software) element. Each element has particular dimensions associated with it (Table 4). For example, when a customer evaluates the quality of an airline flight, they rate each element of the system: the interaction between passengers and crew members (service element), the physical aircraft (tangibles element), and the computers that assist in the reservation (IT element) are all factors analyzed by the user (Schneider & White, 2004).

Table 4: Gummesson’s Dimensions of Customer-Perceived Quality of the Total Offering

For Service Elements: Reliability; Responsiveness; Assurance; Empathy

For Tangible Elements:
Environmental perspective: Ambient factors (background features customers may or may not be aware of); Functionality (factors contributing to use of product); Aesthetics (factors contributing to use of product); Service personnel (the number, appearance, and behavior of people); Other customers; Other people
Goods perspective: Reliability (probability of malfunctioning); Performance (primary characteristics of core product); Features (extras); Conformance (match between specifications and performance); Serviceability (ease of repair and maintenance); Aesthetics (exterior design, taste, smell, touch, etc.)
Psychological perspective: Visibility (seeing all important aspects of a product properly); Mapping (relation between a control and the reaction to the control); Affordance (the purposes the product allows); Constraints (factors limiting what can be done with a product); Customer control (control over the product’s functioning); Knowledge needed (information necessary to use product); Feedback (confirmation of results of actions)

For Software Elements: Reliability (ability to function correctly under different circumstances); Extendibility (ability of software to adapt to new specifications); Integrity (ability to protect against unauthorized access); User friendliness (ease of learning to operate software)


An integrated model was developed by Grönroos and Gummesson that combined their individual models. Six types of quality are identified as contributing to overall quality: design quality, production quality, delivery quality, relational quality, technical quality, and functional quality. The customer uses these factors to create their expectations and experiences with a service system. Finally, the customer perceives the quality of the system (Edvardsson, Thomasson, & Øvretveit, 1994). A model that focuses on the encounter between the service provider and the customer was developed by Bitner. The model emphasizes the customer’s perception of interaction with the service provider. Novel factors in this model include a customer’s pre-attitude towards a service system, a system’s marketing mix (product, price, place, and promotion), and physical interaction between the customer and the system (Edvardsson, Thomasson, & Øvretveit, 1994). A graphical representation of Bitner’s model is shown in Figure 1.

Figure 1: Bitner’s Service Encounter Model (Edvardsson, Thomasson, & Øvretveit, 1994). [Figure: the model links physical evidence, participants, contextual cues, and the traditional marketing mix, together with a customer’s pre-attitude, to service expectations and perceived service performance; disconfirmation and attributions then drive service encounter satisfaction, perceived service quality, word of mouth, service loyalty, and service switching.]

2.4.2 Measuring Service Quality

The assessment of service quality is traditionally done by speaking with customers. This can be done through focus groups, interviews, or surveys. Three system outcomes are traditionally focused on: perceived service quality, satisfaction, and

behavioral intention (Baker & Taylor, 1997; Cronin & Taylor, 1992; Zeithaml, Berry, & Parasuraman, 1996). These outcomes are measured by asking customers questions that relate to each one. To determine perceived service quality, a customer is generally asked whether the level of service was of high or poor quality. Customers are also asked whether they are satisfied or dissatisfied with the service system. To examine behavioral intention, customers are often asked whether they would return to the same system for service and whether they would refer a friend to it. Measurement tools often probe a customer’s perceptions of specific system characteristics, as well as how those characteristics relate to the three outcome dimensions. While the three outcomes are related to one another, the exact relationship has not been clarified. Each measures desirable features in a service system from a different perspective; therefore, all three should be included when measuring service quality (Baker & Taylor, 1997).

2.4.3 Service Quality Gap Models

When a customer enters into a service experience, they bring expectations of that service with them. There are four types of expectations that a customer may consider: predictive, normative, excellence, and adequate (Schneider & White, 2004). A predictive expectation describes what customers believe will actually happen during the service experience. A normative expectation is what people believe should happen, regardless of whether or not they believe it actually will. An excellence expectation is a customer’s belief of how an excellent service experience should perform. The entity a customer uses

as the excellent experience does not have to be the current process. Often, it does not exist; rather, customers may have an ideal experience in mind against which they measure a service experience. Adequate expectations describe the minimum level of performance a customer would be willing to accept. Customers use a variety of inputs to form expectations about a service system. Past experience, current needs and requirements, and communications with the system all factor into the development of customer expectations (Morgan, 1992). How can a service industry use these expectations to improve service quality? Parasuraman, Zeithaml, and Berry proposed a “zone of tolerance” in which service organizations should try to operate. The zone is the difference between someone’s view of how an excellent organization should perform and the minimum he or she is willing to accept. If a service is delivered within this zone, the customer will assign a high quality rating to that service experience (Schneider & White, 2004). A customer’s perceptions are another key factor in their judgment of service quality. A customer compares their perceptions of the current service process to expectations they formed prior to the service experience. The basis for evaluating service from the customer’s perspective is the comparison between expected and perceived service (Edvardsson & Gustavsson, 1991). The gap between perceptions and expectations is used by the customer to judge service quality. Gap models are a tool commonly used to describe service quality. People base their service quality judgments on the gap between their perceptions of what happened during the service transaction and their expectations for how the service transaction should have occurred. When these gaps exist, quality is compromised

(Murphy, 1993). Therefore, a quality control strategy in services is to narrow and eventually close these gaps. Parasuraman, Berry, and Zeithaml (1991) identified five gaps that are present in service systems. The first gap is the difference between consumer expectations and management’s perception of these expectations; service managers are often unaware of, or misinterpret, a customer’s needs in the service system. Gap 2 exists between management’s perception and quality specifications. Even when management is aware of customer expectations, it may choose not to set quality specifications. Gap 3 describes the difference between quality specifications and service delivery: when a specification is established for a service, personnel may fail to meet it. Gap 4 is the difference between service delivery and external communications. This gap may be caused by advertising, inadequate communication, and misinformation. The final gap is the difference between expected and perceived service on behalf of the customer. Gap 5 is a function of the other four gaps. Figure 2 is a graphical representation of these five gaps in the service system (Zeithaml, Parasuraman, & Berry, 1990).

Figure 2: GAP Model of Service Quality (Zeithaml, Parasuraman, & Berry, 1990). [Figure: on the consumer side, word-of-mouth communications, personal needs, and past experience shape expected service, which is compared with perceived service (Gap 5). On the provider side, management perceptions of consumer expectations (Gap 1) are translated into service quality specifications (Gap 2), which drive service delivery (Gap 3) and external communications to customers (Gap 4).]
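The zone-of-tolerance idea described earlier can be expressed as a simple classification: a perceived service level is compared against the customer’s adequate (minimum acceptable) and desired (excellence) levels. The rating values in the sketch below are illustrative assumptions.

```python
# Sketch of "zone of tolerance" classification. A perceived service rating
# is judged against two expectation levels: adequate (the minimum the
# customer will accept) and desired (what an excellent organization would
# deliver). All numeric values used with this function are illustrative.

def zone_of_tolerance(perceived, adequate, desired):
    """Return where a perceived rating falls relative to the zone."""
    if perceived < adequate:
        return "below zone: quality judged poor"
    if perceived > desired:
        return "above zone: expectations exceeded"
    return "within zone: quality judged acceptable"
```

For instance, with an adequate level of 4 and a desired level of 6 on a 7-point scale, a perceived rating of 5 falls within the zone, so the service would be judged acceptable.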

2.4.4 Benefits and Limitations of SERVQUAL

SERVQUAL is the most widely recognized and utilized method of measuring service quality by both researchers and practitioners (Newman, 2001). This is largely due to the generalized nature of SERVQUAL. It can be modified and revalidated for any

service industry. Additionally, it offers a large amount of information to managers, which can facilitate benchmarking and other quality strategies (Lam & Woo, 1997). Therefore, SERVQUAL will be the measurement tool utilized for this study. Questions from the original SERVQUAL instrument are shown in Appendix B. While SERVQUAL is the most commonly used service quality measurement tool, it has not been immune to scrutiny. First and foremost, SERVQUAL does not appear to be applicable to all service industries or situations (Schneider & White, 2004). Many researchers also argue that service quality cannot be captured by a single measurement, particularly because SERVQUAL is concerned only with service delivery, not the final outcome of the service (Albrecht & Zemke, 2002; Cuthbert, 1996). Specific problems found with SERVQUAL relate to the measurement of perceptions, expectations, and gap scores. Concerns have also been raised regarding the survey’s validity, dimensions, and interactions. The survey itself produces different results based on the time at which it is administered: the customer’s perception of a service is often altered over a period of time (Palmer & O’Neill, 2003). Expectations are also affected by time. A customer’s expectations of a particular service may be more or less influenced by past services, depending on the time that has elapsed since the service experience (Andaleeb, 2001). Additionally, variance found between customers may actually be due to differing interpretations of the survey questions, rather than to differences in the customers’ attitudes (Teas, 1993). Research has shown that gap scores tend to suffer from poor statistical properties. The scores often exhibit poor reliability and validity (Lam & Woo, 1997; Schneider &

White, 2004). Therefore, researchers often choose to measure only a customer’s perceptions, not taking expectations into account. The estimation of perceptions might already include a mental process in which subjects determine a perception-minus-expectation score (Llosa, Chandon, & Orsingher, 1998). Gathering only a customer’s perceptions is also simpler and more understandable than collecting both perception and expectation feedback (Newman, 2001). Although the use of perceptions alone has greater predictive value, this practice does not provide as much value to service organizations. The measurement of expectations provides information to service managers: they are able to identify the aspects of service on which they are expected to perform exceptionally well versus those where lower expectations exist (Parasuraman, Zeithaml, & Berry, 1994). SERVQUAL is used to examine the perceptions of both novice and expert users. This can lead to difficulties because of the variation in expectations among these users (Llosa, Chandon, & Orsingher, 1998). A seasoned airline traveler who is accustomed to delays, for example, will have much lower expectations of the airline industry than a novice traveler. If a delay occurs, the gap scores for these subjects will be vastly different. According to the gap score, the seasoned traveler would appear satisfied (if perception equaled expectation), while the novice traveler, who expected the flight to be on time, would appear greatly dissatisfied. How, then, are gap scores interpreted? There appears to be no practical interpretation of a gap score of 1 as opposed to a gap score of 3, for example. The only information gained from these scores is where the service met or failed to meet customer expectations (Cuthbert, 1996).
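The interpretation problem can be made concrete with illustrative numbers: the same delivered service yields a neutral gap score for the seasoned traveler but a large negative score for the novice. All ratings below are hypothetical.

```python
# Illustrative numbers only: the same delivered service (perception = 4 on a
# 7-point scale) produces very different gap scores depending on the
# customer's expectation level.

def gap_score(perception, expectation):
    """Gap score = perception - expectation."""
    return perception - expectation

# Seasoned traveler expects delays (expectation 4); novice expects an
# on-time flight (expectation 7). Both perceive the same service (4).
seasoned = gap_score(4, 4)   # 0: expectations met
novice = gap_score(4, 7)     # -3: expectations badly missed
```

The identical service experience thus appears satisfactory for one subject and deeply unsatisfactory for the other, which is exactly why raw gap magnitudes resist practical interpretation.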

The basic form of SERVQUAL utilizes five service quality dimensions. However, the number of valid dimensions has been shown to differ among various service industries (Andersson, 1992; Edvardsson, Thomasson, & Øvretveit, 1994; Llosa, Chandon, & Orsingher, 1998; O’Neill & Palmer, 2003; Schneider & White, 2004). Therefore, the survey must be revalidated for each industry (Cronin & Taylor, 1992). Finally, it is questionable whether a limited number of dimensions can take into account all factors that affect service quality (Chase & Bowen, 1991). The dimensions defined by Parasuraman, Zeithaml, and Berry in the SERVQUAL instrument are very closely related. Many dimensions have definitions that overlap one another (Asubonteng, McCleary, & Swan, 1996). Increasing the quality level on one dimension may also increase the level of perceived quality on the other dimensions. Additionally, a high score in one dimension can compensate for a low score in another (Llosa, Chandon, & Orsingher, 1998). These relationships are difficult to describe, but their existence is shown in most SERVQUAL studies (Andersson, 1992; Hughey, Chawla, & Khan, 2003).

2.5 Human Factors and Service Quality

The dimensions of service quality utilized in the SERVQUAL instrument are closely tied to the human factors goals discussed previously. These comparisons are displayed in Table 5. The human factors goal of achieving a productive service is similar to the SERVQUAL dimensions reliability and responsiveness. These definitions focus on the ability of a customer to complete the service dependably and promptly. The goal

of achieving a safe service is similar to the assurance dimension in SERVQUAL. The focus is placed on a customer’s feeling of security within a service system. The goal of achieving a comfortable service is similar to the empathy dimension in SERVQUAL. The definitions focus on a feeling of familiarity and ease with the system. Finally, the human factors goal of achieving an effective service is similar to the SERVQUAL dimensions reliability, tangibles, and responsiveness. These definitions focus on the ability of a service system to provide all that is needed to accomplish a customer’s goals.

Table 5: Human Factors and SERVQUAL

Human Factors Goal: A productive service is one that provides the service requested by the customer. There is a high output of services provided when compared to the amount of input required.
SERVQUAL Dimensions: Reliability (delivering the promised performance dependably and accurately); Responsiveness (willingness of the organization to provide prompt service and help customers)

Human Factors Goal: A safe service is one that keeps customers and their assets protected.
SERVQUAL Dimension: Assurance (ability of an organization’s employees to inspire trust and confidence in the organization through their knowledge and courtesy)

Human Factors Goal: A comfortable service is one which is easy for the customers to access and use. It also makes the customers feel at home and put trust into the service organization.
SERVQUAL Dimension: Empathy (personalized attention given to a customer)

Human Factors Goal: An effective service meets a customer’s requirements and needs. It accomplishes its goals for service delivery.
SERVQUAL Dimensions: Reliability (delivering the promised performance dependably and accurately); Tangibles (appearance of facilities, employees, communication materials); Responsiveness (willingness of the organization to provide prompt service and help customers)

2.6 Usability

Within human factors, the usability of products and systems is often analyzed. Usability is a measurable characteristic that describes how easy a system is to use (Mayhew, 1999; Wickens, Gordon, & Liu, 1998). As shown in Figure 3, the acceptance of a system depends on both practical and social acceptance. Practical acceptance includes factors such as usefulness, cost, compatibility, and reliability. A system’s usefulness is described by its utility and usability. A system’s utility describes whether the system accomplishes what is needed. The usability of a system describes how well the users can use this functionality (Nielsen, 1993).

Figure 3: Attributes of System Acceptability (Nielsen, 1993). [Figure: system acceptability branches into social acceptability and practical acceptability; practical acceptability covers usefulness, cost, compatibility, and reliability; usefulness splits into utility and usability; usability comprises easy to learn, efficient to use, easy to remember, few errors, and subjectively pleasing.]

Usability is tailored around a number of user characteristics. A user’s cognitive, perceptual, and motor capabilities affect a system’s usability. Characteristics of the physical and social environment that contains the system also have an impact. Finally,

the characteristics of the task and system itself impact usability (Mayhew, 1999). Nielsen (1993) identified five usability factors:

o Learnability: the system should be easy to learn so that the user can rapidly start getting some work done with the system
o Efficiency: the system should be efficient to use, so that once a user has learned the system, a high level of productivity will be possible
o Memorability: the system should be easy to remember, so that the casual user is able to return to the system after some period of not having used it, without having to learn everything over again
o Errors: the system should have a low error rate, so that users make few errors during the use of the system, and so that if they do make errors they can easily recover from them; catastrophic errors must not occur
o Satisfaction: the system should be pleasant to use, so that users are subjectively satisfied when using it; they like it

These five factors are all critical in assessing a system’s usability and can be applied to either product usability or service system usability. These five usability factors are closely related to the four human factors goals addressed previously. Table 6 shows the relationship between usability factors and human factors goals. Efficiency is similar to the goal of a productive service; both focus on the ability to complete a task with relatively low input compared to the output. Usability errors are related to the human factors goals of safe and effective services. Keeping errors low allows a user to accomplish a task safely, without losing any input. Additionally, a system with few errors allows customers to meet their needs

and accomplish their service goals. Usability satisfaction is similar to the goal of providing comfortable services. Two usability dimensions did not map to human factors goals: learnability and memorability. While these dimensions are related to productivity, their emphasis is on the learning process, as opposed to task completion.

Table 6: Usability and Human Factors

Learnability (the system should be easy to learn so that the user can rapidly start getting some work done) -- no matching human factors goal
Efficiency (the system should be efficient to use so that once the user has learned the system, a high level of productivity is possible) -- A productive service is one that provides the service requested by the customer. There is a high output of services provided when compared to the amount of input required.
Memorability (the system should be easy to remember so that the casual user is able to return to the system after some period of not having used it, without having to learn everything all over again) -- no matching human factors goal
Errors (the system should have a low error rate so that users make few errors during the use of the system, and so that if they do make errors, they can easily recover from them; further, catastrophic errors must not occur) -- A safe service is one that keeps customers and their assets protected. An effective service meets a customer’s requirements and needs. It accomplishes its goals for service delivery.
Satisfaction (the system should be pleasant to use so that users are subjectively satisfied when using it; they like it) -- A comfortable service is one which is easy for the customers to access and use. It also makes the customers feel at home and put trust into the service organization.

Assessing a system’s usability presents many benefits for the user. Increased productivity, decreased task time and cost, decreased errors, and increased accuracy are all benefits of improved usability. A company that creates usable products can benefit from greater profits, increased business, decreased support costs, and an increase in customer satisfaction (Mayhew, 1999).

2.6.1 Measuring Usability

Usability testing is the process of having users interact with a system to determine the system’s adherence to the five usability factors addressed above (Wickens, Gordon, & Liu, 1998). Table 7 displays current methods that are used to measure usability. The methods range from expert evaluations to focus groups and interviews. Assessing a system’s usability can also be accomplished by setting quantitative usability goals. These goals can be related to ease of use, ease of learning, performance, and satisfaction. A variety of metrics are available to assess usability goals. Task completion time, error rates, completion rates, user satisfaction, workload, and information retention are all used to examine a system’s usability. These metrics are then compared to minimum acceptable, target, or optimal levels to assess the usability (Mayhew, 1999; Wickens, Gordon, & Liu, 1998).
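The goal-based assessment just described can be sketched as a comparison of measured metrics against minimum-acceptable and target levels. The metric names and threshold values below are illustrative assumptions, not published benchmarks.

```python
# Hedged sketch of goal-based usability assessment: compare measured metrics
# to minimum-acceptable and target levels. For the two metrics below, lower
# values are better; all thresholds are illustrative.

GOALS = {
    # metric: (minimum acceptable, target)
    "task_time_s": (300, 180),
    "error_rate": (0.10, 0.02),
}

def assess(measured):
    """Label each measured metric as 'fails', 'acceptable', or 'meets target'."""
    results = {}
    for metric, value in measured.items():
        minimum, target = GOALS[metric]
        if value > minimum:
            results[metric] = "fails"        # worse than the minimum acceptable
        elif value > target:
            results[metric] = "acceptable"   # between minimum and target
        else:
            results[metric] = "meets target"
    return results
```

A session with a 200-second task time and a 15% error rate would thus be labeled acceptable on time but failing on errors, pointing designers to where redesign effort is needed.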

Table 7: Summary of Usability Methods (Nielsen, 1993)

Heuristic Evaluation -- Lifecycle stage: early design, “inner cycle” of iterative design. Users needed: none. Main advantage: finds individual usability problems; can address expert user issues. Main disadvantage: does not involve real users, so does not find “surprises” relating to their needs.

Performance Measures -- Lifecycle stage: competitive analysis, final testing. Users needed: at least 10. Main advantage: hard numbers; results easy to compare. Main disadvantage: does not find individual usability problems.

Thinking Aloud -- Lifecycle stage: iterative design, formative evaluation. Users needed: 3-5. Main advantage: pinpoints user misconceptions; inexpensive test. Main disadvantage: unnatural for users; hard for expert users to verbalize.

Observation -- Lifecycle stage: task analysis, follow-up studies. Users needed: 3 or more. Main advantage: ecological validity: reveals users’ real tasks; suggests functions and features. Main disadvantage: appointments hard to set up; no experimenter control.

Questionnaires -- Lifecycle stage: task analysis, follow-up studies. Users needed: at least 30. Main advantage: finds subjective user preferences; easy to repeat. Main disadvantage: pilot work needed (to prevent misunderstandings).

Interviews -- Lifecycle stage: task analysis. Users needed: 5. Main advantage: flexible, in-depth attitude and experience probing. Main disadvantage: time consuming; hard to analyze and compare.

Focus Groups -- Lifecycle stage: task analysis, user involvement. Users needed: 6-9 per group. Main advantage: spontaneous reactions and group dynamics. Main disadvantage: hard to analyze; low validity.

Logging Actual Use -- Lifecycle stage: final testing, follow-up studies. Users needed: at least 20. Main advantage: finds highly used (or unused) features; can run continuously. Main disadvantage: analysis programs needed for huge mass of data; violation of users’ privacy.

User Feedback -- Lifecycle stage: follow-up studies. Users needed: hundreds. Main advantage: tracks changes in user requirements and views. Main disadvantage: special organization needed to handle replies.

2.6.2 Benefits and Limitations of Usability

Usability is a valuable tool within human factors that allows for product and task improvement. However, it is often restricted to interface and computer design. Additionally, usability is often difficult for users and researchers alike to quantify. One usability method that is commonly used is a survey created by James Lewis at IBM (1995). Questions from this survey are shown in Appendix B. The computer usability satisfaction questionnaire allows users to assess a system based on a set of usability characteristics. The survey was initially created for the assessment of computer systems. However, it has since been used on interfaces, products, and tasks (Lewis, 1995). This survey will be used as the primary usability measurement tool for the remainder of the project.

2.7 Usability and Service Quality

A comparison of usability dimensions and SERVQUAL dimensions is shown in Table 8. Usability dimensions that match SERVQUAL dimensions include efficiency, errors, and satisfaction. Efficiency is similar to the SERVQUAL dimension responsiveness in that both relate to a customer’s ability to use a system promptly. The errors factor is similar to SERVQUAL’s reliability dimension; both refer to a system’s ability to provide a service with no mistakes. Finally, satisfaction is related to the SERVQUAL dimension of tangibles; both refer to the subjective evaluation a customer gives to a service system.

Table 8: Usability and SERVQUAL

Learnability (the system should be easy to learn so that the user can rapidly start getting some work done) -- no matching SERVQUAL dimension
Efficiency (the system should be efficient to use so that once the user has learned the system, a high level of productivity is possible) -- Responsiveness: willingness of the organization to provide prompt service and help customers
Memorability (the system should be easy to remember so that the casual user is able to return to the system after some period of not having used it, without having to learn everything all over again) -- no matching SERVQUAL dimension
Errors (the system should have a low error rate so that users make few errors during the use of the system, and so that if they do make errors, they can easily recover from them; further, catastrophic errors must not occur) -- Reliability: delivering the promised performance dependably and accurately
Satisfaction (the system should be pleasant to use so that users are subjectively satisfied when using it; they like it) -- Tangibles: appearance of facilities, employees, communication materials
(no matching usability dimension) -- Assurance: ability of an organization’s employees to inspire trust and confidence in the organization through their knowledge and courtesy
(no matching usability dimension) -- Empathy: personalized attention given to a customer

Two usability dimensions did not match any of the SERVQUAL dimensions: learnability and memorability are important usability factors that are not considered in service quality. Additionally, two SERVQUAL dimensions, assurance and empathy, did not match any usability dimensions and are unique to the SERVQUAL instrument when compared to usability. Usability factors were used in a recent study examining the quality of online shopping experiences (Song & Zhang, 2004). The researchers added two factors to SERVQUAL that addressed technology acceptance: ease of use and usefulness. The dependent variables examined were perceived fulfillment and usability. The study found that usability is a key factor affecting quality perceptions. Additionally, usability perception had a positive influence on overall perception of the websites (Song & Zhang, 2004). While usability and human factors are related, they are two separate constructs. A similar relationship is found between usability and service quality. Therefore, usability is a concept that is not completely accounted for in current service quality metrics. The benefits of improving a system’s usability support the inclusion of usability as a factor in service quality.

2.8 Service Quality in Healthcare

Quality in healthcare systems can focus on a number of aspects within the system. The technical aspects of care, relationships between practitioner and patient, and the amenities provided are all important factors in a quality consideration (Andaleeb, 2001).

Service quality in healthcare has been defined as the “provision of appropriate and technically sound care that produces the desired effect” (McAlexander, Kaldenberg, & Koenig, 1994). More recently, however, the definition has come to include the delivery of the service and how it relates to customer needs and expectations (Self & Sherer, 1996). Measuring quality in healthcare has a number of benefits. For consumers, it allows them to make informed decisions regarding practitioner and provider selection. Healthcare providers also benefit from examining quality. They are able to identify areas that need improvement within their system (Self & Sherer, 1996; Yasin & Green, 1995). The profitability of a system may also be impacted by improving service quality, as customer satisfaction is directly related to profitability. Additionally, satisfying patients can save money by reducing the amount of resources spent resolving customer complaints (Pakdil & Harwood, 2005). Studies have shown that perceived quality of healthcare services has a greater influence on patient behaviors than other factors such as access and cost. These behaviors include satisfaction, referrals, and usage (Andaleeb, 2001). There are some difficulties in evaluating service quality within healthcare systems. Unlike other services, the customer is highly involved in healthcare services. They often have a very intimate relationship with their provider that extends over long periods of time (McAlexander, Kaldenberg, & Koenig, 1994). Some attributes of the system are difficult for the customer to comprehend and assess. The high degree of complexity associated with healthcare tasks may make a customer’s assessment of the system invalid (Wong, 2002). While customers may lack the medical knowledge to

assess particular aspects of the system, their input regarding perceptions of the system is still an invaluable tool for providers (Yasin & Green, 1995). The use of SERVQUAL in healthcare systems has produced varied results (Asubonteng, McCleary, & Swan, 1996). Some studies have found that SERVQUAL was not successful in measuring patient expectations and perceptions in the healthcare domain (McAlexander, Kaldenberg, & Koenig, 1994). However, the overwhelming majority of service quality studies in the healthcare domain have shown SERVQUAL to be an accurate and valid measure of service quality (Babakus & Mangold, 1992; Dean, 1999; Lam, 1997; Reidenbach & Sandifer-Smallwood, 1990; Scardina, 1994; Taylor & Cronin, 1994; Vandamme & Leunis, 1993; Wong, 2002). SERVQUAL has been shown to be useful in revealing the differences between patients’ preferences and their actual experience, thus identifying areas in need of improvement (Pakdil & Harwood, 2005). Studies have also shown that dimensions may vary within the healthcare industry, depending on the specific application area (Dean, 1999; Lam, 1997).

Chapter 3 Model Development

The convergence of human factors and service quality is displayed graphically in Figure 4. All human factors goals are matched with at least one service quality dimension, showing that the two areas are compatible and often strive to accomplish the same goals. The comparison of usability and service quality is shown in Figure 5. Three usability factors are matched with three service quality dimensions; however, two usability factors (learnability and memorability) and two service quality dimensions (assurance and empathy) do not pair with other factors. This demonstrates the possibility of combining these two areas to improve service systems. The improvement of a system's service quality leads to increased customer satisfaction and return behavior, which eventually translates into increased profits for a service company.

[Figure 4: Graphical Combination of Human Factors and Service Quality. The diagram matches human factors goals (productive, safe, comfortable, effective) with service quality dimensions (reliability, responsiveness, assurance, empathy, tangibles).]

[Figure 5: Graphical Combination of Usability and Service Quality. The diagram pairs the usability factors efficiency, errors, and satisfaction with the service quality dimensions responsiveness, reliability, and tangibles, respectively; learnability and memorability (usability) and assurance and empathy (service quality) remain unpaired.]

The proposed model of service quality is shown in Figure 6. The five SERVQUAL dimensions (reliability, responsiveness, assurance, empathy, and tangibles) and usability are shown as factors that impact service quality. It is theorized that customers consider all six of these factors when judging a service system's quality. Figure 7 displays the coverage of service quality through three measurement tools. Usability and SERVQUAL each cover a large portion of service quality factors. A modified survey, SERVUSE, examines the overlap between these two constructs as a predictor of service quality.

[Figure 6: Proposed Service Quality Model. The dimensions responsiveness, assurance, reliability, empathy, tangibles, and usability (with learnability and memorability contributing to usability) are shown as inputs to service quality.]

[Figure 7: Survey Factor Coverage. Usability and SERVQUAL each cover portions of service quality; SERVUSE spans their overlap.]

Chapter 4 Hypotheses

Based on the literature review, no current model demonstrates the relationship between usability and service quality. The purpose of this research is to unify these two constructs, develop an improved measurement tool, SERVUSE, and thereby examine the use of human factors in the service sector, particularly in the measurement of service quality. This will be accomplished through the following research hypotheses:

H1: SERVQUAL is a significant predictor of service quality, satisfaction, and behavioral intentions. The significance of the five original service quality dimensions is examined through this first hypothesis. If SERVQUAL is found to be a significant predictor, this would indicate that it is applicable in the area being studied. This would be congruent with previous findings in the literature.

H2: SERVUSE is a significant predictor of service quality, satisfaction, and behavioral intentions. The significance of the additional usability factor is the second hypothesis being examined. If SERVUSE is found to be a significant predictor, this would indicate that usability should be considered when examining service quality.


H3: SERVUSE is a superior predictive tool to SERVQUAL. To further assess the importance of usability, this factor must be distinguished from the five original SERVQUAL dimensions. This hypothesis compares the new survey, SERVUSE, to the original SERVQUAL instrument. If SERVUSE is found to be a significantly better predictor, this provides evidence for including usability in service quality assessment and suggests that the additional usability factor adds significant predictive value to the survey.

Chapter 5 Methodology

5.1 Variables

The five SERVQUAL dimensions (responsiveness, reliability, tangibles, assurance, and empathy) and one usability dimension served as independent variables. Three dependent variables were examined: service quality, satisfaction, and behavioral intention.

5.2 Survey Development

The creators of SERVQUAL advocate modifying their survey instrument for various industries, suggesting that the survey be adapted to account for a particular system's characteristics and needs (Zeithaml, Parasuraman, & Berry, 1990). Using the original SERVQUAL instrument as a guide, a survey was developed that combines SERVQUAL questions and usability questions. The survey, SERVUSE, is shown in Appendix C. Wording of the questions was adapted for the healthcare domain, with application-specific information on the provider name, location, and amenities included. Some questions were omitted from the original SERVQUAL and IBM Usability surveys. The selection of items relevant to healthcare was based on past use of the SERVQUAL

instrument in the healthcare field (Babakus & Mangold, 1992; Dean, 1999; McAlexander, Kaldenberg, & Koenig, 1994; Pakdil & Harwood, 2005). The survey consists of five sections. The first section consists of six demographics questions. The second section consists of 18 items that measure the subjects' expectations of the system. The third section consists of six items that assess the weights subjects place on the dimensions (Pena, 1999). The fourth section consists of 18 items that measure the subjects' perceptions of the system; these items are paired with those from the second section. The final section consists of three items that evaluate the subjects' judgment of perceived quality, satisfaction, and behavioral intention. These measurements were used as the dependent variables for the analysis. All six service quality dimensions are reflected in the second and fourth sections, as shown in Table 9.

Table 9: Dimension Classification of Survey Items

Dimension  Name            Survey Items (Sections 2 and 4)
1          Tangibles       1-3
2          Reliability     4-5
3          Responsiveness  6-8
4          Assurance       9-11
5          Empathy         12-14
6          Usability       15-18

Survey items in sections two, four, and five utilized a 7-point Likert scale with anchors “Strongly Agree” (7) and “Strongly Disagree” (1). In section three, subjects were asked to rate the importance of the six dimensions from 1 (not at all important) to 7 (extremely important). All items throughout the survey were positively worded to avoid confusion.

5.3 Subjects

Subjects for the experiment consisted of patients of University Health Services (UHS) on the Pennsylvania State University campus. Any patient treated at UHS during the study was eligible to participate. To achieve the necessary statistical power (α = 0.05, 1-β = 0.90), 200 subjects were surveyed. The power calculations used to arrive at this subject number are shown in Appendix C. The average age of the respondents was 25.36 years. Table 10 contains additional subject demographics. The breakdown of respondents into demographic categories is consistent with the population of patients at UHS during June, when testing was completed.

Table 10: Demographics Overview

Demographic                Percent of Subjects (N=200)
Gender
  Male                     42.0%
  Female                   58.0%
Ethnicity
  African American         10.5%
  Asian                    25.0%
  Caucasian                52.0%
  Latino/Hispanic           6.5%
  Other                     6.0%
University Status
  Undergraduate Student    40.5%
  Graduate Student         52.5%
  Spouse                    7.0%
Visit Frequency
  One (First Visit)         3.5%
  Two to Five              39.0%
  Six to Ten               26.5%
  More than Ten            31.0%

5.4 Survey Administration

The survey was administered throughout the month of June in Ritenour Building, the on-campus location of University Health Services. An empty exam room was used to administer the survey. After visiting UHS, each patient passed through check-out, where they received information regarding the study and its location. Patients who showed interest in participating were given directions to the exam room. When a subject arrived at the location, the implied consent form was reviewed with them, and the survey was then distributed. Upon completion of the survey, the subject was given a coupon for free ice cream at the Penn State Creamery. The implied consent form and subject survey are shown in Appendix C.

5.5 Statistical Analysis

Six scores were calculated from the collected surveys: unweighted expectation, perception, and gap scores, plus weighted versions of each that used the information from section three of the survey. The scores were calculated by averaging all questions within each of the six dimensions; an unweighted perception score, for example, therefore consisted of six separate scores, one per dimension. The unweighted expectation score was calculated by averaging the responses from section two of the survey, and the unweighted perception score by averaging the responses from section four. The unweighted gap score was found by first calculating the difference ("gap") between perception and expectation

scores for each question. These gaps were then averaged to create the unweighted gap score. To calculate weighted scores, the weights assigned to each dimension in section three were utilized. An average weight was found for each dimension, and the normalized weight was multiplied by the dimension scores to calculate the weighted scores. This was completed for the perception, expectation, and gap scores.
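The scoring procedure above can be sketched as follows. The response data here are randomly generated placeholders, and the choice to normalize each subject's weights to sum to one is an assumption based on the description, not the study's actual code; only the item-to-dimension mapping follows Table 9.

```python
import numpy as np

# Hypothetical data: 200 subjects x 18 items on a 1-7 Likert scale,
# plus per-subject importance weights for the six dimensions.
rng = np.random.default_rng(1)
expectations = rng.integers(1, 8, size=(200, 18)).astype(float)
perceptions  = rng.integers(1, 8, size=(200, 18)).astype(float)
weights      = rng.integers(1, 8, size=(200, 6)).astype(float)

# Item-to-dimension mapping per Table 9 (0-indexed dimensions).
dim_of_item = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 5])

def dim_scores(responses):
    """Average the items belonging to each dimension -> (subjects, 6)."""
    return np.column_stack([responses[:, dim_of_item == d].mean(axis=1)
                            for d in range(6)])

exp_dims  = dim_scores(expectations)
perc_dims = dim_scores(perceptions)
gap_dims  = perc_dims - exp_dims        # positive = expectations exceeded

# Weighted scores: normalize each subject's weights (assumed to sum to 1),
# then scale the dimension scores by them.
w_norm       = weights / weights.sum(axis=1, keepdims=True)
weighted_gap = gap_dims * w_norm
```

Averaging `gap_dims` (or `weighted_gap`) over subjects would yield per-dimension gap scores of the kind reported in the Results chapter.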

Chapter 6 Results

6.1 Survey Validation

The reliability of the survey instrument, SERVUSE, was assessed by examining Cronbach's alpha coefficient for each dimension. Table 11 displays the coefficient values. A value of at least 0.70 indicates internal consistency (Nunnally, 1994), verifying that the survey items within a single dimension yield similar results. As shown in Table 11, some alpha coefficients fall below the acceptable value of 0.70, primarily among the expectation scores. Low alpha coefficients suggest that survey items should be combined into a single item or placed into multiple categories, and may be indicative of a multi-dimensional factor structure. The lower values for expectation scores are consistent with findings in the literature (Lam & Woo, 1997).

Table 11: Cronbach Alpha Coefficients

Dimension  Expectation Alpha  Perception Alpha
1          0.582              0.775
2          0.464              0.685
3          0.497              0.708
4          0.689              0.802
5          0.558              0.724
6          0.827              0.864
Overall    0.863              0.926
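Cronbach's alpha for a dimension is computed from the item responses as α = k/(k-1) · (1 − Σ item variances / variance of the item totals). The following sketch uses a hypothetical response matrix, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (subjects x items) matrix of one dimension."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the item totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustration: three items driven by a shared factor yield a high alpha.
rng = np.random.default_rng(2)
base = rng.normal(5, 1, size=200)
items = np.column_stack([base + rng.normal(0, 0.5, 200) for _ in range(3)])
print(round(cronbach_alpha(items), 3))             # well above the 0.70 threshold
```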

To further assess internal consistency reliability, inter-item correlations were also examined. Table 12 displays the average inter-item correlations for each dimension. Correlation values of 0.35 or higher were considered indicative of internal consistency (Nunnally, 1994). As shown in Table 12, the vast majority of correlation values are acceptable; the only unacceptable levels were once again found among expectation scores. These findings are again consistent with values found in the literature, supporting the internal consistency of the survey instrument.

Table 12: Inter-Item Average Correlations

Dimension  Expectation  Perception  Gap
1          0.317        0.535       0.360
2          0.303        0.521       0.301
3          0.245        0.448       0.291
4          0.425        0.579       0.400
5          0.295        0.474       0.402
6          0.544        0.615       0.577
*All correlations are significant at the 0.001 level
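The average inter-item correlation for a dimension is simply the mean of the off-diagonal entries of the item correlation matrix. A sketch, again with a hypothetical response matrix:

```python
import numpy as np

def avg_inter_item_corr(items):
    """Mean off-diagonal correlation for a (subjects x items) matrix."""
    r = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    k = r.shape[0]
    off_diag = r[~np.eye(k, dtype=bool)]           # drop the diagonal of 1s
    return off_diag.mean()

rng = np.random.default_rng(3)
base = rng.normal(0, 1, size=200)
items = np.column_stack([base + rng.normal(0, 1, 200) for _ in range(4)])
print(round(avg_inter_item_corr(items), 3))        # ~0.5 in expectation here
```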

The item to dimension correlations were examined to ensure accurate assignment of items to dimension categories. Correlation values of 0.35 or higher were considered acceptable (Nunnally, 1994). Table 13 displays the item to dimension correlations for each survey item. All correlation values fall well above the acceptable level of 0.35. Therefore, the survey items are assigned to the correct dimensions.

Table 13: Item to Dimension Correlations

Question  Dimension  Expectation  Perception  Gap
1         1          0.644        0.826       0.717
2         1          0.798        0.857       0.765
3         1          0.765        0.809       0.790
4         2          0.797        0.875       0.825
5         2          0.817        0.869       0.787
6         3          0.797        0.799       0.703
7         3          0.710        0.808       0.753
8         3          0.598        0.778       0.722
9         4          0.792        0.861       0.798
10        4          0.806        0.838       0.792
11        4          0.757        0.845       0.732
12        5          0.755        0.776       0.760
13        5          0.788        0.838       0.806
14        5          0.637        0.803       0.756
15        6          0.810        0.842       0.828
16        6          0.826        0.840       0.805
17        6          0.810        0.844       0.841
18        6          0.798        0.847       0.831
*All correlations are significant at the 0.001 level

Content validity of SERVUSE was verified by representatives from University Health Services. Their expert review of the survey instrument ensured the use of useful and applicable survey items. Based on consultations with these representatives, certain survey items were rewritten to strengthen the applicability of the survey to University Health Services. The survey content was not changed, only the wording of select survey items.

6.2 Descriptive Analysis

A descriptive analysis of the data was completed to identify relevant results for University Health Services. The overall scores from the survey are shown in Table 14. The unweighted score is simply an average of the responses from all 200 subjects. The weighted scores take into account the importance each subject placed on a specific dimension: a subject's response is multiplied by the weight they placed on that dimension, and all weighted responses are then averaged to find the mean weighted score. The overall unweighted expectation score was 5.250, with 7 being the highest possible score. The overall perception score was 4.893. The difference of these scores, the overall gap score, was -0.357; University Health Services therefore fell short of meeting its customers' expectations by 0.357 points.

Table 14: Overall Survey Scores

             Unweighted          Weighted
Measure      Mean     Variance   Mean     Variance
Expectation  5.250    0.218      6.046    0.614
Perception   4.893    0.497      5.666    0.771
Gap          -0.357   0.373      -0.380   0.492

The overall scores were also examined by dimension; Table 15 displays these results. Dimension 1, tangibles, was the only dimension with a positive gap score (0.095). The remaining dimensions received negative gap scores, with dimension 5, empathy, receiving the lowest gap score (-0.677). The weights for each dimension are also shown in Table 15. Each dimension was rated on a scale of importance from 1 (not at all important) to 7 (extremely important). Subjects found dimension 2, reliability, to be the most important aspect of quality of service for the health clinic, while dimension 1, tangibles, was rated as the least important.

Table 15: Overall Scores by Dimension

           Weight            Expectation       Perception        Gap Score
Dimension  Mean   Variance   Mean   Variance   Mean   Variance   Mean    Variance
1          5.435  1.031      5.557  0.543      5.652  0.711      0.095   0.930
2          6.425  0.527      6.325  0.341      5.768  0.781      -0.558  0.817
3          6.180  0.631      6.242  0.304      5.723  0.675      -0.518  0.718
4          6.255  0.663      6.123  0.469      5.783  0.748      -0.340  0.930
5          6.130  0.687      6.113  0.375      5.437  0.810      -0.677  1.027
6          5.665  1.189      5.919  0.573      5.635  0.884      -0.284  1.321

The average scores for each survey item are displayed in Table 16. Only two survey items, questions 2 and 3, averaged a positive gap score; both belong to the tangibles dimension. The remaining survey items produced negative gap scores, showing that customer expectations were not met.

Table 16: Overall Scores by Question

                     Expectation       Perception        Gap Score
Question  Dimension  Mean   Variance   Mean   Variance   Mean    Variance
1         1          6.030  0.743      5.620  1.001      -0.410  1.530
2         1          4.825  1.281      5.460  1.074      0.635   1.771
3         1          5.815  0.966      5.875  1.014      0.060   1.564
4         2          6.440  0.499      5.805  1.052      -0.635  1.369
5         2          6.210  0.549      5.730  1.002      -0.480  1.145
6         3          6.125  0.803      5.785  1.024      -0.340  1.261
7         3          6.125  0.612      5.525  1.115      -0.600  1.508
8         3          6.475  0.411      5.860  1.066      -0.615  1.313
9         4          6.150  0.751      5.610  1.003      -0.540  1.516
10        4          6.210  0.750      5.965  0.918      -0.245  1.432
11        4          6.010  0.784      5.775  1.210      -0.235  1.728
12        5          5.920  0.838      4.925  1.467      -0.995  2.146
13        5          6.090  0.736      5.680  1.093      -0.410  1.489
14        5          6.330  0.544      5.705  1.214      -0.625  1.552
15        6          5.895  0.858      5.675  1.306      -0.220  1.821
16        6          5.900  0.905      5.585  1.279      -0.315  2.056
17        6          5.955  0.847      5.515  1.266      -0.440  1.976
18        6          5.925  0.874      5.765  1.125      -0.160  1.894

The results for each dependent variable were also examined. Table 17 displays the average values and their variances. Behavioral intention produced the highest average, 6.300. Perceived quality and satisfaction produced similar averages of 5.650 and 5.585, respectively.

Table 17: Dependent Variable Results

Dependent Variable    Mean   Variance
Perceived Quality     5.650  0.420
Satisfaction          5.585  1.169
Behavioral Intention  6.300  0.573

Correlations were calculated for the three dependent variables. Table 18 displays the correlation values, all of which were significant. The strongest correlation was found

between perceived quality and satisfaction. The weakest correlation was found between perceived quality and behavioral intention. Paired t-tests were also conducted to test whether the dependent variables were significantly different. Behavioral intention was found to be significantly different from both perceived quality and satisfaction (p
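The correlation and paired-comparison analysis described in this section can be sketched with SciPy. The data below are synthetic stand-ins for the three dependent-variable columns (using the reported means as rough parameters), not the study's actual responses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
quality      = rng.normal(5.65, 0.65, 200)
satisfaction = quality + rng.normal(0.0, 0.5, 200)   # built to correlate with quality
intention    = rng.normal(6.30, 0.76, 200)

# Pearson correlation between two dependent variables.
r_qs, p_qs = stats.pearsonr(quality, satisfaction)

# Paired t-test: do two dependent variables differ significantly per subject?
t, p = stats.ttest_rel(quality, intention)
print(f"r(quality, satisfaction) = {r_qs:.2f}")
print(f"paired t(quality vs. intention): t = {t:.2f}, p = {p:.4f}")
```

With each variable measured on the same 200 subjects, the paired t-test is the appropriate choice over an independent-samples test, since it accounts for within-subject correlation.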