Dealing with productivity and quality indicators in a service environment: some field experiences
Bart Van Looy Co-ordinator, Service Management Center, Vlerick School of Management and Senior Researcher, Catholic University of Leuven, Belgium
Paul Gemmel Assistant Professor, University of Gent and Vlerick School of Management, Belgium
Steven Desmet Researcher, University of Gent and Service Management Center, Vlerick School of Management, Belgium
Roland Van Dierdonck President, Service Management Center and Partner, Vlerick School of Management, Belgium, and
Steven Serneels Partner, S&V Management Consultants and Part-time Associate, Vlerick School of Management, Belgium

International Journal of Service Industry Management, Vol. 9 No. 4, 1998, pp. 359-376, © MCB University Press, 0956-4233

The authors wish to thank the partners of the Service Management Center for their support: Asea Brown Boveri Services, Digital Multi-vendor Customer Services, Electrabel, The Generale Bank and Schindler Services Europe. The co-operation of Koen Heylen, former researcher at the Service Management Center, is also appreciated. Finally, the authors appreciate the remarks made by the editors.

Service companies find themselves in an ever-changing environment. Competition is increasing, leading to smaller margins. The number of services offered to customers keeps growing, leaving them more and more alternatives. Making sure that customers become and stay satisfied is therefore crucial, and doing so requires being informed about one's own performance in terms of productivity and quality. Productivity and quality are both appreciations of how well the resources in any activity or transformation function are used. Productivity relates the output of a transformation process to the input, while quality refers to an evaluation of process and outcomes by internal or external customers. The kind
of appreciation, as well as the relative emphasis put on productivity or quality, depends on how output is defined and whose view is taken into account (Hornbrook, 1982). Here it will be argued that both can best be looked upon simultaneously. From the customer's view, output is the result of the exchange process between a service firm and the customer. From the firm's view, output is the result of a particular transformation process (Hornbrook, 1982). Firms have a tendency to look at the utilization of resources during the transformation process. Customers, when evaluating the service process and outcomes, will take into account considerations other than productivity and its implications for the price level. Those other elements can be organized around the notion of service quality. In order to be effective, one must look for a balance between the customer's view and the firm's view (Gemmel et al., 1994). Integrating both views implies that productivity and quality are considered simultaneously. Similar remarks can be found in the work of Gummesson and the quality, productivity and profitability (QP&P) programme, where conceptual overlaps between service productivity and quality are delineated. Moreover, quality can have a positive impact on productivity, and both can positively influence profitability (Bylund and Lapidoth, 1994; Gummesson, 1993; 1994a), a relationship that has found general acceptance within the manufacturing literature (Chase and Aquilano, 1995; Schonberger and Knod, 1994). So assessing service output requires approaches in which productivity and quality are looked upon simultaneously.

It has been recognized that the service productivity literature has traditionally focused more on the industry level and paid far less attention to the level of operational service processes (Gummesson, 1994b). On this micro level, an output analysis in terms of productivity and quality must be preceded by a process analysis (McLaughlin and Coffey, 1990). This is even more true in a service environment, where the transformation processes have a complex range of inputs and outputs which might be configured in different ways (Sherman, 1984).

Our contribution to the literature on productivity and quality is situated at the level of service activities within the boundaries of a service firm. At this level, we will argue that it is necessary to develop a scorecard integrating both quality and productivity indicators. The main part of this paper describes how one can arrive at such an integrated scorecard. The approach is illustrated using two case studies. Process analysis forms the base for the developed methodology; in a next step, elements from activity-based management are brought in to document productivity issues. This is illustrated by the first case, where it will become clear that productivity as a perspective is too narrow to include all views involved. In a next step, the same approach is enriched by introducing quality function deployment principles, resulting in a scorecard on service productivity as well as quality. A second case study illustrates this.
As such, this article attempts to operationalise the principles reflected in Kaplan and Norton's (1996) work on developing more balanced scorecards for service environments by developing the framework found in Figure 1. However, when discussing the second case, it becomes clear that tensions exist between the consequences of working on productivity and realizing quality objectives. Here a second contribution can be found; the straightforward positive relationship between quality and productivity seems to need further refinement. The multidimensional nature of service quality, capacity issues (especially when related to productivity indicators and productivity gains), as well as ways of organizing, all make up a complex field one has to deal with in the operational service delivery process. Some tentative propositions towards integrating these elements in a service operations strategy are developed in the discussion.

Process mapping
In traditional manufacturing companies, process mapping is widespread and well understood. The raw materials are gradually combined and transformed until the product is finished and sold or stored. The product is tangible and the production process can be visualized relatively easily. In service organizations, however, no such clear product exists. The fact that "the process is the product" (McLuhan, 1964) makes process mapping more complex. Services are at least partially intangible, which makes them more difficult to trace. The heterogeneity of services, due to the customer interaction in the service process, makes it more difficult to draw a process map that captures all the different situations. Modelling a service process can be difficult and sometimes time consuming, and an increasing commitment of resources is required if one wants to bring more detail into the design (Congram and Epelman, 1995).

Within the service management literature, some process mapping techniques have been proposed, such as blueprinting (Shostack, 1984; 1987), the service template (Staughton and Williams, 1994) or structured analysis and design (Congram and Epelman, 1995). Shostack (1984) recognized the existing process mapping techniques developed in the manufacturing literature (e.g. critical path method, line balancing) and in the operations side of service management, and extended them by explicitly taking into account the interaction with the customer.
Figure 1. Overall framework: process mapping identifies the activities within the service process; activity-based management (ABM) drivers yield productivity indicators for these activities, while quality function deployment (QFD) links them to quality dimensions, yielding quality indicators.
Blueprinting makes the process visible and distinguishes between front and back office (Shostack, 1984). It allows improvement of the service encounter, the crucial point of interaction with the customer, and can be used as a tool in service positioning (Shostack, 1987). Other scholars have focused on process mapping as a necessary tool to improve service design. Staughton and Williams (1994) have developed a simple graphical model, the service template, to represent the fit between the organization's offerings and the market's needs. Recently, the structured analysis and design technique (SADT) was used by Congram and Epelman (1995) to describe the service process. SADT is a top-down model for decomposing the service process into successive, more detailed levels. For each activity, inputs and outputs can be clearly defined by following different steps and prescriptions. Here too, it is argued that in order to satisfy the needs of customers, one must understand the service process.

So one can see a steady evolution and refinement of process mapping techniques, reflecting different points of attention and viewpoints. Little attention, however, has been given to the applicability of process mapping as a first step in a methodology to develop relevant productivity and quality indicators. In the following paragraphs, we illuminate the role process mapping can play in such a methodology. In order to do so, we first introduce some notions of activity-based management and point out their importance and relevance with regard to productivity assessment. In a next step, the service process is linked with the notion of service quality; transferring some principles underlying quality function deployment will result in relevant quality indicators.

The relevance of activity-based management for service productivity
Service organizations have been slower in developing productivity measures than manufacturing companies. This can be attributed to the specific nature of services, which makes it much more difficult to measure the output (Fitzsimmons and Fitzsimmons, 1994). Despite these difficulties, a number of productivity methodologies already exist for service organizations, such as output-input ratios, work measurement methods, statistical comparisons and deterministic models (McLaughlin and Coffey, 1990). Those methodologies can be enriched by integrating activity-based management principles.

Activity-based management (ABM) originated out of activity-based costing (ABC), which itself is rooted in a growing dissatisfaction with traditional cost accounting systems. The sometimes arbitrary allocation of overhead costs, causing distorted cost information, led to the introduction of activity-based costing (Cooper and Kaplan, 1988; 1991). Instead of using approximate or sometimes even arbitrary measures, the amount of resources used by the activities during the transformation process determines the allocation of overhead costs to products. Therefore, a driver should be established for each activity, indicating what causes the consumption of resources for that activity. Within ABC approaches, the cost driver thus is the factor that causes the costs of a certain activity, such as the number of bills for the payment activity or the number of shipments for order administration.
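To make the driver logic concrete, the short Python sketch below allocates overhead costs to a product line in proportion to the driver units it consumes. The activity names, volumes and cost figures are illustrative assumptions, not data from the article.

```python
# Minimal sketch of driver-based overhead allocation in the spirit of ABC.
# Activity names, driver volumes and cost figures are hypothetical.

activity_costs = {            # overhead resources consumed by each activity (e.g. EUR/month)
    "payment handling": 12000.0,
    "order administration": 8000.0,
}

driver_volumes = {            # total volume of each activity's cost driver
    "payment handling": 3000,        # number of bills processed
    "order administration": 400,     # number of shipments handled
}

# Cost per driver unit: what one bill or one shipment costs in overhead terms.
cost_rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

# A cost object (here a product line) is charged according to the driver
# units it actually consumes, instead of an arbitrary overhead percentage.
consumption_by_product = {"product A": {"payment handling": 120, "order administration": 25}}

for product, usage in consumption_by_product.items():
    allocated = sum(cost_rates[a] * units for a, units in usage.items())
    print(f"{product}: allocated overhead = {allocated:.2f}")
```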
Activity-based management is a broader concept than ABC. "It is a business process approach that focuses on the activities required to support the business process for getting goods and services to the market" (Campi, 1992). "It refers to the fundamental management philosophy that focuses on the planning, execution and measurement of activities as the key to competitive advantage" (Roberts and Silvester, 1996).

Like ABC, ABM was originally used more in manufacturing companies, but it can be seen as highly relevant in a service environment (Antos, 1992; Hussain and Kock, 1995; Rotch, 1990; Serneels and Van Looy, 1994). Applying activity-based management principles, however, requires an adequate view of the activities or processes underlying the service delivery process. Techniques such as blueprinting or SADT seem ideal to draw the service process and its different activities. Once the different activities are distinguished and mapped within the service process, drivers can be identified for each activity. Indicators on productivity for all activities within the service process are then derived by relating the resources spent on each activity to the appropriate driver. This will be further illustrated in the following case.

Case illustration: developing productivity indicators within a Belgian hospital
This case study illustrates the integrated application of process mapping and activity-based management principles in order to arrive at productivity indicators within a large Belgian hospital. Management wanted to deploy the existing personnel as efficiently as possible within and across the different hospital departments. A general consensus among management and staff existed that a fair allocation of the personnel should be based on the existing workload and not, for instance, on the budget figures of each department. So, productivity indicators based on this principle were needed. To generate these indicators, an analysis was conducted that consisted of three steps (see Figure 2):
Figure 2. Framework for developing productivity indicators: mapping activities; delineating "drivers"; assessment of resources and of workload; productivity indicators.
(1) Mapping the service process within the different departments and delineating the activities within the service process.
(2) Determining the drivers for each activity, as well as the resources spent on each activity (the actual time spent by employees on each activity within the service delivery process).
(3) Developing productivity indicators by relating the amount of each driver to the resources spent on each activity.

In total, this analysis was conducted within 34 ambulatory care departments, covering the administrative, nursing, paramedical and technical personnel. Omitted from the analysis were doctors and personnel working in central staff functions such as payroll administration. A total of 800 employees, representing about 700 full-time equivalents, were involved in the analysis.

Mapping the service process
There were no process descriptions or job definitions available at the beginning of the project. Therefore, pilot studies were conducted in some departments to delineate the different activities. The process maps derived from these pilot departments were tested and refined within the other departments. In Figure 3 a simplified example of a process map is given.

Figure 3. Process map. Basic activities: appointments, reception, waiting room, appointment administration, file preparation, patient counselling, consultation/examination, nursing/technical support, billing, reporting/typewriting, appointments, information distribution, specific analysis, planning of operations, file handling. Support activities: purchase and external invoicing, personnel administration, maintenance and cleaning. (The activities surrounding the line of visibility are in italic in the original figure.)

Activities start with appointments for patients. A next step implies file preparation as well as reception activities. If people have to wait for diagnosis, treatment or analysis, this can lead to activities for the personnel in terms of monitoring patients. When examinations are going on, the technical and nursing personnel support the medical staff. Afterwards, activities include
billing, administrative handling of patient reports, making new or follow-up appointments, distributing information, conducting specific analyses and finally taking care of administrative file handling. As these activities are depicted within the service process, it becomes possible to determine the appropriate "driver" of the workload for each activity.

Determining the "drivers" for each activity, as well as the time spent on each activity
Table I presents some examples of activities and their related drivers. The appropriate driver is defined by looking at what causes an increase in the resources needed to accomplish the activity. This means, for example, that the workload of people handling the administration of the appointments is based not on the number of patients but on the number of appointments. It should be noted that the relationship between activities and drivers is not always univariate. Bi- or multivariate relationships exist, meaning that a change in resources for an activity depends on several elements or drivers. For instance, the amount of resources needed to handle files at reception is influenced not only by the number of files but also by the nature of the file: whether the file relates to a new or an already known patient makes a difference in terms of time spent on file preparation. To deal with such situations, appropriate weights were calculated and used in the final determination of the workload. An alternative can be found in the use of multiple regression models to establish the relationships between resources and their drivers.

Starting from the framework of activities, the time spent (i.e. the actual resources) on each activity was registered within the 34 departments. This was done by means of questionnaires, cross-validation by management, as well as samples of time registration.
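As an illustration of how such a weighted, multi-driver workload might be computed, the sketch below weights new and already known patients differently before relating time spent to workload. The weights, volumes and hours are hypothetical and only indicate the kind of calculation involved.

```python
# Minimal sketch of turning a multivariate driver into a single weighted workload,
# as with file preparation at reception (new vs. already known patients).
# The weights, volumes and hours below are hypothetical, not case data.

# Relative time weights: a new patient's file is assumed to take twice as long.
weights = {"new patient": 2.0, "known patient": 1.0}

# Monthly volumes per patient type for one department.
volumes = {"new patient": 250, "known patient": 900}

# Weighted workload expressed in "known patient equivalents".
weighted_workload = sum(weights[t] * volumes[t] for t in volumes)

time_spent_hours = 310.0   # registered time spent on file preparation that month
indicator = time_spent_hours / weighted_workload   # hours per workload unit
print(f"weighted workload: {weighted_workload:.0f} units, "
      f"{indicator:.3f} hours per unit")
```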
Developing productivity indicators
After all the activities have been related to their "drivers" and information has been collected on the actual time (resources) spent on each activity, data on the actual workload were gathered for each activity and for each department.
Table I. Overview of some activities and their related drivers

Activity                     "Driver" of the workload
Appointments                 Number of appointments concerning consultations, examinations
Reception                    Number of patient visits; type of patient (old vs. new)
Billing                      Number of forms to enter; number of items per form
Reporting/typewriting        Number of pages
Purchase                     Number of order forms
Planning of operations       Number of operations
Maintenance and cleaning     Number of square meters; nature of surface
Part of this information was available centrally, part was available in the departments. Techniques of double checking and sampling were used to verify the collected data. In this stage, arriving at productivity indicators implies relating the actual resources used for each activity to the amount of the driver related to this activity. The actual amount of a driver can be defined as the workload for the related activity. By dividing the time spent by the actual workload, one obtains an indicator of how much resource (here being time) is spent per unit of workload. These indicators form the base for developing a scorecard on productivity for each department. This scorecard was enriched by adding average productivity results for all departments, resulting in indications of relative performance.

Whereas this approach gives an insight into actual productivity performance, nothing has yet been said about a more "optimal" productivity level one can achieve for each activity. To arrive at such "norms" three different approaches are possible:
(1) determining absolute norms;
(2) developing relative norms in a cross-sectional way;
(3) developing relative norms in a longitudinal way.

The first approach consists of determining some "absolute" norm for each activity. This implies a time and motion analysis to determine the norm time to perform each task. Multiplying optimal time standards by the actual workload results in the needed capacity. By comparing the needed capacity with the actual capacity, one can verify whether there is over- or under-capacity for that particular task. The drawbacks of such an approach are its time-consuming nature and the risk of creating employee resistance.

The second approach consists of developing "relative" norms for the different activities. Here indicators for the same activities are compared over different departments. One can calculate the "average" relationship between the workload and the time spent and use the average or the "best performer" as the guideline, a rationale completely in line with benchmarking. Within this case, "internal" benchmarking was applied. By means of a regression analysis the average relation between workload and time spent is quantified. This mean can be considered as an "average" norm. An alternative would be to use the best performing department as a norm, or to take into consideration external best practices. Figure 4 shows an example of such a regression analysis. It establishes, for the activity "file handling", the relationship between the workload (number of files) and the time spent on this activity (expressed in number of full-time equivalents). Each number in Figure 4 represents a department. Relative norms are then calculated, based on the average regression analysis. These norms are integrated into the scorecard and allow the departments to assess their relative performance (Table II).
Figure 4. Internal benchmarking of the file handling activity by means of regression analysis (time spent, in number of full-time equivalents, plotted against workload, in number of files, for each department).

Table II. A productivity scorecard for a hospital department

Department: obstetrics
Activity                    Full-time equivalent (FTE)   Relative norm   +/- (FTE vs. relative norm)
Appointments                3.00                         1.29            +1.71
Reception                   2.40                         2.08            +0.32
Lab                         0.90                         0.78            +0.12
Invoicing                   1.40                         0.72            +0.68
Planning                    0.30                         0.25            +0.05
Administration patients     1.40                         0.73            +0.67
Purchase                    0.50                         0.17            +0.33
External invoicing          0.15                         0.13            +0.02
Maintenance                 2.10                         1.84            +0.26
Library                     0.20                         0.13            +0.07
General management          1.45                         1.17            +0.28
Total                       13.80                        9.29            +4.51
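The internal benchmarking step can be made concrete with a small sketch: an ordinary least squares regression of time spent (in FTE) on workload (number of files) across departments yields an "average" relative norm against which each department can be compared, in the spirit of Figure 4 and Table II. The department figures used below are hypothetical, not the case data.

```python
# Minimal sketch of the internal benchmarking step: regress time spent (in FTE)
# on workload (number of files) across departments and use the fitted line as a
# relative norm. The department figures below are hypothetical illustrations.

departments = {          # department id: (number of files handled, FTE spent on file handling)
    1: (4000, 0.6), 2: (9000, 1.3), 3: (15000, 1.9),
    4: (22000, 2.6), 5: (30000, 3.8),
}

xs = [files for files, _ in departments.values()]
ys = [fte for _, fte in departments.values()]
n = len(xs)

# Ordinary least squares for fte = a + b * files (the "average" relationship).
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# The relative norm for a department is the FTE predicted by the average line;
# the difference shows relative over- or under-staffing for this activity.
for dept, (files, fte) in departments.items():
    norm = a + b * files
    print(f"department {dept}: actual {fte:.2f} FTE, norm {norm:.2f} FTE, "
          f"difference {fte - norm:+.2f}")
```

An alternative, as noted in the text, would be to anchor the line on the best performing department or on external best practices rather than on the average relationship.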
One important condition when using internal (or external) relative norms is that the tasks have to be homogeneous, which means that they must be quasi-identical between the different departments. For some activities, however, the diversity included in the activity does not allow for comparison and thus for the development of "relative" norms; no comparisons could be made between departments, so a relative norm could not be specified. For instance, assistance of nurses during consultations or examinations can vary heavily depending on the department and the nature of the consultation. Here a third approach can be used, by looking at the evolution of the indicators over time, possibly refined by defining groups or clusters within the activity (e.g. cardiac operations). As such these indicators can play a supportive role towards efforts related to continuous improvement. An alternative is to look for external "benchmarking" possibilities. This latter option avoids the sub-optimization that can arise from staying within the same organization; it may be that none of the units performs well in handling files. As such, external benchmarking can complement internal performance measurement.

So this case illustrates how process mapping in combination with activity-based management principles can result in a methodology to develop a comprehensive set of productivity indicators. A profound process analysis allows the delineation of appropriate drivers and leads in a second step to an assessment of the workload. Relating workload to resources results in productivity indicators rooted in the operational service delivery process. However, once insight into actual productivity performance is gained, the discussion on quality almost becomes inevitable. Differences in terms of productivity can be caused by opposite differences in terms of quality. Also, looking at and assessing only productivity bears the risk of creating dysfunctional side-effects. One way to improve the scores on the productivity indicators, at least in the short run, is to pay less attention to the quality realized during the transformation process. To create a more balanced view of the service process, as well as to avoid these side-effects, quality should enter the stage.

Adding quality indicators
As argued above, assessing service output means paying attention to both productivity and quality. The notion of quality is multi-dimensional (Garvin, 1984; 1987). Several authors have investigated the quality determinants of services. Grönroos (1978, 1984) made the distinction between technical and functional quality, to which he later added corporate image. An important breakthrough in the service quality literature came with the Servqual model of Parasuraman et al. (1985). Their original model contained ten dimensions, which were later organized around five notions: tangibles, reliability, responsiveness, assurance and empathy (Parasuraman et al., 1988). Johnston and Silvestro (1990) defined quality as consisting of 12 dimensions, later extending it towards 18 dimensions.
These quality dimensions form a sound base in which to root distinctive quality indicators. Linking these dimensions with the different activities within the service process can lead to the development of quality indicators. Quality indicators should be understood here as an operational measure of a given quality dimension. This operational measure is rooted in the specific activities of the service delivery process.

Similarities can be seen with the rationale behind quality function deployment (Hauser and Clausing, 1988), in which customer attributes are linked to engineering characteristics within the context of product design. Such an analysis allows one to take the preferences of customers systematically into account in the design and manufacturing process. Efforts to translate QFD principles towards the field of service management can be found in the area of customer problem resolution (Stauss, 1992), as well as in the domain of quality improvement (Lemmink and Behara, 1992). Here it will be illustrated that the same principles can be fruitful as a step in the methodology to design adequate performance indicators on service quality. Therefore one needs to substitute customer attributes and engineering characteristics with, respectively, quality dimensions and the activities within the service process. An illustration of this approach is elaborated in the following case.

Case illustration: adding quality indicators to the productivity scorecard
This case deals with the development of indicators on productivity, as well as quality, for the network of a health insurance company. Within 43 offices, approximately 90 people provide services for their members related to health care insurance. This network can fall back on a central staff of 60 people. In a first step, the different service activities performed by the front office people in the network were analyzed. A summary of these activities, as well as the relevant drivers, can be found in Table III. Activities are composed of different sub-tasks which are grouped into homogeneous blocks, here labelled activities.
Table III. Overview of activities and drivers for a health insurance company

Activity                                  Driver of the workload
Payment of health care interventions      Number of tickets; number of lines per ticket
New membership                            Number and type of insurance
Office and desk activities                Number of openings
Training and development                  Number of employees
Billing hospital expenditures             Number of beneficiaries
Insurance affairs                         Number and type of case
For instance, the activity "new membership" implies several sub-tasks: reception and control of application forms, preparing and sending documents, gathering and controlling additional information, preparing and sending several documents… This grouping of different sub-tasks into activities was done in close collaboration with the involved employees, based on two principles: logical coherence in terms of the final outcome of the sub-task, on the one hand, and meaningfulness in terms of resources required, on the other hand. The former criterion allowed the grouping of sub-tasks, whereas the latter avoided analyses being conducted on a nano-level (e.g. analyzing telephone calls separately). The appropriate level of analysis can, however, differ, depending on the questions the description wants to answer (Congram and Epelman, 1995).

Adopting a similar approach of using relative norms, as in the case of the hospital, resulted in the productivity scorecard for each office shown in Table IV. As one can notice, for each activity the required manpower is calculated based on the actual amounts of the respective drivers. This results in a fraction of full-time equivalents for each activity. The total time needed for the activities is then compared with the available time in the office, resulting in a productivity index which gives an overall assessment of the utilization rate of human resources for each office on a monthly basis. This index can be compared with the average index of all offices.

Adding quality indicators was realized in two steps. First, at the level of the health insurance organization, the relative importance of the different quality dimensions was discussed. This discussion was enriched by adding customer opinion and satisfaction survey data. As such, it became possible to overcome differences that might exist between internal perceptions of quality dimensions and how they relate to customer satisfaction, and the actual preferences and experiences of customers. A consensus was reached to work on four dimensions: reliability, responsiveness, access and communication (Parasuraman et al., 1985). Weights for the different dimensions were determined in line with the preferences as expressed by customers.

In a next step, these determinants were linked with the different activities of the service offering. For each activity the question asked was whether the output could influence the customers' perception of the different quality dimensions. If so, an adequate indicator related to the specific quality dimension was developed. As such, the activities labelled here as payment of health care interventions were linked with reliability and responsiveness, as well as communication. As for reliability, the relevant indicator for this activity consists of the percentage of adjustments that occur within the finance department. The speed with which customers receive their money heavily influences their perception of responsiveness; accordingly, the time between the reception of the tickets and the payment by means of bank accounts was calculated, and this average, as well as the percentage of cases extending a time period of two days, was included in the quality indicators. Also, information given to customers can affect these activities: informing customers correctly about their rights will result in low rates of unjustified claims.
Table IV. Required resources/manpower for each activity, expressed in FTE (monthly productivity scorecard for one office)

Columns: payment of health care interventions; new membership; office and desk activities; training and development; billing hospital expenditures; insurance affairs; miscellaneous; total time needed; available time; difference; productivity index (PI) (%); PI for all offices (%)

January    0.187  0.102  0.057  0.025  0.032  0.000  0.044  0.447  0.507   0.060   88.17  77.65
February   0.244  0.028  0.058  0.026  0.033  0.000  0.050  0.439  0.555   0.116   79.10  78.70
March      0.192  0.023  0.058  0.022  0.033  0.000  0.047  0.375  0.494   0.119   75.91  76.65
April      0.248  0.077  0.060  0.030  0.034  0.000  0.055  0.504  0.570   0.066   88.42  80.64
May        0.258  0.050  0.061  0.028  0.035  0.000  0.054  0.486  0.576   0.090   84.38  80.81
June       0.237  0.039  0.059  0.025  0.034  0.000  0.048  0.442  0.519   0.077   85.16  80.50
July       0.208  0.066  0.031  0.015  0.018  0.000  0.034  0.372  0.281  -0.091  132.38  87.95
August     0.235  0.039  0.063  0.025  0.036  0.000  0.049  0.447  0.535   0.088   83.55  81.32
September  0.223  0.000  0.055  0.024  0.031  0.000  0.049  0.382  0.497   0.115   76.86  75.96
October    0.265  0.041  0.056  0.025  0.032  0.000  0.061  0.480  0.557   0.077   86.18  81.48
November   0.265  0.030  0.054  0.024  0.031  0.000  0.056  0.460  0.502   0.042   91.63  78.33
December   0.225  0.021  0.051  0.024  0.029  0.000  0.051  0.401  0.455   0.054   88.13  80.81
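A minimal sketch of how such a monthly scorecard might be computed is given below: required manpower per activity follows from the driver volumes and (relative) norm times, and the productivity index compares total time needed with available time. All norm times, volumes and staffing figures are hypothetical illustrations, not the case data.

```python
# Minimal sketch of the monthly productivity index for one office, in the spirit
# of Table IV. Norm times, driver volumes and staffing are hypothetical.

HOURS_PER_FTE_MONTH = 140.0          # assumed available working hours per FTE per month

norm_hours_per_unit = {              # hours needed per driver unit (relative norms)
    "payment of health care interventions": 0.05,   # per ticket
    "new membership": 0.75,                          # per new insurance file
    "office and desk activities": 2.0,               # per office opening
}

driver_volumes = {                   # actual driver amounts registered this month
    "payment of health care interventions": 560,
    "new membership": 12,
    "office and desk activities": 9,
}

# Required manpower per activity, expressed in FTE.
required_fte = {a: norm_hours_per_unit[a] * driver_volumes[a] / HOURS_PER_FTE_MONTH
                for a in norm_hours_per_unit}

total_needed = sum(required_fte.values())
available_fte = 0.55                 # staffing actually available in the office

productivity_index = 100.0 * total_needed / available_fte
for activity, fte in required_fte.items():
    print(f"{activity}: {fte:.3f} FTE")
print(f"total needed {total_needed:.3f} FTE, available {available_fte:.2f} FTE, "
      f"productivity index {productivity_index:.1f}%")
```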
Table V. Some indicators developed within the "activity-quality" framework

Payment of health care interventions
  Reliability: % adjustments
  Responsiveness: average time between receipt and payment; % of cases over two days
  Communication: % of unjustified claims

New membership
  Reliability: % of additional requests made by head office
  Responsiveness: average time between the first request and file completion

Office and desk activities
  Reliability: number of corrections of the cash register
  Access: number of deviations between actual hours and announced hours

Billing of hospital expenditures
  Reliability: % adjustments

Insurance affairs
  Responsiveness: average time between initiating request and information towards customer
  Communication: % of unjustified claims
In Table V, extracts of the indicators developed within this "activity-quality" framework can be found. The indicators mentioned are worked out and provided to the front office employees, together with the productivity scorecard, on a monthly basis. Integrating both elements allowed a better and more balanced understanding of the performance of front office personnel within the different units and led to improvement efforts where both output elements are taken into account.

However, it became apparent during this process that a more balanced insight can also imply a more troublesome or even conflicting view of the relation between service quality and productivity. The productivity indicators developed in this case started to figure as an input for manpower planning; given the workload for each activity and the productivity indicators, one could monitor and plan the required capacity, e.g. front office employees, more accurately. Also, productivity improvements were translated into manpower planning practices. While this may sound like good practice, it can become troublesome in a service environment. Given the co-presence of customers during the service delivery process, the absence of stocks and, to a certain degree, the inherently uncertain nature of customer behaviour and hence arrival patterns, one can face negative side-effects in terms of quality objectives by following this approach. When staff are assigned to different branches based on "optimal" productivity figures, handling peaks might become problematic. Waiting lines will lengthen at peak moments, affecting the customers' evaluation of responsiveness negatively. So when linking productivity to capacity planning within a service environment characterized by fluctuations in demand, the relationship between productivity and quality is no longer a reinforcing one, but takes on more the character of a trade-off.
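The activity-quality linkage described above can be sketched as a simple data structure in which each activity carries an operational indicator per quality dimension it influences, reported monthly next to the productivity index. The indicator names echo Table V, but the values and the exact representation are hypothetical assumptions for illustration only.

```python
# Minimal sketch of the activity-quality linkage behind Table V: each activity is
# mapped to the quality dimensions it can influence, with an operational indicator
# per dimension. Indicator names follow Table V; monthly values are hypothetical.

quality_indicators = {
    "payment of health care interventions": {
        "reliability": "% adjustments by the finance department",
        "responsiveness": "average days between receipt of tickets and payment",
        "communication": "% of unjustified claims",
    },
    "office and desk activities": {
        "reliability": "number of cash register corrections",
        "access": "deviations between actual and announced opening hours",
    },
}

monthly_values = {                   # measured values for one office, one month
    ("payment of health care interventions", "responsiveness"): 1.6,
    ("payment of health care interventions", "reliability"): 0.8,
    ("office and desk activities", "access"): 2,
}

# A combined monthly report puts quality next to the productivity index,
# so that staffing decisions are not driven by productivity alone.
productivity_index = 88.2
print(f"productivity index: {productivity_index:.1f}%")
for (activity, dimension), value in monthly_values.items():
    label = quality_indicators[activity][dimension]
    print(f"{activity} / {dimension}: {label} = {value}")
```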
Discussion and limitations – suggestions for further research
The developed methodology allows one to assess both productivity and quality at the level of the concrete service delivery process. This integrated methodology is built up by combining tools and insights from process mapping, activity-based management and quality function deployment, and has proven to be useful in a service environment. The methodology has some particular advantages over other techniques to measure service productivity. Moreover, the exercise of developing operational performance measures is a condition sine qua non for a meaningful discussion about service (operations) strategy. Both of these points will be discussed.

The activity-based management approach to developing productivity indicators has some advantages over data envelopment analysis (DEA). Whereas DEA treats the different activities and processes as a "black box", linking (multiple) inputs with (multiple) outputs without considering the mediating process, here one starts to see in more detail where problems are situated within the process. Moreover, our approach can be considered as a way to increase the discrimination among efficient units in DEA, a problem which has been extensively discussed (O'Neill, 1997). On the other hand, it should be observed that DEA allows one to look at multiple inputs and outputs simultaneously; here the different activities are analyzed separately, neglecting possible interconnectedness. Juxtaposing both approaches might prove fruitful.

Experience from the case studies teaches one that developing a scorecard with productivity indicators only always results in a discussion about differences in quality level. In order to prevent such discussions, we strongly advise building a scorecard including quality as well as productivity measures. The development of such an integrated scorecard will reveal the relationship between the various measures. The case material shows that this relationship is not always straightforward. If management tracks productivity indicators and starts using them for manpower planning decisions, this might lead to lower quality levels. The occurrence of peaks is inherent in the variable nature of services. Stressing productivity too much might lead to a deterioration in responsiveness. Here the relationship between productivity and quality takes on a trade-off character. There is further empirical evidence that productivity and quality do not necessarily improve simultaneously. According to a recent study by Pue (1996) on high-contact services, productivity efforts to reduce the time allocated per transaction may lead to lower quality levels. It is important to note that the quality of a transaction in Pue's study is a function of the time allocated to this
transaction compared to the customer's expectations. In high-contact services, these expectations may be higher than in other services. The point we want to make here is that the relationship, and eventually the trade-off, between quality and productivity must be looked upon within a strategic perspective. Models such as the balanced scorecard (Kaplan and Norton, 1996) and the customer account based matrix (Roth and Van der Velde, 1991) may be useful in positioning the operational indicators within a service strategic framework. Such a strategic orientation allows one to take explicitly into account the variety of customer needs. This variety can lead to a different emphasis on the importance of quality dimensions and productivity indicators. There is already some evidence that the availability of capacity, which Miller and Adam (1996) call "slack", is an intermediary variable in the relationship between quality and productivity (Armistead and Graham, 1994; Miller and Adam, 1996). For instance, in high-contact services, flexibility may be necessary to realize high levels of productivity and quality simultaneously (Roth and Van der Velde, 1991; Siferd et al., 1992). But this can increase the cost of service delivery, which might affect profitability. It is clearly necessary to dig further into the relationship between productivity and quality within a service strategic framework. The main contribution of our study to the development of service operations strategy is that an integrated scorecard at least delivers concrete and operational indicators with which to start a strategic discussion.

A final comment is related to the process of developing and using these types of indicators. As the final objective is to arrive at a better insight into actual performance in order to improve it, attention to the development process is always important. One bears the risk of creating indicators that are perceived and/or used as one-sided control mechanisms, leading to possible defensive reactions by employees (Argyris, 1993). As no real process can be documented so as to cover everything, the indicators will always be an approximation of the real process and there will always be room for "cooking the figures". Avoiding such defensive or even counter-productive ways of working with the indicators can only be achieved by involving employees in the development process and by situating these tools in a constructive and collaborative working relationship between management and employees.

References and further reading
Antos, J. (1992), "Activity-based management for service, not-for-profit and governmental organizations", Journal of Cost Management, Summer, pp. 13-23.
Argyris, C. (1993), On Organizational Learning, Blackwell, Oxford.
Armistead, C. and Graham, C. (1994), "The 'coping' capacity management strategy in services and the influence on quality performance", International Journal of Service Industry Management, Vol. 5 No. 2, pp. 5-22.
Bylund, E. and Lapidoth, J. Jr (1994), "Service quality and productivity: a post-industrial approach", Proceedings from the 3rd International Research Seminar in Service Management, Aix-en-Provence, France, May.
Campi, J.P. (1992), "It's not as easy as ABC", Journal of Cost Management, Summer, pp. 5-11.
Chase, R. and Aquilano, N. (1995), Production and Operations Management: Manufacturing and Services, Irwin, Homewood, IL.
Congram, C. and Epelman, M. (1995), "How to describe your service: an invitation to the structured analysis and design technique", International Journal of Service Industry Management, Vol. 6 No. 2, pp. 6-23.
Cooper, R. and Kaplan, R.S. (1988), "Measure costs right: make the right decisions", Harvard Business Review, September-October, pp. 96-103.
Cooper, R. and Kaplan, R.S. (1991), "Profit priorities from activity-based costing", Harvard Business Review, May-June, pp. 130-5.
Fitzsimmons, J.A. and Fitzsimmons, M.J. (1994), Service Management for Competitive Advantage, McGraw-Hill, New York, NY.
Garvin, D.A. (1984), "What does product quality really mean?", Sloan Management Review, Fall, pp. 25-43.
Garvin, D.A. (1987), "Competing on the eight dimensions of quality", Harvard Business Review, November-December, pp. 101-09.
Gemmel, P., Heylen, K. and Van Dierdonck, R. (1994), "Measuring hospital productivity: a review and the case of Belgian hospitals", Working Paper, EIASM Conference on Service Productivity, Brussels, pp. 1-22.
Grönroos, C. (1978), "A service-oriented approach to marketing of services", European Journal of Marketing, Vol. 12 No. 8, pp. 588-601.
Grönroos, C. (1984), "A service quality model and its marketing implications", European Journal of Marketing, Vol. 18 No. 4, pp. 36-44.
Gummesson, E. (1993), "Service productivity, service quality and profitability", Keynote Address at the Eighth International Conference of the Operations Management Association: Service Superiority – The Design and Delivery of Effective Service Operations, Warwick Business School.
Gummesson, E. (1994a), "Service quality and productivity in the imaginary organization", Proceedings from the 3rd International Research Seminar in Service Management, Aix-en-Provence, France, May.
Gummesson, E. (1994b), "A perspective on service productivity", Introductory Address, The First International Research Workshop on Service Productivity, hosted by Stockholm University and EIASM, Brussels, Belgium, October 3-4.
Hauser, J.R. and Clausing, D. (1988), "The house of quality", Harvard Business Review, May-June, pp. 63-73.
Hornbrook, M.C. (1982), "Hospital case mix: its definition, measurement and use: part I – the conceptual framework", Medical Care Review, Vol. 39 No. 1, Spring, pp. 1-42.
Hussain, M.M. and Kock, C. (1995), "Activity-based costing in service management", in Lemmink, J. and Kunst, P. (Eds), Managing Service Quality, Paul Chapman, Maastricht, pp. 167-76.
Johnston, R. and Silvestro, R. (1990), "The determinants of service quality – a customer based approach", Proceedings of the 1st International Research Seminar in Service Management, Aix-en-Provence, France, June.
Kaplan, R.S. and Norton, D.P. (1996), The Balanced Scorecard, Harvard Business School Press, Boston, MA.
Lemmink, J. and Behara, R. (1992), "Q-Matrix: a multi-dimensional approach to using service quality measurements", in Kunst, P. and Lemmink, J. (Eds), Quality Management in Services, Van Gorcum, Assen/Maastricht, The Netherlands, pp. 79-87.
McLaughlin, C.P. (1995), "Why variation reduction isn't everything: a new paradigm for service operations", Paper Prepared as Keynote Presentation for the International Research Symposium on Services Management, Dorset Business School, September 8.
McLaughlin, C.P. and Coffey, S. (1990), "Measuring productivity in services", International Journal of Service Industry Management, Vol. 1 No. 1, pp. 46-63.
McLuhan, M. (1964), Understanding Media, McGraw-Hill Book Company, New York, NY, cited in Shostack, G.L. (1987).
Miller, J.L. and Adam, E.E. Jr (1996), "Slack and performance in health care delivery", International Journal of Quality and Reliability Management, Vol. 13 No. 8, pp. 63-74.
Morris, B. and Johnston, R. (1987), "Dealing with inherent variability: the difference between manufacturing and service", International Journal of Operations and Production Management, Vol. 7 No. 4, pp. 13-22.
O'Neill, L. (1997), "Measuring hospital performance with DEA: an alternative approach", Proceedings of the 1997 Annual Meeting, 22-25 November, San Diego, CA, Vol. 1, pp. 55-7.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1985), "A conceptual model of service quality and its implications for future research", Journal of Marketing, Fall, pp. 41-50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), "Servqual: a multiple item scale for measuring consumer perceptions of service quality", Journal of Retailing, Vol. 64 No. 1, Spring, pp. 12-39.
Pue, R.O. (1996), "A dynamic theory of service delivery: implications for managing service quality", PhD dissertation, MIT, Cambridge, MA.
Roberts, M.W. and Silvester, K.J. (1996), "Why ABC failed and how it may yet succeed", Journal of Cost Management, Winter, pp. 23-35.
Rotch, W. (1990), "Activity-based costing in service industries", Journal of Cost Management, Summer, pp. 4-13.
Roth, A.V. and Van der Velde, M. (1991), "Operations as marketing: a competitive service strategy", Journal of Operations Management, special issue on linking strategy formulation in marketing and operations: empirical research, Vol. 10 No. 3, pp. 303-28.
Schonberger, R. and Knod, E. (1994), Operations Management: Continuous Improvement, Irwin, Homewood, IL.
Serneels, S. and Van Looy, B. (1994), "Activity-based management: ontwerp van een praktisch beleidsinstrument voor kostenbeheersing in een ziekenhuis" [Activity-based management: design of a practical policy instrument for cost control in a hospital], Acta Hospitalia, No. 3, pp. 26-35.
Sherman, S.D. (1984), "Hospital efficiency measurement and evaluation", Medical Care, Vol. 22 No. 10, October, pp. 922-37.
Shostack, G.L. (1984), "Designing services that deliver", Harvard Business Review, January-February, pp. 133-9.
Shostack, G.L. (1987), "Service positioning through structural change", Journal of Marketing, Vol. 51 No. 1, pp. 34-43.
Siferd, S., Benton, W. and Ritzman, L. (1992), "Strategies for service systems", European Journal of Operational Research, Vol. 56, pp. 291-303.
Staughton, R.V.W. and Williams, C.S. (1994), "Towards a simple, visual representation of fit in service organizations: the contribution of the service template", International Journal of Operations and Production Management, Vol. 14 No. 5, pp. 76-85.
Stauss, B. (1992), "Service problem deployment: transformation of problem information into prevention activities", International Journal of Service Industry Management, Vol. 4 No. 2, pp. 41-62.