USING A RECOGNITION AND REWARD INITIATIVE TO IMPROVE SERVICE QUALITY: A QUASI-EXPERIMENTAL FIELD STUDY IN A PUBLIC HIGHER EDUCATION INSTITUTION

Richard E. Kopelman, Baruch College, NY, NY 10010, [email protected]
Naomi A. Gardberg, Baruch College, NY, NY 10010, [email protected]
Ann Cohen Brandwein, Baruch College, NY, NY 10010, [email protected]

ABSTRACT

We describe a service quality initiative that focused on improving the work behavior and job attitudes of employees in a job category that is often overlooked, yet integral to the success of most public sector organizations: administrative assistants. Data on service quality were collected on an ongoing basis by an independent entity, Educational Benchmarking, Inc. (EBI). Faculty ratings of administrative assistance improved by more than 5 Standard Errors of Measurement relative to the other EBI measures. Survey data from administrative assistants were highly favorable. The present action research suggests that a recognition and reward intervention can improve service excellence in a public sector higher education organization.

INTRODUCTION

One approach to enhancing service quality is via employee recognition and reward programs. Although such programs have been found effective in the private sector [3], virtually no research has directly examined their effectiveness in a public sector organization. It might be assumed that recognition and reward programs will transfer from private to public sector organizations because the two sectors share some characteristics; but they differ in important ways. As Ruben [10] has noted in connection with a longstanding service enhancement initiative at Rutgers University, public sector higher education institutions are characterized by: (1) "ultrastability" of the workforce; (2) limited availability of incentives and disincentives; and (3) complex bureaucratic structures. In the present investigation, we examine the effects of a service excellence recognition and reward program on perceptions of service quality in a public higher education institution. The investigation also contributes by focusing on an often overlooked job category, administrative assistants. Although usually located at lower levels of organization charts, administrative assistants can be integral to an organization's performance.
They frequently serve in a boundary-spanning capacity, e.g., as the receptionist or clerical person who is first encountered by a client or customer. In this role, they help create the initial impression and set the tone for the service/product encounters that ensue. Of course, administrative assistants also handle the myriad details that are essential for office efficiency (e.g., ordering supplies, scheduling meetings, completing various forms). At one extreme, an administrative assistant can (passive-aggressively) simply tell visitors, "Not in; stop back later"; at the other extreme, the administrative assistant can actively listen, apply initiative, and assist the caller in solving his or her query or problem.

INTERVENTION AND HYPOTHESES

In this section we describe the Service Excellence Initiative (SEI) that was undertaken and advance four hypotheses pertinent to evaluating one of its components. In the summer of 2003, the Dean of a very large business school (hereafter VLBS) articulated the organization's mission and issued a call for creating a culture of service excellence encompassing both technical and administrative support. The SEI focused on enhancing the administrative support provided by frontline personnel in assisting faculty, department chairs, students, and prospective students. A primary goal of the SEI was to recognize and reward outstanding work performance by administrative assistants, and thereby enhance the overall quality of service provided. To implement this goal, a SEI Task Force was created comprising faculty, students, administrators, and administrative assistants (the focal group to be recognized and rewarded). At the initial "kick-off" meeting the Dean presented his vision for the project and communicated his enthusiastic support, which included the provision of financial resources to recognize and reward outstanding administrative staff members, a population that previously had never received any accolades. The Service Excellence Initiative Recognition and Reward (SEIRR) intervention and the evidence of its effectiveness are described below.

Service Excellence Initiative Recognition and Reward (SEIRR) Intervention

A web-based form was developed for soliciting nominations that could be accompanied by narratives of specific examples of excellent service (i.e., critical incidents) provided by the nominated administrative assistant(s). Requests for nominations (and narratives) were sent via email to all faculty, students, and staff of VLBS. It should be noted that all administrative assistants were first contacted to confirm their willingness to be included in the pool of potential awardees before nomination notices were sent.
Award recipients were determined by the Task Force based on the number of nominations received along with the poignancy of the narratives provided. Four awards were given in year 1 and five in each of the next two years. Upon receiving the SEIRR Award, an individual was excluded from eligibility during the subsequent two academic years. The number of eligible employees fluctuated between 36 and 49 over the three-year period. Between the first and second years, the total number of nominations increased by more than 350 percent. Although the total number of nominations dipped in year 3, nominations per employee increased from 4.3 to 10.5 to 12.7. This pattern suggests that members of the VLBS community were not only becoming more aware of the SEIRR program but also perceived it as a way to recognize excellent service. During its first year the SEIRR initiative was begun with no prior notice; consequently there should have been no incentive effect, just a possible reward effect. In fact, it was first announced during the Honors and Recognition Ceremony, when the first recognition and reward cycle concluded, that awardees would receive a payment of $1,000 and a plaque. The Honors and Recognition Ceremony was well attended because more than 100 students were inducted into the Beta Gamma Sigma and Sigma Iota Epsilon honor societies, and faculty members were recognized for their outstanding achievements. In addition to having their names appear in the Award booklet, administrative assistant awardees subsequently received additional forms of recognition: a group photo and a news story appeared prominently on the College's web site; articles appeared in the student newspapers; and a personal letter of appreciation was sent by the Dean. Importantly, the Award ceremony entailed having the Dean read some of the most poignant comments provided by (the anonymous) nominators on behalf of each Award recipient. This form of public acknowledgement made the ceremony particularly meaningful. A few excerpts from the Dean's recent comments follow and provide an overall sense (Gestalt) of the ceremony: "As you can see on the program, the first order of business is our Service Excellence Awards. I want to tell you why we have decided to give service awards to administrative staff members. These are the folks you see when you first enter offices [or] who work behind the scenes in order to get students served, systems working… It is with these Awards that [VLBS] is able to acknowledge those members of our community who provide outstanding service….I would like to quote some of what has been written…[Name] is unfailingly cheerful and always ready to help you with a problem. Who else could find a box of old-fashioned transparency blanks?—or would take the time to find them?" After the first year, it was hoped that this combination of recognition and rewards would have incentive value insofar as future potential awardees might be more fully aware of what to expect. Because the SEIRR intervention was intended primarily to influence the work behavior and job attitudes of administrative assistants at VLBS, we advanced the following hypotheses:

H1: EBI faculty ratings of "secretarial assistance" will increase from pre-intervention to post-intervention.

H2: Faculty ratings of "secretarial assistance" will increase more than faculty ratings on the other 83 EBI items.

H3: Faculty ratings of "secretarial assistance" will increase more at VLBS than among peer/aspirant schools.

Administrative assistants at VLBS performed jobs that were often stressful and did not offer high pay. In public organizations there often are rigid classification systems whereby longevity primarily drives compensation.
Consequently, we anticipated that the recognition and reward initiative would be positively received by the intended recipients. Thus, we posited:

H4: Administrative assistants at VLBS will have positive attitudes regarding the SEIRR intervention.

METHOD

Samples and Procedures

We utilized two data sets to test our hypotheses. Hypotheses 1-3 were tested using data from Educational Benchmarking, Inc. (EBI), an independent survey entity. There are two key advantages to using EBI data. First, scores were collected at two points in time, allowing for a longitudinal design. Second, the fact that EBI is an independent source mitigates the potential confound of common method variance. We also conducted our own survey of the target population, the administrative staff.

Measures

EBI. Evaluative data provided by EBI were obtained from faculty during the spring of 2004, when the first year's initiative had just begun, as well as during the spring of 2006, when two years of the SEIRR intervention had been completed and the third year's effort was underway. The EBI survey comprises 84 items grouped into ten major categories, including faculty development, faculty teaching, and so forth. Nearly all of the 84 EBI survey items were conceptually unrelated to the behavior and performance of the focal population, administrative assistants at VLBS. However, one item should have been directly affected by the SEIRR intervention, namely Question 10: faculty satisfaction with "secretarial assistance." (The term "secretarial assistance" is how Question 10 is worded on the EBI survey.) The other 83 items logically should not have been affected by the SEIRR intervention. For example, the 17 items on satisfaction with faculty development pertained to such matters as classroom technology and salary. That most EBI items were unrelated to the SEIRR intervention enabled us to employ a program evaluation procedure that parallels Chen's theory-driven evaluation methodology [4], i.e., one incorporating the basic concepts of convergent and discriminant validity as explicated by Campbell and Fiske [2] 50 years ago. In 2006 we supplemented the EBI survey with four items that asked about faculty satisfaction with practices that should have been affected directly by the SEIRR intervention: the friendliness/courtesy and knowledge/professionalism of administrative assistants. (Business schools are permitted to add up to 10 institution-specific questions to the EBI survey.) The EBI survey feedback process enabled VLBS to compare itself with six peer (and/or aspirant) business schools, entities that essentially served as a comparison condition. It is important to note that EBI restricts the use and reporting of its data to protect the confidentiality of participating institutions. Accordingly, we report comparisons in the form of index numbers, with ratings in 2003-2004 set at 100.

SEIRR Questionnaire. During the summer and fall of 2006 the SEIRR coordinators distributed a 28-item questionnaire that was completed anonymously and voluntarily by administrative assistants.
Key attitudinal questions concerned whether the respondent: (1) thought prior recipients were deserving; (2) had previously been an Award recipient; (3) thought the nomination process was fair; (4) thought the Award process might encourage their colleagues to improve the service they provide; (5) thought the Award process might encourage improvement in their own service; and (6) thought that the Award program should be continued. Comments were also elicited regarding the attitudinal questions.

RESULTS

EBI Data for Hypotheses 1-3

We tested Hypotheses 1-3 using the EBI measures described above. EBI data were provided by 57 and 95 faculty members at VLBS during the spring of 2004 and 2006, respectively. Hypothesis 1 predicted that faculty evaluations of secretarial assistance would increase following the intervention. Scores on Question 10, satisfaction with "secretarial assistance," increased by 11.3%; however, the improvement did not achieve statistical significance (t = .98, p = .16, one-tailed). The difference did correspond to a noticeable, yet small, effect size (d = .17) based on Cohen's criteria [6]. Consequently, we performed a power analysis using Power & Precision software to determine the probability of finding statistical significance given the effect size and sample size [1]. Statistical power is defined as the probability of rejecting the null hypothesis when it is false and should be rejected [5]. Large sample sizes are required to detect small effects. Given that our sample size was only n = 57 at T1 and n = 95 at T2, the probability of achieving statistical significance was only 17 percent (assuming alpha = .05).
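Readers who wish to verify the power figure can do so with standard noncentral-t machinery. The sketch below is a reconstruction under stated assumptions, not the authors' actual Power & Precision run: it treats the comparison as an independent two-sample, two-sided t-test with the reported d = .17 and group sizes of 57 and 95.

```python
import numpy as np
from scipy import stats

# Reported inputs: effect size d = .17, n = 57 (T1), n = 95 (T2), alpha = .05
d, n1, n2, alpha = 0.17, 57, 95, 0.05

df = n1 + n2 - 2                               # degrees of freedom
ncp = d * np.sqrt(n1 * n2 / (n1 + n2))         # noncentrality parameter
t_crit = stats.t.ppf(1 - alpha / 2, df)        # two-sided critical value

# Power = P(|t| exceeds the critical value | true effect d),
# evaluated under the noncentral t distribution
power = stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)
print(f"power = {power:.2f}")
```

A one-tailed variant (use alpha rather than alpha/2 and drop the lower tail) yields somewhat higher power; the two-sided setup shown here comes closest to the 17 percent figure reported in the text.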

Importantly, it has been noted previously that a phenomenon may have a small effect size, statistically, yet be highly meaningful in a practical sense. For example, Meyer et al. [8] reported that the effect size associated with ever smoking and the onset of lung cancer within 25 years is .08, an effect size one-half as large as that observed in the present intervention. Hypothesis 2 posited that faculty ratings of "secretarial assistance" would increase more than faculty ratings on the other 83 EBI items. To test H2, we examined EBI data on an ipsative basis, viz., in terms of the magnitude of relative change across all 84 EBI items. On this basis, the change for Question 10 was the 5th largest among the 84 items. Using the test-retest correlation between items as an indicator of reliability, the change in Question 10 corresponded to a change of 5.49 Standard Errors of Measurement (SEM), p < .001. Additionally, as noted above, VLBS added four new items to the 2006 EBI survey relating to the friendliness/courtesy and knowledge/professionalism of administrative assistants. Scores on the four new items were compared to mean 2006 scores on the 83 items excluding Question 10. The mean score on the four new items was significantly greater than on the other 83: ZSEM = 2.99, p < .001, one-tailed. These findings provide strong support for H2. Hypothesis 3 posited that the change in Question 10 would be greater at VLBS than at peer/aspirant schools. This hypothesis was supported using raw data: the difference between sample means in the magnitude of change yielded t = 3.84, p < .001. It was also supported using index data: the index of change was significantly larger at VLBS (ZSEM = 5.49) than at the peer/aspirant schools (ZSEM = -1.24).

Survey Data for Hypothesis 4

The response rate to the survey was high, at 76%.
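The SEM-based change statistic used above for H2 and H3 follows the classical-test-theory definition, SEM = SD x sqrt(1 - r), with the test-retest correlation standing in for reliability r. A minimal sketch; the numbers in the usage line are hypothetical illustrations, not the study's raw data:

```python
import math

def change_in_sem_units(pre: float, post: float, sd: float, reliability: float) -> float:
    """Express a pre-to-post change in Standard Errors of Measurement.

    SEM = SD * sqrt(1 - reliability), where reliability is estimated here
    by the test-retest correlation between survey administrations.
    """
    sem = sd * math.sqrt(1.0 - reliability)
    return (post - pre) / sem

# Hypothetical illustration: an index rising 100 -> 110 with SD = 10 and
# test-retest r = .96 corresponds to a change of 5.0 SEMs.
print(change_in_sem_units(100.0, 110.0, 10.0, 0.96))  # 5.0
```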
Hypothesis 4 suggested that administrative assistants at VLBS would have positive attitudes regarding the SEIRR intervention. We measured perceptions of how deserving recipients were using the item “On the whole, how deserving were the recipients of the Service Excellence Award?” The names of the 14 Award recipients to date appeared above the item. All respondents (100%) found prior recipients to be deserving of the Award: 75 percent rated prior recipients as “very deserving” and 25 percent rated them “somewhat deserving”. There was no statistically significant difference in deservedness ratings between respondents who had received the Award and those who had not. The Fisher Exact Test yielded p levels of close to 1 and .41, respectively. Sixty-nine percent of respondents felt the process was fair (p < .01), and there was no relationship between fairness responses and whether the respondent was an Award recipient (p = .41). To gauge whether the award influenced service quality, we asked two questions: (1) whether the Award initiative has improved the work of one’s colleagues; and (2) whether the Award initiative improved the respondent’s own work effort. Fifty-nine percent of respondents felt that the Award encouraged colleagues to improve service, (p < .05), but only 28 percent of respondents thought that the SEIRR intervention had improved their own work performance, a significant difference (p < .05). Evidently, respondents attributed their own work behavior to internal causes, whereas their colleagues were seen as more influenced by external factors such as recognition and rewards. Sample comments included: “I don’t work to get rewards;” “I work hard because I like what I do;” and “I always provide the best service that I can.”
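The Fisher Exact Tests reported here operate on 2x2 tables of counts, e.g., prior Award recipient (yes/no) crossed with a yes/no attitude response. A sketch with scipy follows; the table entries are hypothetical, since the paper reports only percentages rather than raw counts:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = prior Award recipient (yes/no),
# columns = rated the nomination process fair (yes/no).
table = [[8, 3],
         [14, 7]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# A large p (as with the values near 1 and .41 in the text) indicates that
# the attitude response is statistically unrelated to recipient status.
```

The exact test is appropriate here because the cell counts in a staff survey of this size are small enough that chi-square approximations would be unreliable.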

Finally, we sought to find out whether administrative assistants thought that the program should be continued. Eighty-nine percent of respondents provided an affirmative response (p < .001). Using the Fisher Exact Test, affirmative responses were unrelated to having been an Award recipient (p close to 1.0).

DISCUSSION AND CONCLUSION

Summarizing the results in connection with the hypothesized improvement in faculty perceptions of "secretarial assistance" (H1), we found that the SEIRR intervention yielded a non-significant positive change using raw EBI data. Examining the data on an ipsative (within-school) basis, the percentage change in perceptions of "secretarial assistance" at VLBS was the 5th largest of the 84 changes. Compared to the mean change in the other 83 faculty perceptions at VLBS, the change in this item was statistically significant, a finding supportive of H2. Likewise, the four new questions added to the EBI survey that specifically addressed administrative assistants' work behavior and job performance were rated more highly than the average rating at VLBS during T2, adding further support to the predicted relative improvement in "secretarial assistance." We predicted that faculty ratings of "secretarial assistance" would show a more positive change at VLBS than at comparison schools. H3 was supported based on raw EBI data. In terms of relative changes, VLBS showed a significant increase in the focal index (+11.3 percent) whereas the comparison schools showed a non-significant decrease (-6.4 percent). Administrative assistants at VLBS had a positive attitude toward the SEIRR intervention: they unanimously thought that prior recipients were deserving, and most thought that the process was fair and that the program should be continued. As Lachance [7] has observed, staff support requires that such a program be perceived as legitimate and fair, and that participants know who received awards and why they merited them.
Somewhat surprisingly, the majority of respondents (59 percent) thought that the SEIRR program would spur their colleagues on to improved work behavior and job performance, but only a minority of respondents (28 percent) thought the initiative had a positive effect on the service they themselves provided (Z = 2.52, p < .05, two-tailed test). From a research perspective, several limitations of the SEIRR intervention deserve mention. First, as noted above, the small number of faculty responding to the EBI survey made it unlikely (p = .16) that a statistically significant change would be found. Second, post-intervention data were obtained only two years after the SEIRR intervention commenced. We believe that a two-year pre-post measurement interval may have been insufficient time for administrative assistants to change the levels of service they provide, and for faculty to perceive an improvement. Third, we acknowledge that the SEIRR program might be characterized as a modest intervention. On average only 12 percent of eligible administrative assistants were granted an award annually, and with such a low proportion of award recipients, many, if not most, eligible employees probably viewed their chances of "winning" as low, even if they provided much improved service. Yet, arguably, the decision to provide superior service is not based solely on rational calculations. We believe that a key feature of the SEIRR intervention at VLBS was the high level of involvement and support provided by the Dean. Indeed, senior leadership involvement has been widely cited as a prerequisite for the success of a recognition program in all sectors of the economy [11]. Illustrative of this, Nelson [9] attributes the success of the recognition program at Marriott International to the fact that the CEO, Bill Marriott, leads the company's recognition and appreciation program. Little prior research has examined the effects of recognition programs in a nonprofit or public sector context. In light of the evidentiary support reported here, we conclude that an employee reward and recognition initiative can be effective in improving service quality. Given that the present intervention was neither costly nor particularly time-consuming, we believe that it could readily be transported to other public sector organizations. The key ingredients, in our opinion, are twofold: (1) a widely accessible and well-publicized nomination process; and (2) a well-attended Award ceremony in which the organization's Director plays a prominent role. In conclusion, our research suggests that a recognition and reward intervention can improve service quality in a public sector higher education institution, and probably in most large public sector organizations.

REFERENCES

[1] Borenstein, M., Rothstein, H., & Cohen, J. Power and Precision. Teaneck, NJ: Biostat, 1997.
[2] Campbell, D. T., & Fiske, D. W. "Convergent and discriminant validation by the multitrait-multimethod matrix," Psychological Bulletin, 1959, 56, 81-105.
[3] Cassidy, E., & Ackah, C. "A role for reward in organizational change?" Irish Business and Administrative Research, 1997, 18, 52-62.
[4] Chen, H. Theory-driven evaluations. Newbury Park, CA: Sage, 1990.
[5] Cohen, J. Statistical power analysis for the behavioral sciences (rev. ed.). New York: Academic Press, 1977.
[6] Cohen, J. "A power primer," Psychological Bulletin, 1992, 112, 155-159.
[7] Lachance, J. R. "International symposium of the International Personnel Management Association," Public Personnel Management, 2000, 29(3), 305-313.
[8] Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R., Eisman, E. J., Kubiszyn, T. W., & Reed, G. M. "Psychological testing and psychological assessment: A review of evidence and issues," American Psychologist, 2001, 56, 128-165.
[9] Nelson, B. "Recognition programs need support foundation," Denver Business Journal, 2006, www.bizjournals.com/extraedge/consultants/return_on_people/2006/06/05/_column360.html
[10] Ruben, B. D. "The Center for Organizational Development and Leadership: A case study," Advances in Developing Human Resources, 2005, 7, 368-395.
[11] Saunderson, R. "Survey findings of the effectiveness of employee recognition in the public sector," Public Personnel Management, 2004, 33(3), 255-275.
