Federal Merit Pay, Round II:
An Analysis of the Performance Management and Recognition System

James L. Perry, Beth Ann Petrakis, and Theodore K. Miller

Public Administration Review, Vol. 49, No. 1 (January-February 1989), pp. 29-37

Pay-for-performance programs have become increasingly popular in recent years. The federal government introduced the Merit Pay System (MPS) for managers under the Civil Service Reform Act of 1978. The failure of MPS led, in 1984, to creation of the Performance Management and Recognition System (PMRS). The effectiveness of PMRS is analyzed using a sample of managers from the U.S. General Services Administration. The results indicated that PMRS had some positive effects on performance in 1986 but that they were substantially reduced in 1987. Thus, PMRS has not demonstrably achieved its ultimate goal of improving performance in the federal sector. Continuing research is needed about the consequences of merit pay systems for organizational performance.
A decade ago, Congress passed the U.S. Civil Service Reform Act of 1978 (CSRA). The merit pay provisions of the 1978 reforms were hailed as a means for making federal managers and their organizations more responsive, efficient, and effective. Merit pay proved instead to be demoralizing and counterproductive. Among its shortcomings were inadequate funding, pay inequities between managers and nonmanagers, and invalid performance appraisals.1 Congress sought to remedy these problems in 1984 by creating the Performance Management and Recognition System (PMRS), which covers grade 13, 14, and 15 supervisors and managerial officials and which was intended to strengthen pay-for-performance principles. This article assesses the effectiveness of PMRS. It reports the results of a study of PMRS in the U.S. General Services Administration (GSA). The empirical analysis in the present study focuses on two questions: (1) to what extent do performance ratings and the distribution of rewards conform to what would be expected from an effective merit pay system? and (2) to what degree does merit pay influence employees' future performance?
Design of PMRS

Pay-for-performance programs are usually predicated on Vroom's expectancy theory.2 Expectancy theory posits that if individuals expect to receive a valued reward for high performance, they are more likely to strive to perform at high levels than when no such payoff is anticipated. Merit pay is expected to increase effort by changing the probability that performance will lead to a monetary reward that is assumed to be positively valued by most managers. Thus, two elements important for realizing the results predicted by expectancy theory are rewards sufficient to motivate high performance and a system for discriminating among differing levels of employee performance. Although the CSRA Merit Pay System (MPS) did not take effect for most federal managers until 1981, it quickly became apparent that it did not meet these requirements. Managers who performed satisfactorily but were exposed to the greater financial risks and opportunities of merit pay often found themselves achieving lower rewards than their nonmanagerial counterparts. By early 1984, MPS had created so much controversy that it was described by Congresswoman Mary Rose Oakar as a "disincentive millstone on the back of the Federal Government."3
Relief from the CSRA merit pay program grew out of legislation introduced in 1984 by Senators Trible and Warner and Representative Wolf that proposed a Performance Management and Recognition System (PMRS). PMRS was enacted on November 8, 1984, but the first payout under PMRS was made retroactive to the Fiscal Year 1984 performance cycle. The drafters of the PMRS legislation sought to eliminate the dysfunctions of MPS. Under PMRS, employees are required to be rated at one of five levels, with two levels above and two levels below fully successful. PMRS consists primarily of three monetary components.
Employees who are rated fully successful or better are assured of receiving the full general pay or comparability increase. They are also eligible for merit increases that are equivalent to within-grade increases. The size of the merit increase depends on an employee's position in the pay range and performance rating. In addition to these monies, employees who are rated fully successful or above also qualify for performance awards or bonuses. Beginning in Fiscal Year 1986, performance awards of no less than 2 percent and no more than 10 percent became mandatory for employees rated two levels above fully successful. An upper limit of 1.5 percent of payroll for all performance awards was placed on agency payouts under the system.
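To make these provisions concrete, the sketch below encodes the eligibility logic described above in Python. It is an illustrative reading of the rules as summarized in this article, not the statutory formula; the rating labels, function names, and example figures are assumptions made for illustration.

```python
# Illustrative sketch (not the statutory formula) of the PMRS monetary components
# described above. Rating labels and the example payroll figures are assumptions.

RATINGS = ["unacceptable", "marginally acceptable", "fully successful",
           "highly successful", "outstanding"]

def pmrs_components(rating: str) -> dict:
    """Return which PMRS monetary components an employee qualifies for."""
    level = RATINGS.index(rating)
    eligible = level >= RATINGS.index("fully successful")
    return {
        # The full general (comparability) increase is assured at fully successful or better.
        "general_increase": eligible,
        # Merit increase eligibility; the actual size depends on the rating and the
        # employee's position in the pay range.
        "merit_increase_eligible": eligible,
        # Performance award (bonus) eligibility; for "outstanding" (two levels above
        # fully successful) an award of 2-10 percent became mandatory beginning FY 1986.
        "performance_award_eligible": eligible,
        "mandatory_award_range_pct": (2.0, 10.0) if rating == "outstanding" else None,
    }

def awards_within_cap(total_awards: float, payroll: float) -> bool:
    """Agency-wide constraint: total performance awards may not exceed 1.5% of payroll."""
    return total_awards <= 0.015 * payroll

if __name__ == "__main__":
    print(pmrs_components("outstanding"))
    print(awards_within_cap(total_awards=140_000, payroll=10_000_000))  # True: 1.4% of payroll
```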
PMRS also provided for creation of Performance Standards Review Boards, modeled after the Performance Review Boards in the Senior Executive Service, to review performance standards within an agency to assure their validity and to perform other oversight functions. At least half of each board is required to be made up of merit pay employees. The number and functioning of these boards were left to agency discretion.

Methods

Research Design

This research used a sample of federal managers to test whether specified performance awards and other characteristics could predict performance ratings in subsequent performance periods. In 1985, a sample of merit pay employees was drawn randomly from GM 13-15 personnel employed by the U.S. General Services Administration (GSA). Data for the same employees were obtained at three-month intervals (December 1, March 1, June 1, September 1) from September 1985 through December 1987. All data were taken from the GSA central personnel data file. Four hundred and ninety-six PMRS employees were assigned a special case identifier for the study, and all identifying information was eliminated from the datafile. Because PMRS was initiated retroactively, it was not possible to obtain baseline data prior to initiation of PMRS. However, data collection commenced in September 1985, one month before the first payout, after which employees were aware of the new procedures.

Variables and Hypotheses

The analysis strategy was to assess the consequences of particular personnel actions (for example, merit increases, performance awards) on employee performance (as measured by supervisory performance ratings). In functional form:

EPR_t = f(PA_{t-1}, MI_{t-1}, R_{t-1}, SI_{t-1}, YP, YG, D, EPR_{t-1})

where t is the year, EPR is employee performance rating, PA is performance award, MI is merit increase, R is recognition award, SI is the amount of salary increase, YP is years in position, YG is years in grade, and D is employee demographic/background characteristics.

The dependent variable used in the analysis was an employee's performance rating. The performance ratings theoretically could take on five values, corresponding to ratings of "unacceptable," "marginally acceptable," "fully successful," "highly successful," and "outstanding." Because there were so few observations for the unacceptable and minimally acceptable ratings, only the other three performance appraisal outcomes were used in the analysis.

Three types of independent variables were measured: reward, experience, and demographic. The reward variables measured monetary and recognition awards received by an employee in the preceding year based upon performance. Four variables were used: (1) whether the employee had received a recognition award; (2) whether the employee had received a merit increase; (3) whether the employee had received a performance award; and (4) the percentage of salary increase. It was expected that prior receipt of a recognition award, a merit increase, or a performance award would increase the probability of high performance in the next appraisal cycle. Also anticipated was a positive relationship between the size of salary increases and future performance.

Two experience variables, years in position and years in grade, were measured. The likely relationship between these variables and the dependent variable is difficult to predict because of several countervailing forces associated with tenure. For instance, one might expect that as employees spend more time in a position or grade they become more adept at their jobs and, therefore, they might have a better chance for a higher performance rating. In contrast, longer-term employees may be less interested in their work and may have plateaued in their careers, making them less likely to attain a higher performance rating. In any case, it is reasonable to control for these factors in this assessment.
A third set of variables represented demographic or background characteristics of the employee. These variables included whether the employee was located in headquarters or the field, educational level, grade, GSA organizational unit, and occupational classification.

Finally, an employee's past performance rating was used in the model as a control for possible correlation in ratings over time. This control helps to ensure that the analysis focuses on explaining changes in ratings from time period to time period, rather than simple ratings levels. A focus on rating change allows for an interpretation of results that can be more easily placed within the expectancy theory framework discussed above.

Statistical Analyses

Two types of statistical analysis were used in the study. First, descriptive statistics were computed for the distributions of key components of merit pay: performance ratings, merit increases, performance awards, and salary increases. The descriptive statistics permitted an assessment of whether the system was operating consistent with prescriptions drawn from expectancy theory. The second type of analysis was logistic regression, performed within the framework of the SAS CATMOD procedure.4 The analysis was done for two time periods (December 1986 and December 1987), and it employed performance rating as the dependent variable. Only the "fully successful," "highly successful," and "outstanding" categories of performance appraisal were used. The predictor variables were derived from the December 1985 and December 1986 data, and they included reward, experience, and demographic/background factors.

In logistic regression, the dependent variable is specified as a logarithmic function of an odds ratio. Each of the two odds ratios in this analysis has the probability of receiving an "outstanding" rating as the denominator, while one has the probability of receiving a "fully successful" rating, and the other a "highly successful" rating, in the numerator. The parameters of the logistic regression model are estimated by use of a maximum likelihood procedure, and significance tests are carried out within the context of a chi-square distribution.5 A positive (negative) sign for an estimate indicates that an increase in the predictor value increases (decreases) the value of the odds ratio, which implies that the numerator probability would be expected to increase (decrease) relative to the denominator probability. For purposes of this analysis, a level of significance equal to .05 was employed.
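The two comparisons just described can be written as generalized (baseline-category) logits with "outstanding" as the reference category. The linear-in-parameters form below is a standard way of expressing the model that CATMOD estimates; the symbols are our notation rather than the authors', with x denoting the vector of reward, experience, and demographic/background predictors.

```latex
% Two generalized logits with "outstanding" as the reference (denominator) category;
% x is the vector of reward, experience, and demographic/background predictors.
\[
\ln\!\left[\frac{P(\text{fully successful})}{P(\text{outstanding})}\right]
  = \alpha_{1} + \boldsymbol{\beta}_{1}^{\top}\mathbf{x},
\qquad
\ln\!\left[\frac{P(\text{highly successful})}{P(\text{outstanding})}\right]
  = \alpha_{2} + \boldsymbol{\beta}_{2}^{\top}\mathbf{x}
\]
```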
Results

Descriptive statistics for the merit pay variables are reported in Table 1. The performance ratings for each of the three years in the study were highly concentrated in the top two categories. Relatively few employees in the sample received ratings either below fully successful (5 in 1985; 1 in 1986; 3 in 1987) or at fully successful (28 in 1985; 67 in 1986; 60 in 1987).
Table 1
Frequencies for Performance Appraisal Ratings and Performance Awards, 1985-1987

[Table 1 reported frequencies and percentages for 1985, 1986, and 1987 in each category of performance rating (marginally acceptable, fully successful, highly successful, outstanding), PMRS merit increase (no increase, received merit increase), performance award (no award, award received), and percentage salary increase (0 through 10 percent); the cell values are not legible in this copy.]

* Performance award data did not appear in the 1987 datatape for the sample.
For each year, the large majority of employees were rated either "highly successful" or "outstanding." Although the distribution of performance ratings may be characterized as highly concentrated in each of the three years, it should be noted that the distribution in 1986, when the minimum two percent performance award for outstanding performers took effect, reflected a significant reduction in average ratings from the preceding year. A much higher proportion of the sample was rated "fully successful" compared to "highly successful" and "outstanding." However, the proportion of "outstandings" increased again in 1987.

The large proportions of employees rated "fully successful" or above had predictable consequences for merit increases and performance awards. Most employees (97% in 1985; 98.4% in 1986; 93.5% in 1987) qualified for merit increases. Over 60% of the sample received performance awards in both 1985 and 1986. Although many employees received merit increases, the amounts of these increases were quite small.6 Salary increases for most employees ranged from one percent to three percent in each of the three years. Performance awards are not reflected in these totals because they are distributed as nonrecurring cash bonuses.

Logistic Regression Analysis

Table 2
Logistic Regression Results for the Probability of "Fully Successful" versus "Outstanding" for 1986

                       Estimate   Standard Error   Chi-square   Probability
Recognition              -1.01         0.62            2.69         0.10
Merit Increase           -1.45         1.39            1.09         0.30
Performance Award        -1.22         0.57            4.59         0.03
Headquarters/Field       -0.29         0.55            0.27         0.60
% Salary Increase        -0.97         0.30           10.36         0.001
Years in Position        -0.03         0.09            0.12         0.73
Years in Grade            0.10         0.06            2.25         0.13

Table 3
Logistic Regression Results for the Probability of "Highly Successful" versus "Outstanding" for 1986

                       Estimate   Standard Error   Chi-square   Probability
Recognition              -0.71         0.48            2.17         0.14
Merit Increase           -1.34         1.15            1.36         0.24
Performance Award        -0.08         0.43            0.04         0.85
Headquarters/Field       -0.48         0.37            1.66         0.20
% Salary Increase        -0.52         0.20            6.85         0.01
Years in Position         0.07         0.07            1.04         0.30
Years in Grade           -0.008        0.05            0.03         0.86

Table 4
Logistic Regression Results for the Probability of "Fully Successful" versus "Outstanding" for 1987

                       Estimate   Standard Error   Chi-square   Probability
Recognition              -0.76         1.50            0.25         0.61
Merit Increase            1.16         1.06            1.20         0.27
Performance Award        -0.44         1.47            0.09         0.76
Headquarters/Field        0.70         0.54            1.72         0.19
% Salary Increase        -0.26         0.31            0.70         0.40
Years in Position        -0.03         0.09            0.13         0.72
Years in Grade            0.11         0.07            2.66         0.10

Table 5
Logistic Regression Results for the Probability of "Highly Successful" versus "Outstanding" for 1987

                       Estimate   Standard Error   Chi-square   Probability
Recognition              -2.30         1.40            2.69         0.10
Merit Increase            1.29         0.84            2.38         0.12
Performance Award         2.22         1.40            2.51         0.11
Headquarters/Field        0.19         0.34            0.33         0.57
% Salary Increase         0.02         0.12            0.04         0.85
Years in Position        -0.03         0.07            0.21         0.65
Years in Grade            0.07         0.05            2.01         0.16
Preliminary analysis indicated severe multicollinearity in the data involving the sets of organizational and occupational variables. The overall significance of each of these sets was tested by comparing the likelihood ratios for a model that contained all predictor variables and a model that excluded one of the sets. The differences of the likelihood ratios with the occupational set excluded were insignificant at the .25 level for both the 1986 and 1987 models, while those with the organizational set excluded were significant at the .05 level. This indicates that the set of occupational variables was not significant as a predictor of performance ratings and that the set of organizational variables was significant. Subsequent analysis, therefore, focused on models that did not include the occupational variables.
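The "difference of likelihood ratios" comparison described here is the standard nested-model likelihood-ratio test. The article does not write the statistic out; in its usual form it is

```latex
% Likelihood-ratio test for excluding a set of predictors; the degrees of freedom
% equal the number of parameters in the excluded set.
\[
G^{2} \;=\; -2\left[\ln L_{\text{reduced}} - \ln L_{\text{full}}\right]
      \;\sim\; \chi^{2}_{\Delta \mathrm{df}}
\]
```

Under this reading, a small G2 when the occupational set is dropped supports removing those variables, while the significant G2 for the organizational set supports retaining it.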
Reward and Experience Factors

The logistic regression results for the reward and experience factors appear in Tables 2 through 5. Results for the categorical demographic variables (grade, education, and organizational subunit) and the past performance appraisal control are not reported in the tables, but they are summarized in the next section. The results for comparisons between "outstanding" and each of the other two appraisal categories are shown independently for both 1986 and 1987. A positive sign for the estimate indicates that the predictor decreases the likelihood of an "outstanding" rating relative to either a "fully successful" or "highly successful" rating. A negative sign indicates that the predictor increases the likelihood of an "outstanding" rating relative to the other ratings.
The signs of the estimates for 1986 are generally consistent with expectations, but only a few of the relationships are significant at the .05 level. The association for performance award was significant for the comparison between an "outstanding" and a "fully successful" rating. The most significant predictor was percentage salary increase. Each equation indicates that as the percentage salary increase grew, the likelihood of an "outstanding" rating increased relative to both "fully successful" and "highly successful" ratings. None of the reward and experience variables were significant in the two equations used to predict 1987 performance ratings.
Demographic/Background Factors

The headquarters/field variable was not significant in any of the four equations, indicating that no systematic differences exist between ratings for headquarters versus field managers. The analysis of the other demographic variables proceeded somewhat differently, because each was composed of three or more categories. Separate regressions were run using each category of the other variables (grade, education, and organizational subunit) as the reference category. This permitted development of a matrix comparing each category of a variable with all others so that the categories could be ranked to reflect significant variations among them. The rankings are presented in Table 6.
For the grade variable, the results indicated that grade 14 incumbents at the end of 1985 were significantly more likely and grade 13 incumbents significantly less likely to receive an "outstanding" rating in 1986. No significant differences were found for 1987.

Significant differences among educational levels occurred only in 1986 for the log odds ratios between "highly successful" and "outstanding." Individuals with college degrees tended to do less well than both more educated individuals (employees with post-baccalaureate courses or degrees) and less educated individuals (employees with some college but no degree).

Significant differentials were found among organizational subdivisions for 1986 and 1987. In general, employees of the Office of Administration, Office of General Counsel, and Federal Supply Service were more likely to receive higher ratings than were their counterparts in other organizational units during the periods studied. Employees in the Public Buildings Service and Office of Inspector General were rated more strictly, making them significantly less likely to receive high ratings.
Table 6
Ranking of Categories of Demographic Variables According to the Ease or Difficulty of Getting an Outstanding Rating Relative to a Fully or Highly Successful Rating

                                            Aggregate Score*
                                    Fully Successful    Highly Successful
                                      1986     1987       1986     1987
Grade
  Grade 13                             -1        0         -1        0
  Grade 14                              1        0          1        0
  Grade 15                              0        0          0        0
Education
  High school or GED                    0        0          0        0
  Some college                          0        0          1        0
  Received bachelor degree              0        0         -3        0
  Post BA/1st professional degree       0        0          1        0
  Masters degree or PhD                 0        0          1        0
Organizational Unit
  Office of Administration              2        1          1        0
  Federal Supply Service                2        1          0        0
  Office of General Counsel             1        0          2        0
  Information Resource Management       1        0          0        0
  Federal Property Resources            0        0         -1        0
  Public Buildings Service             -1        0         -2        0
  Office of Inspector General          -5       -2          0        0
Past Performance Rating
  Fully Successful                      0        0          0        0
  Highly Successful                     0       -1         -1       -1
  Outstanding                           0        1          1        1

* The aggregate score is a summary measure. It is computed from SAS CATMOD procedure results that analyze individual parameters. The greater the value of the aggregate score, the higher the relative probability of receiving an outstanding rating.
The overall significance of past performance rating was tested by the same difference of likelihood ratios technique described previously in the discussion of multicollinearity. The likelihood ratio difference in 1986 was insignificant at the .05 level. For 1987, the difference was significant at the .005 level. These tests indicate that the past performance appraisal control was more strongly related to performance appraisal in 1987 than in 1986.
Goodness of Fit

The quality of the logistic regressions can be assessed by comparing predicted with observed ratings for each employee. For the 1986 performance ratings, the observed and the predicted appraisal ratings corresponded in 66 percent of all cases. The proportion of 1987 appraisal ratings correctly predicted was 63 percent. These results indicate that a prediction of an employee's appraisal rating using the employee's scores on the reward, experience, demographic, and control variables would have been correct 66 percent of the time for 1986 and 63 percent for 1987. In light of the lack of significance of the reward and experience variables, the high percentage of correct predictions in 1987 indicates the importance of the past performance rating in the prediction equation.
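The percent-correct figures can be reproduced mechanically once predicted category probabilities are in hand. The sketch below is an assumed reconstruction, since the article does not describe the exact procedure: each employee's predicted rating is taken to be the most probable category, and the share of matches with the observed rating is reported. All data shown are hypothetical.

```python
# Minimal sketch (assumed, not the authors' SAS code) of the percent-correct measure:
# predict each employee's rating as the modal (most probable) category and compare
# it with the observed rating.
def percent_correct(predicted_probs, observed):
    """predicted_probs: list of dicts mapping rating -> predicted probability;
    observed: list of observed rating labels in the same order."""
    hits = 0
    for probs, actual in zip(predicted_probs, observed):
        predicted = max(probs, key=probs.get)  # category with the highest probability
        hits += predicted == actual
    return 100.0 * hits / len(observed)

# Hypothetical illustration with three employees:
probs = [
    {"fully successful": 0.1, "highly successful": 0.3, "outstanding": 0.6},
    {"fully successful": 0.2, "highly successful": 0.5, "outstanding": 0.3},
    {"fully successful": 0.4, "highly successful": 0.4, "outstanding": 0.2},
]
obs = ["outstanding", "outstanding", "fully successful"]
print(round(percent_correct(probs, obs), 1))  # 66.7: two of the three made-up cases match
```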
Discussion

The descriptive statistics suggest several possible threats to the theoretical logic that underlies PMRS. First, if the performance appraisal system is unable to discriminate effectively among different levels of performance, then the link between high performance and monetary rewards is weakened. The results indicate that PMRS merit increases have become automatic, meaning that most employees receive them much like the pre-CSRA system for salary adjustments. The tendency to rate large numbers of employees above "fully successful" assures that many employees receive performance awards as well. Although the ratings may be completely accurate, they do not readily contribute to differentiating rewards according to performance. Second, the opportunities to achieve large increases in pay under PMRS still appear to be quite small. The results indicate that most GSA managers in the sample received increases of only a few percent.

The highly concentrated distribution of performance ratings may, in theory, diminish the potential motivational power of merit pay, but it is important to recognize that performance ratings affect more than just monetary awards. Appraisal ratings affect an employee's understanding of the job by providing feedback about performance, self-image, organizational commitment, and trust. Although normally-distributed performance ratings may be desirable to meet the objectives of the compensation system, they could undermine other aspects of employees' organizational attachments, producing an overall negative effect on motivation. Evidence from several sources suggests that tradeoffs exist between the compensation objectives of performance ratings and employee self-image,7 performance feedback,8 and organizational commitment.9
Recent oversight reports indicate that "unrealistically high" and "inflated" performance ratings are a problem in PMRS.10 However, in general, employees covered by PMRS do not perceive that ratings are inflated, and they believe that the job elements and standards used to appraise performance are fair and accurate.11 Thus, any effort to drive down ratings is likely to undercut the present level of support for PMRS and to create morale and productivity problems that will overshadow any performance gains from improving the pay-performance contingency.

Despite the skew in the ratings distribution, the logistic regressions indicated that PMRS had a positive effect on performance in 1986. Two monetary reward variables (performance awards and percentage salary increase) were important predictors of 1986 rated performance. However, none of the reward variables were significant predictors of 1987 rated performance, suggesting that the effectiveness of PMRS declined from 1986 to 1987.

What could account for the decline in the significance of the reward variables over time? At least three explanations seem plausible. First, the changes from 1986 to 1987 could reflect learning that is occurring in the system. For instance, over time, employees learn how rewards are allocated contingent upon their behavior and adjust their behavior to these contingencies. The widespread distribution of merit increases and performance awards could communicate to employees that financial rewards are readily available and do not require extraordinary effort. The significance of the reward variables in the 1986 equations but not the 1987 equations could be a reflection of this learning.

A second explanation involves the design of PMRS merit increases. The size of merit increases depends on two factors: employees' annual performance ratings and their positions in the salary ranges for their grades. The higher an employee's performance rating, the more rapidly the employee progresses in the salary range. However, employees who are near the maximum rates for their grades are only eligible to receive increases equivalent to the differences between their current salaries and the maximum rates. Employees who have reached the maximum rates in the salary ranges can receive no merit adjustments, regardless of their performance ratings. Over time, increasing numbers of employees are likely to reach the salary ceilings for their grades, resulting in decreasing returns for their work effort. The change in the results from 1986 to 1987 could reflect this maturation in employee salary levels, especially given the high ratings over the period studied.

A third explanation is that the validity of the appraisal process may be declining over time. As performance appraisals become routinized, they may become more pro forma and, therefore, less accurate as assessments of employee behavior. Hall and Lawler have termed this phenomenon the "vanishing performance appraisal," reflecting the decreasing attentiveness of organization members to performance appraisals over time.12
The regressions showed that organizational affiliation is associated with the potential performance rating an employee might receive. For example, during the period studied, managers in the Office of Administration and Federal Supply Service were likely to receive significantly better ratings than their counterparts in the Public Buildings Service and Office of Inspector General. The association between organizational affiliation and performance ratings may simply reflect variations in managerial strategies for implementing PMRS in different organizational subunits. Adapting merit pay strategy to subunit requirements is a legitimate managerial decision, particularly if the strategy is appropriate and results in improved unit performance.

However, variations across subunits can produce serious inequities in the amount of merit increases going to employees of similar rank and ability, and they may affect employees' chances for performance bonuses. These inequities may, in turn, influence organizational performance. Intraorganizational movement may be increased as employees transfer laterally to other units where they believe the possibilities for rewards are increased; others may leave the organization entirely. Such movements, over the long run, may be detrimental to organizational performance.
Conclusion

This analysis has produced two major findings about PMRS. First, performance appraisals of the sample of GSA managers studied in this research have been concentrated at the "highly successful" and "outstanding" end of the ratings continuum, possibly creating problems for the system to generate the proper contingencies between employee performance and rewards. Although this finding is based on the study of a single agency, it is consistent with the findings of several oversight groups.13 Second, the ability of financial rewards to predict performance was significantly poorer in 1987 than in 1986. Among the factors that might account for the decline in the effectiveness of PMRS over time are organizational learning, design factors associated with limits imposed by salary ranges, and declining validity of performance appraisals. What influence these factors have had on PMRS should be the focus for further research designed specifically to assess their effects.

Because high ratings threaten the contingent relationship between pay and performance and may have been a source for the decline in PMRS's effectiveness in 1987, one strategy that federal personnel managers and policy makers could consider would be to reduce the levels of performance ratings. This strategy is not likely to succeed, however, for at least two reasons. First, as noted earlier, any effort to drive down ratings is likely to create morale and productivity problems that will overshadow any performance gains from improving the pay-performance contingency. Second, in light of the large lag of federal salaries behind private salaries, a reduction in ratings and its attendant consequences for pay are likely to be perceived by many employees as unjust and punitive.14 If the effectiveness of PMRS has indeed declined, and this conclusion deserves further study, more radical strategies, including de-emphasizing the importance of contingent pay, seem to be appropriate.

Although PMRS has engendered considerably less hostility among federal managers than its predecessor, PMRS has not demonstrably achieved its ultimate objective: improving performance in the federal sector. This study has looked at the performance effects of employee rewards, experience, and demographics. The study was limited, however, in that the measure of performance was supervisory performance ratings rather than actual individual or system performance. To date, only one other assessment of federal merit pay has analyzed the performance consequences of individually-contingent managerial pay.15 Future assessments of PMRS and similar public sector systems need to go beyond compliance and input evaluations and focus on the basic question of their effects on performance.

James L. Perry is Professor in the School of Public and Environmental Affairs, Indiana University, Bloomington. His research interests involve public management and public personnel. He is editor of the Handbook of Public Administration (Jossey-Bass, 1989).

Beth Ann Petrakis is a doctoral student in the joint program in public policy, Indiana University, Bloomington. Her research focuses on human resource management.

Theodore K. Miller is Associate Professor in the School of Public and Environmental Affairs, Indiana University, Bloomington. His research interests are in the field of fluvial geomorphology, and he has regularly contributed to the School's teaching program in quantitative analysis.
Note

The authors acknowledge the assistance of Thomas Cowley, Classification and Pay Policy Division, U.S. General Services Administration, for his help in obtaining the data used in this study. We are grateful to the consulting staff of Bloomington Academic Computing Services, Indiana University, for assistance. We appreciate the comments of Carolyn Ban, Tom Cowley, Gerald Gabris, Rosslyn Kleeman, Eugene McGregor, and Dennis Smith on an earlier version of this manuscript.
1. For assessments of the original federal merit pay system, see Jone L. Pearce and James L. Perry, "Federal Merit Pay: A Longitudinal Analysis," Public Administration Review, vol. 43 (May/June 1983), pp. 315-325; Karen H. Gaertner and Gregory H. Gaertner, "Performance Contingent Pay for Federal Managers," Administration and Society, vol. 17 (May 1985), pp. 7-20; Jone L. Pearce, William B. Stevenson, and James L. Perry, "Managerial Compensation Based on Organizational Performance: A Time Series Analysis of the Impact of Merit Pay," Academy of Management Journal, vol. 28 (June 1985), pp. 261-278; and United States General Accounting Office, A 2-Year Appraisal of Merit Pay in Three Agencies, GGD-84-1 (Washington: U.S. Government Printing Office, 1984). For a review of how merit pay has fared in the public sector generally, see James L. Perry, "Merit Pay in the Public Sector: The Case for a Failure of Theory," Review of Public Personnel Administration, vol. 7 (Fall 1986), pp. 57-69.
2. For discussions of expectancy theory, see, among others, Victor H. Vroom, Work and Motivation (New York: John Wiley, 1964) and Edward E. Lawler III, Pay and Organization Development (Reading, MA: Addison-Wesley, 1981).
3. United States Congress, House of Representatives, Civil Service Amendments of 1984 and Merit Pay Improvement Act, Hearing before the Subcommittee on Compensation and Employee Benefits, Committee on Post Office and Civil Service, 98th Congress, 2nd Session, 1984, p. 2.
4. SAS Institute, SAS User's Guide: Statistics (Cary, NC: SAS Institute, Inc., 1985), pp. 171-253 at p. 191.
5. Yvonne M. M. Bishop, Stephen E. Fienberg, and Paul W. Holland, Discrete Multivariate Analysis: Theory and Practice (Cambridge, MA: The MIT Press, 1975) and SAS Institute, SAS User's Guide: Statistics, p. 173.
6. Although most employees qualified for merit increases, the actual size of the salary adjustment varied according to an employee's performance rating and position in the pay range. Employees whose performance qualifies them for a merit increase but who are at the maximum pay for their grade receive no actual merit adjustment.
7. Herbert H. Meyer, "The Pay for Performance Dilemma," Organizational Dynamics, vol. 3 (Winter 1975), pp. 39-50.
8. H. H. Meyer, E. Kay, and J. R. P. French, Jr., "Split Roles in Performance Appraisal," Harvard Business Review, vol. 43 (January/February 1965), pp. 123-129.
9. Jone L. Pearce and Lyman W. Porter, "Employee Responses to Formal Performance Appraisal Feedback," Journal of Applied Psychology, vol. 71 (May 1986), pp. 211-218.
10. U.S. Office of Personnel Management, Performance Management and Recognition System (Washington: U.S. Office of Personnel Management, July 1987); U.S. Office of Personnel Management, Performance Management and Recognition System: FY 1986 Performance Cycle (Washington: U.S. Office of Personnel Management, March 1988); United States General Accounting Office, Pay for Performance: Implementation of the Performance Management and Recognition System, GGD-87-28 (Washington: U.S. Government Printing Office, January 1987); U.S. Merit Systems Protection Board, Performance Management and Recognition System: Linking Pay to Performance (Washington: U.S. Merit Systems Protection Board, December 1987).
11. Ibid., pp. 8-11. The Merit Systems Protection Board's (MSPB) 1986 Merit Principles Survey contained three items asking for employee perceptions about ratings inflation and satisfaction with performance standards: "My supervisor tends to inflate performance ratings"--strongly agree or agree, 9.5%; disagree or strongly disagree, 57.1%. "The standards used to evaluate my performance are fair"--strongly agree or agree, 59.7%; disagree or strongly disagree, 20.1%. "To what extent are the job elements in your performance standards an accurate statement of work you are expected to perform in your job?"--very great, considerable, or some extent, 84.7%; little or no extent, 13.9%.
12. Douglas T. Hall and Edward E. Lawler III, "Unused Potential in Research and Development Organizations," Research Management, vol. 12 (September 1969), pp. 339-354.
13. U.S. Office of Personnel Management, Performance Management and Recognition System; U.S. Office of Personnel Management, Performance Management and Recognition System: FY 1986 Performance Cycle; United States General Accounting Office, Pay for Performance: Implementation of the Performance Management and Recognition System; U.S. Merit Systems Protection Board, Performance Management and Recognition System: Linking Pay to Performance.
14. In 1987, the difference between federal and private pay averaged 23.74 percent, reaching as high as 28.86 percent at Grade 15. See Advisory Committee on Federal Pay, Report on the Fiscal Year 1988 Pay Adjustment Under the Federal Statutory Pay Systems (Washington: Advisory Committee on Federal Pay, August 19, 1987). For a general discussion of the current status of the federal civil service, including compensation issues, see Charles H. Levine, "Human Resource Erosion and the Uncertain Future of the U.S. Civil Service: From Policy Gridlock to Structural Fragmentation," Governance, vol. 1 (April 1988), pp. 115-143.
15. Jone L. Pearce, William B. Stevenson, and James L. Perry, "Managerial Compensation Based on Organizational Performance: A Time-Series Analysis of the Impact of Merit Pay."
Appendix
Definition and Operationalization of Variables
Education: Highest education level achieved, measured on a five-level ordinal scale:
1. High school or GED
2. Some college
3. Received bachelors degree
4. Post BA/1st professional degree
5. Masters degree or PhD

Recognition Award: Indicates whether an employee received some type of recognition award, such as a cash award or special achievement award. If an employee received a recognition award in 1985, then recognition was coded 1; if no award was received, the variable was coded 0.

Grade: General Schedule grade level 13, 14, or 15.

Merit Increase: Indicates whether an employee received a merit increase as defined by the Performance Management and Recognition System. If an employee received a merit increase in 1985, then merit increase was coded 1; if no merit increase was received, the variable was coded 0.

Percentage Salary Increase: Percentage annual increase in salary. This variable was calculated by dividing an employee's total incentive award for a year by the total salary and then multiplying by 100.
Appendix (cont.)

Occupational Classification: The occupational classification of an employee's job.
1. Personnel and Social Science: GS-200
2. General Administrative and Clerical: GS-300
3. Accounting and Budgeting: GS-500
4. Engineering and Architecture: GS-800
5. Legal: GS-900
6. Business and Industry: GS-1100
7. Supply, Equipment, Facilities: GS-2000
8. Transportation: GS-2100

Performance Rating: The supervisory rating of an employee's performance. The agency's five-level rating system was used as the measure. The five levels are: unacceptable, minimally acceptable, fully successful, highly successful, and outstanding.

Organizational Affiliation: The major organizational unit or bureau for which a GSA employee works. Dummy variables were created for nine units within GSA.
1. Office of Administration
2. Federal Supply Service
3. Office of General Counsel
4. Public Buildings Service
5. Information Resources Management Service (1986)
6. Office of Inspector General
7. Federal Property Resources
8. Information Resources Management (1985 only)

Headquarters/Field: A dichotomous variable indicating whether an employee was located in the Washington, DC headquarters or in a regional office.

Years in Grade and Years in Present Position: Two variables measuring the time since an employee acquired his/her current grade and position.