Health Services Management Research 11, 3-23 © Health Services Management Centre 1998
A review of organizational performance assessment in health care

S. G. Leggat*, L. Narine*, L. Lemieux-Charles*, J. Barnsley*, G. R. Baker*, C. Sicotte†, F. Champagne† and H. Bilodeau†

*Hospital Management Research Unit, Department of Health Administration, University of Toronto, Toronto and †GRIS, Université de Montréal, Montréal, Canada
As health care organizations look for ways to ensure cost-effective, high quality service delivery while still meeting patient needs, organizational performance assessment (OPA) is useful in focusing improvement efforts. In addition, organizational performance assessment is essential for ongoing management decision-making, operational effectiveness and strategy formulation. In this paper, the roles and impact of OPA models in use in health care are reviewed, and areas of potential abuse, such as myopia, tunnel vision and gaming, are identified. The review shows that most existing OPA models were developed primarily as sources of information for purchasers or consumers, or to enable providers to identify areas for improvement. However, there was little conclusive evidence evaluating their impact. This review of existing OPA models enabled the establishment of principles for the development, implementation and prevention of abuse of OPA specific to health care. The OPA models currently in use in health care may provide managers with false confidence in their ability to monitor organizational performance. To further enhance the field of OPA, areas for future research are identified.
Sandra G. Leggat, Lutchmie Narine, Louise Lemieux-Charles, Jan Barnsley, G. Ross Baker, Department of Health Administration, University of Toronto, 12 Queen's Park Crescent West, Toronto, ON Canada M5S 1A8 and Claude Sicotte, François Champagne, Henriette Bilodeau, GRIS, Université de Montréal, Montréal, Canada. Correspondence to SGL.

Introduction

The assessment of organizational performance is essential for management decision-making, operational effectiveness, and strategy formulation. As health care organizations look for ways to ensure effective, high quality service delivery, within the constraints of cost effectiveness, while still meeting patient needs, performance measures are required to focus any improvement efforts. Development and use of a performance assessment framework has been suggested as one way for health care providers to evaluate and continuously improve their performance, identify unmet health needs, achieve greater accountability, and mobilize resources for improvement (Flood et al., 1994; Luttman et al., 1994). The literature contains many normative and prescriptive articles, outlining the components of various
organizational effectiveness models. However, the point has also been made that effectiveness is an organization- and situation-specific construct (Child, 1974; Cameron, 1980; Van Peursem et al., 1995; Leggat and Leatt, 1997). To date, a single organizational performance assessment (OPA) model for the health care field has not been established, tested, or adopted. Assessment of performance in health care organizations is difficult because health care processes result in social outcomes that cannot be measured precisely (Van Peursem et al., 1995). The goal of this paper is to advance the development of an OPA model appropriate to the unique needs of health care organizations. Specific objectives are to: review the current uses of OPA in health care; present documented impact of existing OPA models in health care; outline the requirements of OPA suggested by researchers within the field; propose principles for the development, implementation, and prevention of abuse of OPA specific to health care; and identify future areas of research in this field.
Performance indicators

A performance indicator is defined as 'an observation expected to indicate a certain aspect of performance' (Kazandjian et al., 1993, p. 26). The Canadian Council on Health Services Accreditation (CCHSA, 1996) defines a performance indicator for health care as 'A measurement tool, screen or flag that is used to monitor, evaluate and improve the quality of client care, clinical support services and organizational functions that affect client outcomes' (p. 7). Performance indicators can yield data that are either numerical or qualitative. They are evaluative (their purpose is to assess or judge), they are results-oriented (they measure progress towards a set goal) and they include a reference point so that current achievement can be compared to earlier performance or to another standard (Berkowitz, 1995).

Indicators can report on performance from two perspectives. The first focuses on dimensions of performance over which the organization is thought to have direct impact, such as waiting times. The second perspective is broader, relating to population characteristics, such as population health status, which could be affected by performance of health care providers, but which may also be influenced by factors outside the control of the health care system. For example, some measures, such as the percentage of women over the age of 50 who have had mammograms or the percentage of children who are immunized, may be as much a matter of personal choice behavior and the general social milieu as they are related to the performance of a health care system. As other writers have noted, there is a need to distinguish between measures that are management driven and those that are not (Van Peursem et al., 1995).
OPA models

Organizational performance has long been recognized as a multidimensional construct (Venkatraman and Ramanujan, 1986), requiring a number of indicators to enable a full assessment. While health care organizations have been defining and measuring performance indicators for a number of years, there is little consensus on which set of performance indicators provides the best information on organizational performance. There is a need, therefore, to step back from the individual indicators and reflect on the appropriateness of different combinations of performance indicators. An organizational performance assessment model is an integrated framework used to establish a set of performance indicators relevant to the assessment of performance of an identified organization. While an OPA model is organization-specific, many aspects of the OPA can be shared across organizations to enable comparisons. Organizational performance frameworks and models respond to the many dimensions of performance by incorporating a variety of performance indicators. When these data are collected consistently over time, they can provide a long-term perspective on organizational performance.

Analysis of the performance data resulting from use of an OPA model can be done in a variety of ways. In the simplest and most common approach, indicator data are compared over time or among combinations of organizations. Ehreth (1994) categorized hospital performance measures into two analytical approaches: (1) simple ratios of data; and (2) measures derived from data envelope analysis (DEA). DEA is a linear programming technique used to evaluate the relative technical efficiency of organizations in relation to the most efficient production frontier (Rouse, 1995). DEA is useful for comparing large numbers of organizations and the methodology can incorporate a variety of performance indicators in the analysis (Van Peursem et al., 1995).
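As a concrete illustration of the DEA technique described above, the following sketch solves the input-oriented constant-returns (CCR) linear program for a handful of hypothetical hospitals. The data, the choice of inputs and outputs, and all variable names are assumptions made for illustration only; none of the studies cited here used this particular formulation or these data.

```python
# A minimal sketch of input-oriented DEA (CCR model) using linear programming.
# Hypothetical data; not drawn from any model reviewed in this paper.
import numpy as np
from scipy.optimize import linprog

inputs = np.array([[120.0, 30.0],    # e.g. staffed beds, nursing FTEs
                   [150.0, 45.0],
                   [100.0, 25.0],
                   [130.0, 40.0]])
outputs = np.array([[5000.0, 900.0],  # e.g. admissions, day surgeries
                    [5500.0, 950.0],
                    [4800.0, 880.0],
                    [5200.0, 700.0]])

n, m = inputs.shape          # number of hospitals, number of inputs
s = outputs.shape[1]         # number of outputs

def ccr_efficiency(o):
    """Technical efficiency of hospital o relative to the best-practice frontier."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_io
    for i in range(m):
        A_ub.append(np.concatenate(([-inputs[o, i]], inputs[:, i])))
        b_ub.append(0.0)
    # Output constraints: sum_j lambda_j * y_rj >= y_ro
    for r in range(s):
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[o, r])
    bounds = [(0, None)] * (n + 1)   # theta and all lambdas non-negative
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[0]

for o in range(n):
    print(f"Hospital {o + 1}: efficiency = {ccr_efficiency(o):.3f}")
```

Each score is the proportion to which a hospital could, in principle, shrink its inputs while still producing its observed outputs, relative to the best-practice frontier formed by its peers; a score of 1.0 places the hospital on the frontier.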
Another approach, multiple regression, provides a technique for establishing correlations among dependent and independent variables which has also been found to be useful for the analysis of performance data (Van Peursem et al., 1995). However, regression techniques in OPA have significant limitations. The variables are traditionally limited to ratio form (Van Peursem et al., 1995), an average relationship is identified that is not necessarily appropriate for all organizations (Sherman, 1984), and inappropriate causal relationships may be inferred (Sharp, 1992).
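A minimal sketch of the regression approach, again with hypothetical data and variable names, estimates the average relationship between case mix and cost per case and then examines each hospital's residual. As the limitations above suggest, a large residual is a prompt for further investigation rather than evidence of inefficiency or a causal claim.

```python
# A hedged illustration of regression-based performance comparison.
# All figures and names are hypothetical, not taken from any model reviewed here.
import numpy as np

case_mix_index = np.array([0.90, 1.00, 1.10, 0.95, 1.20, 1.05, 0.85, 1.15, 1.00, 0.98])
cost_per_case  = np.array([4.1, 4.5, 5.2, 4.3, 5.8, 4.9, 3.9, 5.5, 4.6, 4.4])  # $000s

# Ordinary least squares fit of the *average* relationship across hospitals.
X = np.column_stack([np.ones_like(case_mix_index), case_mix_index])
beta, *_ = np.linalg.lstsq(X, cost_per_case, rcond=None)

# Residuals: cost above (+) or below (-) the average relationship. Sherman (1984)
# cautions that the average line need not suit every organization, so residuals
# flag cases for investigation rather than measure inefficiency.
residuals = cost_per_case - X @ beta
for i, r in enumerate(residuals, start=1):
    print(f"Hospital {i:2d}: residual cost per case = {r:+.2f}")
```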
The uses of OPA in health care

Organizational performance information in health care can be used for many purposes, for example, to improve organizational effectiveness, ensure accountability, monitor management, and foster collaboration.

Improving operational effectiveness

The Table presents the OPA models reviewed in the development of this paper. These models were found to provide information that enabled analysis and often resulted in improvement in care delivery. OPA can be used to clarify and communicate organizational goals and priorities to managers and employees. In the case of the Cleveland Health Quality Choice Program, the publicly available performance scores were used by individual hospitals to mobilize staff and resources to improve internal work processes (Rosenthal and Harper, 1994). The Redcliffe Hospital's performance indicators were developed in part to set targets for productivity bargaining agreements between staff and management (Warrian, 1995).

Performance indicators can also assist in improving operational effectiveness through benchmarking. Benchmarking of performance indicators enables identification of providers that consistently produce the best results over extended periods of time. Both process and outcome benchmarking are required to identify strengths and weaknesses and implement improvements. Benchmarking can help organizations or systems to become more productive by focusing on the best practices and identifying issues that require further attention (Baker and Pink, 1995; Nelson et al., 1995). Internal benchmarking focuses on best practices within an organization, competitive benchmarking enables organizations to learn from the best others in the industry, and functional benchmarking encourages organizations to look outside their industry for effective practices (Mosel and Gift, 1994). Performance indicators can also be used as the basis for the development of performance standards, practice guidelines and clinical pathways. For example, findings from the Minnesota Clinical Comparison and Assessment Project were used to develop practice guidelines and clinical pathways for selected patient care services (Borbas et al., 1995).

Table. Organization performance models (levels of assessment include program, hospital, long-term care and system)

Australia: Australian Council on Health Care Standards (Collopy and Balding, 1993)

Canada: Canadian Comprehensive Auditing Foundation (CCAF, 1987); Capital Health Authority Performance Report (Capital Health Authority, 1995); Hypothetical Hospital Balanced Scorecard (Baker and Pink, 1995); Toronto Academic Health Science Council Performance Scorecard (TAHSC, 1996); Vancouver Hospital & Health Sciences Centre (1995); Women's College Hospital (1995)

New Zealand: Redcliffe Hospital (Warrian, 1995)

United Kingdom: National Health Service (NHS Patient's Charter Unit, 1995)

United States: American Group Practice Association Outcomes Measurement Consortia (Kania et al., 1996); Medicare HMO/CMP External Review (Delmarva Foundation for Medical Care, Inc., 1994); BENCHmark (Porter, 1995); California Hospital Outcomes Project (Romano et al., 1995); Cleveland Health Quality Choice (Rosenthal and Harper, 1994; CHQCP, 1995); Hypothetical Governing Body Quarterly Report (Luttman, Siren and Laffel, 1994); Joint Commission's Indicator Measurement System (Nadzam et al., 1993); Illinois Hospital (Counte, Glandon and Holloman, 1988); Maryland Hospital Association (Kazandjian et al., 1993); Maryland Quality Indicator Project (Scheiderer, 1995); Minnesota Clinical Comparison & Assessment Program (Borbas, McLaughlin and Schultz, 1995); Monmouth Medical Centre (Laffel, Thompson and Sparer, 1995); QUIIX-Ed (Fitzgerald, Shiverick and Zimmerman, 1996); CRISP (Bergman, 1994; Nerenz, Zajac and Rosman, 1993); Dartmouth-Hitchcock (Nelson et al., 1995); HEDIS (CCHRI, 1995; Corrigan and Nielsen, 1993; Gordan, Straus and Bailit, 1995); Henry Ford Health System (Sahney, 1995); Hypothetical (Ellencweig, 1992); Kaiser Permanente (1995); Massachusetts Health Care Purchaser Group (Gordan, Straus and Bailit, 1995); Pennsylvania Health Care Cost Containment Council (Localio et al., 1997)

Ensuring accountability

Recently, interest has been expressed in the role of performance assessment in ensuring public accountability of health care providers (Epstein and Kurtzig, 1994; Jencks, 1994; Panzer, 1994). Thirty-eight of the US states have mandated the collection, analysis and distribution of data on hospital use, effectiveness, and performance (Epstein and Kurtzig, 1994). The CCHSA has initiated a process to develop consistent performance indicators for accreditation purposes. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) has mandated hospitals to adopt an outcome measurement system and has gone as far as to certify a number of vendors of such systems (JCAHO, 1994). The Foundation for Accountability (FACCT) in Oregon has made specific recommendations for health care plan and provider performance evaluation (Allen and Rogers, 1996). The British National Health Service (NHS) indicator system is officially mandated to assess the extent to which the performance goals of the NHS Patient's Charter are met. The results are freely available and the NHS encourages consumers and their family doctors to use them to make decisions about where to get the best service.

Purchasers, funders, and consumers of care can use performance reports to identify service providers with the best clinical, functional and satisfaction outcomes, at the lowest cost. For example, the performance data generated by the Cleveland Health Quality Choice Program were used by the Cleveland business coalition to identify service providers to receive the business purchasing group's request for proposal. The Under the Microscope project in the US was developed to provide feedback on the performance of a home care service provider to account to the current and potential client referral sources (Miller and Lazar, 1995).

Gamm (1996) provided comment on four types of accountability: political; commercial; clinical or patient; and community. Political accountability refers to the response of the organization to the externally imposed mandates and boundaries of the health care organization, and can generally be assessed through the regular reporting required by funding agencies. Commercial accountability focuses on creating service value. Clinical or patient accountability is closely related to commercial accountability, in that it is concerned with the intrinsic value of the services provided to the clients or
patients. Assessment of commercial and clinical accountability requires information on both the inputs to the health care process and the outcomes achieved. Community accountability represents the public trust, reflecting the interests of the population served by the organization. The advent of integrated health delivery systems has increased the focus on the latter three types of accountability. This need for greater accountability to a wider group of stakeholders requires OPA models to take a broader community perspective in the assessment of performance.

The assessment of how well an organization is meeting the requirements for commercial, patient, and community accountability requires the collection and analysis of consumer-focused information, in addition to the internal financial and business process performance indicators. In the US, information on consumer satisfaction and consumer opinion has been mandated as a component of performance assessment by the National Committee for Quality Assurance (NCQA). The Agency for Health Care Policy and Research has funded a project to develop a standard instrument for measuring consumer opinion on ambulatory care. A recent study by Hibbard and Jewett (1996) found that consumers were very interested in having information to enable evaluation of the performance of health care plans, supporting the increasing focus on accountability to the consumer. Van Peursem et al. (1995) suggested that disclosure of performance measures would lead to increased participation in defining, discussing and further developing OPA by marginalized populations. Edgman-Levitan and Cleary (1996) further clarified the type of information that consumers required. They found that consumers wanted unbiased expert opinion to assist them in judging the performance of health care providers, but that they also wanted to be apprised of the opinions of other consumers. The need to respond to accountability requirements continues to shape the content and format of OPA.

The use of OPA to assess accountability requirements can impact the structure of the assessment model. While ranking of health care organizations may be sufficient for identifying areas for improvement, Magnussen (1996) has suggested that cardinal measures of efficiency are required when there is potential for redistribution of resources based on the performance assessment.
Monitoring management

Related to the broader accountability requirements, OPA can be used to monitor and assess the performance of management staff. The Canadian Comprehensive Auditing Foundation (CCAF, 1987) suggested the use of management representations on: direction, relevance, appropriateness, achievement of intended results, acceptance, secondary impacts, costs and productivity, responsiveness, financial results, working environment, protection of assets, and monitoring and reporting. These performance indicators would then be subject to auditors' opinions about their fairness to provide information to governing bodies about management performance.

An underlying principle of the health care reforms in the UK was to focus on the performance of the system managers. Performance indicators, such as cost per case, average staff cost per whole time equivalent staff, ambulance service costs per 1000 population served and energy usage per 100 cubic metres of estate, are used to monitor management performance.

Fostering collaboration

Perhaps the most significant impact of OPA models may be their ability to facilitate cooperation among institutions and interest groups with competing interests. Many providers participating in the development of OPA models compete with one another for market share or for funding. However, to address the need for performance improvements and accountability to consumers and funders, they join together to undertake projects that are usually beyond the capability or resources of any one provider. To obtain comparative data and benchmarks, individual providers need the cooperation of their peers. Such collaboration in turn can lead to standardized data collection methods among providers and greater interaction between providers and the research community. For example, the Minnesota Clinical Comparison and Assessment Project consists of a consortium of the Health Care Education and Research Foundation, a non-profit applied research institution, and 53 Minnesota hospitals. These hospitals represent 60% of the licensed beds in the state of Minnesota (Borbas et al., 1995). Similarly, since its inception in 1988, the Cleveland Health Quality Choice Program has grown to be a consortium consisting of more than 13 000 corporations and small businesses,
2900 physicians, and 50 hospitals in a four-county area surrounding Cleveland, Ohio (Harper, 1995). In Toronto, Canada, 11 teaching hospitals share performance data through the Toronto Academic Health Science Council. An important consequence of the growing focus on performance assessment in health care has been the standardization of information and the establishment of a common language for comparison.
The impact of OPA in health care

A strength of many existing OPA models is the ability to identify correlations between the indicator rates and the processes of care delivery. Kazandjian et al. (1993), through the Maryland Hospitals Quality Indicator Project, demonstrated that hospitals in different states were able to use the indicator information to change care processes, resulting in subsequent improvements in the indicator rates. The QUIIX-Ed Project (Fitzgerald et al., 1996) also demonstrated that the quality indicators focused effort on systematic performance improvement.

A number of models have been credited with improving performance. The Cleveland Health Quality Choice Program reported up to 19 million dollars in savings from CQI efforts among member hospitals, arising from the publication of their performance scores (Harper, 1995). The Minnesota Clinical Comparison and Assessment Project used input from performance scores to develop guidelines, and found an overall compliance rate with their elective cholecystectomy guidelines of 88%. The New York State Department of Health's performance assessment program for coronary artery bypass graft surgery led to a decline in the risk-adjusted mortality of more than 40% (DeBuono, 1995). The Children's Hospital in Columbus, Ohio, credited participation in the BENCHmark project as the key factor in the reduction of expenses and the improvement of quality and services (Porter, 1995).

Initially, the NHS performance assessment initiative had little impact, largely because the results were contained within a series of incomprehensible written reports (Smith, 1994). Subsequent releases distributed performance indicators in machine readable form suitable for use on most personal computers along with software that facilitated analysis of the data. Currently, as Smith (1994) reported, 'performance indicators have become an integral part
of the system of performance review within the NHS ... There is increasing evidence that the NHS executive is being held to account by Parliament on the basis of PIs' (p. 150).

Among the OPA models reviewed for the development of this paper (see Table), the most common uses of performance information were to provide information for purchasers, and for benchmarking among organizations. Most of the performance models examined were in the early stages of development, with no hard evidence as to their long-term impact. Indeed, speaking of Monmouth Medical Center, Laffel et al. (1995) noted: 'we have no proof that report cards have a bearing on market performance or operational performance of the health care organizations using them ... it is too early to tell whether our abilities to grow as a system will be enhanced by our report card exercise' (p. 70).
Potential weaknesses of OPA

There are three major weaknesses associated with existing OPA models. The first relates to the content and quality of the data underlying the indicators used within the models and the interpretation of these data. The second weakness is the potential for abuse of the OPA models through misinterpretation or manipulation of performance data. Thirdly, there is the lack of a comprehensive model that addresses all the important aspects of organizational performance.

Content and quality of the underlying data

In many cases it is not clearly demonstrated that variation in the performance indicators is a result of true performance differences rather than simply a reflection of the content or quality of the data. In a review of mortality indicators, Localio et al. (1997) suggested that variations in the indicators could have resulted from many things, including random fluctuations, differences in patient severity, personal preferences of patients and surgeons, and true differences in mortality. OPA models may not provide the information necessary to sort out the causes of reported variances. Existing administrative databases are often incomplete and do not contain the information required to analyze these variances (Jencks, 1992; Iezzoni et al., 1992). Luft and Hunt (1986) cautioned users on the interpretation of performance indicators derived from low-volume data. Palmer (1996) suggested
that problems arise when users interpret a change in an indicator as a true change in performance and act upon this information, when the observed change actually falls within the range of acceptable measurement error in the instrument. Barnsley et al. (1996) suggested that indicator imprecision could result from existing inadequacies in data collection. Van Peursem et al. (1995) stressed that performance indicators cannot be expected to stand alone as information sources. The purpose of performance indicators is to focus attention towards issues of interest (Culyer, 1983) and it is in this spirit that OPA models should be developed and used.

The need to adjust the indicators for variations in case mix and severity has been well documented (Barnsley et al., 1996). Jencks et al. (1988) found that clinically-based severity adjustment explained 20% of the variance in patient-level mortality. Recognition of these differences among organizations may contribute to increased confidence in the results of OPA. Jollis et al. (1993) found that patient mix differed substantially across hospitals, yet existing administrative databases do not always contain the data necessary to enable appropriate adjustment of the data (Localio et al., 1997). However, in a study using DEA analysis of intermediate hospital outputs (e.g. admissions, inpatient days), Grosskopf and Valdmanis (1993) concluded that case mix adjustments did not significantly alter the performance results.

Attention to the timing of measurement is also important and requires the definition of the episode of care, the content, processes and expected outcomes. Hospital characteristics and patient characteristics, such as patients who do not comply with treatment, who are self-destructive, or who do not respond to encouragement, may skew the indicator data and may need to be recognized and investigated.
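To make the case-mix adjustment discussed above concrete, the sketch below applies simple indirect standardization: expected deaths are obtained by applying reference-population mortality rates to one hospital's own severity mix, and the observed-to-expected (O/E) ratio is then comparable across hospitals. The strata, rates and counts are hypothetical, and the clinically-based adjustment models cited above are considerably more elaborate.

```python
# A minimal sketch of indirect standardization for case-mix adjustment.
# All strata, rates and counts are hypothetical illustrations.

# Reference-population mortality rates by severity stratum.
reference_rate = {"low": 0.01, "medium": 0.04, "high": 0.15}

# One hospital's caseload and observed deaths by stratum.
cases    = {"low": 400, "medium": 250, "high": 80}
observed = {"low": 3,   "medium": 12,  "high": 14}

# Expected deaths: apply reference rates to this hospital's own case mix.
expected_deaths = sum(cases[s] * reference_rate[s] for s in cases)
observed_deaths = sum(observed.values())
oe_ratio = observed_deaths / expected_deaths

print(f"Expected deaths: {expected_deaths:.1f}")
print(f"Observed deaths: {observed_deaths}")
print(f"O/E ratio: {oe_ratio:.2f}  (>1 worse than expected, <1 better)")
```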
Potential for abuse of OPA

The abuse of OPA data can take many forms and can be either intentional or unintentional. The reports resulting from OPA may be misleading to many readers. For example, a consumer might think that because other consumers were satisfied with the care at a hospital, he or she would also be satisfied. However, because satisfaction is individually defined, this may not be the case. The Cleveland Health Quality Choice project recognized that performance
data that are not clinically or statistically significant may be interpreted as such by consumers (Rosenthal and Harper, 1994). For this reason, the Cleveland data are not presented in a format that encourages ranking and hospitals are only compared to the mean performance of all hospitals. In addition, the detailed reports are available only to subscribers who attend project-sponsored user training workshops.

The performance report may be used by funders and regulators to single out 'bad apples' (Nelson et al., 1995). The fear is that more emphasis will be paid on laying blame than on learning and correcting the causes of bad results. A related concern is that while performance reports might do a good job in highlighting areas in need of improvement, they are unable to show how to make those improvements. It is one thing to know that something is good or bad and quite another to know why it is good or bad. Additional information is required to understand the meaning of the performance indicators and to use them effectively to improve performance.

There is no guarantee that individual managers will take account of the impact of their actions beyond the areas of their direct responsibility as reflected in their performance targets. This lack of congruence between personal incentives and larger organizational or system objectives may give rise to managers pursuing their own narrow objectives at the expense of a coordinated organizational strategy. If this occurs, OPA systems in fact add unnecessary costs to the delivery of care.

Other potentially distorting effects of performance measures that can influence managerial behavior, and which may be termed abuse, include the phenomena of tunnel vision, myopia, convergence, and gaming (Smith, 1994). Tunnel vision refers to the tendency to focus on areas included in performance reports to the exclusion of other important areas. No set of indicators can comprehensively cover the entire domain of an organization's or system's activity, possibly leading to improvement efforts that only focus on areas for which there are OPA indicators. Even if a comprehensive package could be envisaged in the present, it might not be adequate in the future. Meyer (1994) stressed that when the organization changes, the measurement system needs to change to support the new organization. Hence, there is a need to continually review and revise the performance measurement models.
Myopia refers to the potential concentration on short-term criteria, to the exclusion of long-term issues which may only show up in performance indicators in many years' time. However, it is the nature of health programs that many of their activities yield returns only after many years, and those benefits are uncertain in both magnitude and timing. Therefore, the need to incorporate the long-term benefit of activities within performance measurement systems is particularly acute for health organizations and systems. Gibson et al. (1973) stressed the need for different criteria for evaluation of the short, intermediate, and long-term performance of an organization.

Convergence takes place when performance systems employ exception reporting, and it is in an organization's interest not to be exposed as an outlier. Given that objective predetermined benchmarks rarely exist against which performance can be measured, it is often the practice to focus on extreme values, whether good or bad. Hence in such systems, only if an organization is exposed as an outlier on some first-line indicator is it subjected to further scrutiny on other performance dimensions. This screening practice can encourage action to avoid extreme performance among organizations that wish to escape detailed scrutiny. Thus, as performance indicators take on prominence, one may see some convergence in reported numbers. This may occur by real changes in behavior, regression to the mean, creative accounting changes, or falsification. Even in the absence of management by exception, convergence may take place, since any organization delivering exceptional performance will raise expectations. Managers may well sacrifice short-term gain in return for long-term security by opting to 'run with the pack'.

Gaming occurs when there is an altering of performance behavior to obtain some perceived strategic advantage, for example, the optimization of DRG/CMG reporting. Gaming is closely related to the phenomenon of convergence noted above. The level of performance from previous time periods can be the basis for implicit or explicit targets for future performance. Organizations operating under this type of feedback mechanism have an incentive to retain managerial slack in their performance. If they report exceptional performance in one year, that performance could form the basis for future expectations which may not be achievable. Hence it is safer to demonstrate
moderate performance over time, giving rise to modest targets that can be attained easily. There is evidence for such behavior both from the central planning (Birman, 1978) and private sector (Lukka, 1988; Briers and Hirst, 1990) literatures.

An opposite concern is that the performance data will not be used. Rosenthal and Harper (1994) reported on a survey that found that published outcomes data were not considered to be important to market success by the users of such data. Decisions were often made without the use of available data. It has been demonstrated in both laboratory and field settings that 'bad news' is altered or blocked while 'good news' is sent quickly to superiors (Larson and King, 1996). In this way, information generated through OPA may be filtered or altered before it is received by higher level decision-makers.

A related issue is whether the benefits of a performance indicator system are sufficient to justify the costs of developing the system. Palmer (1996) suggested minimizing the cost of OPA by: deriving measures from widely accessible databases; developing and disseminating well-specified measurement protocols; sampling cases when the cost per case is high; and using smaller sample sizes for monitoring performance repeatedly.

There have been few reported instances of abuses in the use of OPA reporting systems. This may well be due to the formative nature of most systems. Understandably, energy is initially focused on building appropriate models, with evaluation to follow. However, there are some tentative indications that the NHS performance indicator system may be subject to tunnel vision and convergence. Government policy through the Patient's Charter dictated that no patient should wait more than 2 years for elective surgery. As a result, the NHS performance scheme gave great prominence to waiting time indicators. Given the bounded powers of comprehension of managers and patients, the attention to waiting times is likely to be at the expense of other important criteria of health service performance, such as the quality of postoperative care or waiting times for non-elective surgery. Indeed, Smith (1994) reported that in response to the Patient's Charter, district health authorities (DHAs) rapidly reduced the numbers of people waiting for more than 2 years for elective surgery. However, this seems to have been achieved at
the expense of other surgical patients, especially those awaiting more serious surgery, as their average waiting time increased. Subsequently, the government specified maximum waiting times for specific types of serious surgical procedures.

The NHS performance review system includes an expert system to identify instances of extreme performance. The process involves an examination of a set of first-line indicators, usually waiting times, for aberrant behavior. Cases of unsatisfactory performance are then subjected to more detailed investigation of second-line indicators such as bed throughput or length of stay. The expected outcome of the investigation is a prescription for an improvement strategy. Implicit in the expert system analysis is some model level of service representing best practice. However, its focus on poor performance may in fact provide incentives for managers to adopt typical rather than best practice, as suggested by the phenomenon of convergence. Trying to avoid exposure as an extreme performer may well result in health authorities adopting regional or national norms for waiting times or other first-line indicators, even if this contradicts local preferences or needs.

Lack of a comprehensive model
It has long been recognized that the assessment of the performance of an organization is dependent upon the perspective used to structure the evaluation. Many authors have stressed that different stakeholders use performance information in varying ways, and that these diverse needs must be reflected in the design of an OPA model (Cameron, 1980; Kanter and Brinkerhoff, 1981). The presence of multiple environments and organizational constituencies requires multiple measures of performance (Kanter and Brinkerhoff, 1981). However, there is little consistent advice on what combinations of performance indicators provide relevant organizational performance information.

Various authors have suggested that different components are required to ensure the comprehensiveness of an OPA model. According to Donabedian (1966), a comprehensive evaluation of performance requires a combination of structural, process, and outcome indicators. Structural measures focus on the organization's capacity for effective work (Flood et al., 1994). Process measures evaluate effort
or conformity to established practice norms and the processes used in the provision of care, but do not directly assess the effectiveness of the activities performed (Flood et al., 1994). Initially, structural and process measures provided the basis for performance assessment. However, it was found that meeting structural and process standards did not always result in the best performance, requiring the development of outcome indicators (Barnsley et al., 1996). Outcome measures focus on the changes produced and the results achieved (Flood et al., 1994). While outcome measures can provide important information on performance, the evaluation of structure and process measures is also important, as these provide the basis for the development of management and policy decisions (Zuckerman et al., 1995). The importance of structural, process, and outcome measures was established in the 1980s, and most OPA models attempt to include indicators from each of these perspectives. However, outcome indicators have not been well developed in health care. The nature of health care outcomes, which typically result over a period of time (Kazandjian et al., 1993) and which may be related to many complex and interrelated factors (Angaran, 1991), makes the measurement of outcomes difficult.

Venkatraman and Ramanujan (1986) identified three general levels of firm performance which should be included in an analysis: financial performance, business performance, and organizational effectiveness. This approach requires assessment of performance within a number of dimensions such as profitability, growth potential, market presence and positioning, quality, and social responsibility. Fleming and Boles (1994) devised a variation more suited to health care, comprising clinical integrity, financial integrity, and corporate destiny. In a similar framework, Kaplan and Norton (1992) developed the concept of the balanced scorecard, with four components of performance: financial; internal business; learning and growth; and customer. The principle behind the balanced scorecard approach is that organizational performance can only be assessed through a 'balanced' approach. There are often performance trade-offs, such that while the financial picture is important, information on internal business performance and impact on consumers is a necessary requirement.

Nelson et al. (1995), concerned by the judgemental, static and potentially punitive nature of
scorecards and report cards, suggested the use of an instrument panel. The instrument panel is designed to capture high level indicators of performance that provide information on the key organizational processes. The instrument panel, unlike the scorecard, is not focused on past performance, but looks to the present to enable quick response and necessary corrections.

Recognition of a balanced approach to performance measurement is consistent with the work of Quinn and Rohrbaugh (1983) who identified three competing value dimensions in organizations. Organizational managers must make decisions which result in trade-offs among control and flexibility, internal and external focus, and means and ends orientation. A recent study by Buenger et al. (1996) illustrated that emphasis on some values hampered the pursuit of other values. This competition among organizational values requires a broad set of performance indicators in the analysis of organizational performance, so that awareness is developed of the organizational impact of the trade-off decisions.

With an approach focusing specifically on the measures within an OPA model, Van Peursem et al. (1995) reviewed existing OPA models in relation to a framework comprising economy, efficiency, effectiveness and the type of indicators used in the model (e.g. nominal, ordinal or ratio). Through this analysis, they suggested that it was essential for OPA models to contain a balance of ordinal, nominal, and ratio indicators within each of the analysis areas (e.g. economy, efficiency, effectiveness). They found that the majority of the indicators in existing models were ratio measures, as the ratio structure enabled comparisons among different organizations. However, they argued that because ratio measures were imprecise, using them exclusively would not provide sufficient information to monitor organizational performance. A complete OPA model would contain a balance of the more narrative approach of nominal indicators, and the ranking approach of ordinal indicators, in addition to the ratio indicators related to economy, efficiency, and effectiveness. They suggested that the addition of indicators related to quality of life and health status would provide the necessary ordinal measures, and the inclusion of satisfaction surveys would address the need for nominal measures.
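To illustrate the balance Van Peursem et al. (1995) recommend, the sketch below groups a purely hypothetical indicator set by analysis area (economy, efficiency, effectiveness) and by measurement level (ratio, ordinal, nominal); a complete OPA model, on this view, would populate every cell. The individual indicators are illustrative assumptions only, not drawn from any model in the Table.

```python
# A hypothetical indicator set organized by analysis area and measurement level.
indicator_set = {
    "economy": {
        "ratio":   ["cost per weighted case", "energy cost per 100 cubic metres"],
        "ordinal": ["ranking of supply costs against peer group"],
        "nominal": ["narrative audit of procurement practices"],
    },
    "efficiency": {
        "ratio":   ["average length of stay", "cases per FTE staff"],
        "ordinal": ["DEA efficiency ranking within peer group"],
        "nominal": ["process-mapping review of admission flow"],
    },
    "effectiveness": {
        "ratio":   ["risk-adjusted mortality (O/E ratio)", "readmission rate"],
        "ordinal": ["health status and quality-of-life scales"],
        "nominal": ["patient satisfaction survey comments"],
    },
}

# Report how many indicators populate each cell of the framework.
for area, levels in indicator_set.items():
    summary = ", ".join(f"{level}: {len(items)}" for level, items in levels.items())
    print(f"{area:13s} -> {summary}")
```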
In a recent developmental paper, Sicotte et al. (1997) established that while a number of different OPA conceptual models did exist, none was able to adequately incorporate all important aspects of organizational performance, and all were limited by reliance on a particular perspective of performance. To address this deficiency, the authors developed a conceptual framework (conceptual framework of health care organization performance) to guide the analysis of the performance of health care organizations. This focuses on the essential functions and interactions among the functions within a health care organization, and also incorporates the concepts of addressing the inputs, activities, outputs and outcomes in a performance assessment proposed by Lewis and Modle (1982) and Torrance (1986). Using this perspective, a comprehensive model would ensure that there were performance indicators for all of the organizational functions.

Use of OPA models that are not complete may focus the assessment process on factors that are easy to measure but which are not important to the true performance of the organization. In addition, Laughlin et al. (1994) suggested that OPA models could define irrelevant content and goals for an organization based on the dominant perspective. The performance frameworks described above are not inconsistent in their recommended approach to the measurement of organizational performance. Sicotte et al. (1997) stressed the importance of recognizing the organizational functions and interactions among them within the evaluation framework. Kaplan and Norton (1992) underlined the need to structure the OPA in relation to the strategy of the organization, and have suggested the development of balanced scorecards to reflect the strategic business units throughout the organization. Nelson et al. (1995) stressed the need to structure OPA to provide information to monitor and control the critical organizational processes. The structural, process, and outcome indicators identified by Donabedian (1966) make the connection between the activities of the organization and the measurement of performance in relation to these activities. Finally, Van Peursem et al. (1995) provided advice on the nature of the measurements within the performance framework.
Principles for the design and development of OPA models

Based on our review and analysis, a number of principles were identified for the design and development of organizational performance assessment models. While these principles are helpful in establishing OPA, they also raise a number of questions for future research in the area.

1. Link the OPA model with the organizational strategy

Clear declaration of the organizational strategy will focus the designers of the OPA approach on the areas of importance. This ensures that the OPA functions within a strategic management capacity (Kaplan and Norton, 1992) and can then be used to provide important feedback on the attainment of strategic goals. Typically, existing OPA models have focused on improving operational effectiveness, often at the expense of an organization's overall strategy. Porter (1996) suggested that in the rush to improve operational effectiveness through the use of organizational performance information, organizational strategy has been lost. Although many organizations have achieved superior operational effectiveness, the lack of an organizational strategy that differentiates them from their competitors has actually led to diminishing returns (Porter, 1996).

This tension between the information needs for strategic management and for operational effectiveness requires further study. Is it possible for OPA models to provide the required information for both purposes, or must organizations choose a predominant focus for OPA?

2. Ensure that the OPA model provides diversified perspectives on organizational performance

OPA measures should provide diversified perspectives on organizational performance (Luttman et al., 1994). The requirement for diversity stems from the fact that in the past, inferences about organizational or system performance were often based solely on financial ratios (see Cleverly, 1980, 1985). Reliance on financial measures alone does not provide a full understanding of the effectiveness of systems. A variety of indicators is necessary to provide a holistic view of an organization (Van Peursem et al., 1995).

Buenger et al. (1996) articulated the competing values model, suggesting that organizations balance four critical values throughout all operations: the human relations value; the open systems value; the internal process value; and the rational goal value. This model suggests that organizational performance is maximized when managers focus on all four of the values. For example, a move to increase efficiency within the internal process value will likely decrease the flexibility and adaptability inherent within the open systems value. An OPA model that has a variety of indicators allows managers to appreciate interactions between significant performance dimensions on these organizational values. For example, efforts to reduce service costs may increase waiting times and lower customer and physician satisfaction. A set of indicators that offers a range of perspectives can help managers engage in fact-based discussions when important performance objectives and values conflict.

Further research is required to assist practitioners in identifying what indicators would constitute a sufficient range of perspectives. 'Cost-benefit constraints mean that it would be impractical to produce and disclose all possible measures of health management performance' (Van Peursem et al., 1995, p. 58). Is attention to the four quadrants on the balanced scorecard sufficient, or is a broader approach required?

3. Limit the number of indicators in the model

Creating measures incorporating different perspectives can lead to a bewildering array of indicators which may result in information overload. Attention should be paid to the creation of a parsimonious set of indicators that balances the need for multiple measures with the selection of only those that are critical to monitoring and adjusting organizational operations. OPA systems must ensure a focus on what is truly important to improve performance or to achieve the strategic directions.

Developing indicators for an OPA model can also be costly. While the model must fairly reflect performance with a variety of indicators, the number of indicators it contains will be limited by the resources available for indicator development, measurement and evaluation.

The level of assessment for which a model has been developed (e.g. program, hospital, system) influences the indicator structure within the model. For example, models developed for the program/service level may not include indicators to provide information on external relations, such as resource acquisition.
On the other hand, models developed for assessing global health system performance require a more external focus, with fewer indicators relating to internal functions. Case study research will be helpful to identify the criteria used for including (or excluding) an indicator.

4. Ensure the quality of the data and indicators

Because OPA data are subjected to intense scrutiny from stakeholders, the quality of the data must be assured. The New York State Department of Health's performance assessment program for coronary artery bypass graft surgery requires that all submitted data are verified using other Department of Health databases and patient medical records (DeBuono, 1995). A constructive OPA model must yield information that is valid and clearly defined, reliable, predictable, and relevant. Indicators also need to be comparable, consistent, and timely, and the information must be feasible to collect. But how can organizations identify the payback between further information system and data development and the improvements within the organization resulting from use of OPA? When does further investment to improve the quality of the data become non-productive?

5. Ensure stakeholder input in the development of the model

There is agreement that the development process must include input from leaders (Luttman et al., 1994) and organizational stakeholders (Kanter and Brinkerhoff, 1981). The fundamental values, attitudes, and information needs of the users of the performance indicators need to be identified and incorporated into the OPA model (Kanter and Brinkerhoff, 1981; Flood et al., 1994). For example, Hungate (1994) suggested that patients want disease-specific information including functional outcomes; purchasers want information on comparative outcomes performance, and providers need information on comparative processes and risk-adjusted outcomes. In the late 1980s, the Rochester Area Hospital Corporation abandoned a project designed to assess quality and hold the hospitals financially accountable for their patient outcomes. Drachman (1996) reported on a useful process for establishing patient satisfaction benchmarks involving representatives of direct patient care/patient relations, marketing, and marketing research from a number of university medical centers. Failure to seek stakeholder input can lead to a multitude of problems, including abandonment of an otherwise well-developed OPA project (see Panzer, 1994).

A number of studies have shown that information generated internally within an organization is more likely to be used than externally generated information (Oh and Rich, 1996). Involving stakeholders in the development process helps to ensure that performance information is seen to be internally derived, thus increasing the probability that it is used for decision-making within the organization. At a minimum, the measures and measurement methodology need to be openly communicated during the development process (Van Peursem et al., 1995).

Further study is required on the values and perspectives that different groups of stakeholders bring to OPA discussions. How are competing values accommodated in OPA? Do the more powerful stakeholders always define the OPA and, if so, what is the impact on the performance information derived from such models over time?

6. Deploy the OPA throughout the organization

Luttman et al. (1994) have stressed the importance of ensuring that performance assessment processes are deployed throughout the organization or system. If OPA measures are seen as being the preserve of a select group, a sense of ownership and accountability for performance results is unlikely to occur. They point out that linking individual performance with that of the organization is necessary to motivate employees and managers to use and act upon the findings of the performance indicators. An important recommendation from the Clinical Outcome Indicators Project was the need to develop a detailed dissemination plan concurrent with the indicator selection (Barnsley et al., 1996).

Luttman et al. (1994) also recommended implementation of a pyramid of detail, reversing the usual direction of traditional reporting systems and giving front-line workers the opportunity to make improvements in their work process. This approach is seen as a way to ensure deployment of the performance system
throughout the organization. This approach is particularly important in community-based and home health organizations (Applebaum and Phillips, 1990), where front line staff need to take direct responsibility for performance improvements. In the past, performance information tended to flow upward to senior management, bypassing the staff responsible for work processes. This left front-line workers without the means to examine their own performance and senior management suffering from information overload. The pyramid of detail reporting system enables front-line workers to have access to detailed performance data that encourages maximal performance improvements, while senior levels receive summarized performance reports.

The design of the information system supporting performance monitoring was also identified by Luttman et al. (1994) as critical to the success of performance reporting. Interactive electronic systems expand the capacity to analyze performance data and 'drill down' from summary scores to actual processes that affect the delivery of care. The issue with respect to this design principle is to ensure that enhancements are adopted to meet the real and specific needs of the reporting system, and are not just technology-driven.

Further case study research will be important to review the relationships among corporate level indicators and operating level indicators. How can organizations achieve an effective mix of both corporate and operating indicators of performance without an overly complex assessment structure? Should operating level indicators simply be rolled up, or are there other more important relationships between the operating and corporate level indicators that need to be defined?
Summary

The purpose of this paper was to present an overview of OPA in health care. We have discussed the uses of OPA models, the impact of existing OPA models and some potential weaknesses of OPA. Finally, we presented some principles for the development, implementation, and prevention of abuse of OPA systems.

We found that most existing OPA models were primarily developed as sources of comparative information for purchasers and consumers or to enable providers to identify areas for performance improvement. Given that most models were in the early stages of development, there was little hard evidence evaluating their impact. A review of the existing models suggested tensions between ensuring that the approach is comprehensive and captures the needs of the many stakeholders, and the need to be cost-effective and pragmatic in implementation.

Many health care organizations have adopted OPA models to gain performance information. However, this is an area where extensive research will be required to ensure effective and efficient design, development and use of organizational performance assessment. The sets of performance indicators we reviewed, which were not structured in an OPA framework, may in fact provide managers with false confidence in their ability to monitor organizational performance.
Acknowledgements

The authors are members of the Health Care Management Group of HEALNet, a Network of Centres of Excellence project funded by the Medical Research Council of Canada and the Social Sciences and Humanities Research Council (SSHRC). Members of HEALNet's Health Care Management Group include: from the Hospital Management Research Unit, Department of Health Administration, University of Toronto - G. Ross Baker PhD, Jan Barnsley PhD, Rhonda Cockerill PhD, Peggy Leatt PhD, Sandra Leggat PhD (Candidate), Louise Lemieux-Charles PhD, Kevin Leonard PhD, Michael Murray PhD, and George Pink PhD; and from Groupe de recherche interdisciplinaire en santé et Département d'administration de la santé, Université de Montréal - Henriette Bilodeau PhD, Régis Blais PhD, François Champagne PhD, André-Pierre Contandriopoulos PhD, Jean-Louis Denis PhD, Lambert Farand PhD, Ann Langley PhD, Raynald Pineault PhD, Danièle Roberge PhD and Claude Sicotte PhD. The authors gratefully acknowledge the contribution of the other members of HEALNet's Health Care Management Group in the conception and development of this paper.
References

Allen, H. M. and Rogers, W. H. (1996). 'Consumer surveys of health plan performance: a comparison of content and approach and a look to the future.' Joint Commission Journal on Quality Improvement, 22(12): 775-794.
Angaran, D. M. (1991). 'Selecting, developing and evaluating indicators.' American Journal of Hospital Pharmacy, 48: 1931-1937.
Applebaum, R. and Phillips, P. (1990). 'Assessing the quality of in-home care: the "other" challenge for long-term care.' Gerontologist, 30(4): 440-450.
Baker, G. R. and Pink, G. H. (1995). 'A balanced scorecard for Canadian hospitals.' Healthcare Management Forum, 8(4): 7-13.
Barnsley, J., Lemieux-Charles, L. and Baker, G. R. (1996). 'Selecting clinical outcome indicators for monitoring quality of care.' Healthcare Management Forum, 9(1): 5-12.
Bergman, R. (1994). 'Are my outcomes better than yours?' Hospitals & Health Networks, 68(15): 113-116.
Berkowitz, P. (1995). 'Judging performance.' University Affairs, (June-July): 6-8.
Birman, L. (1978). 'From the achieved level.' Soviet Studies, 30: 153-172.
Borbas, C., McLaughlin, D. B. and Schultz, A. (1995). 'The Minnesota Clinical Comparison & Assessment Program: bridging the gap between clinical practice guidelines and patient care.' In: Majumdar, S. K., Rosenfeld, L. M., Nash, D. B. and Audet, A. M. (eds) Medicine and the Twenty-first Century. Easton, PA: Pennsylvania Academy of Science.
Briers, M. and Hirst, M. (1990). 'The role of budgetary information in performance evaluation.' Accounting, Organizations and Society, 15(4): 373-398.
Buenger, V., Daft, R. L., Conlon, E. J. and Austin, J. (1996). 'Competing values in organizations: contextual influences and structural consequences.' Organization Science, 7(5): 557-576.
Cameron, K. (1980). 'Critical questions in assessing organizational effectiveness.' Organizational Dynamics, (Autumn): 66-80.
Capital Health Authority (1995). A year in review: Capital Health Authority performance report, Edmonton, Alberta. Edmonton, AB: Capital Health Authority.
CCAF (1987). Effectiveness reporting and auditing in the public sector. Ottawa: Canadian Comprehensive Auditing Foundation.
CCHRI (1995). Report on quality of care measures. California Co-operative HEDIS Reporting Initiative. San Francisco, CA: The MEDSTAT Group.
CCHSA (1996). A guide to the development and use of performance indicators. Ottawa: Canadian Council on Health Services Accreditation.
Child, J. (1974). 'Managerial and organizational factors associated with company performance - part 1.' Journal of Management Studies, 11: 175-189.
CHQCP (1995). 'Summary report from the Cleveland Health Quality Choice Program.' Quality Management in Health Care, 3(3): 78-90.
Cleverly, W. O. (1980). 'Assessing financial performance with 29 key ratios.' Healthcare Financial Management, 34: 30.
Cleverly, W. O. (1985). 'Predicting hospital failure with the financial flexibility index.' Healthcare Financial Management, 39: 29.
Collopy, B. T. and Balding, C. (1993). 'The Australian development of national quality indicators in health care.' Joint Commission Journal on Quality Improvement, 19(11): 510-516.
Corrigan, J. M. and Nielsen, D. M. (1993). 'Toward the development of uniform reporting standards for managed care organizations: the Health Plan Employer Data and Information Set (version 2.0).' Joint Commission Journal on Quality Improvement, 19(12): 566-575.
Counte, M. A., Glandon, G. L. and Holloman, K. (1988). 'Using ratios to measure hospital financial performance: can the process be simplified?' Health Services Management Research, 1(3): 173-180.
Culyer, A. J. (ed.) (1983). Health Indicators. Oxford: Martin Robertson.
DeBuono, B. A. (1995). Coronary artery bypass surgery in New York State 1991-1993. New York: Unpublished.
Delmarva Foundation for Medical Care, Inc. (1994). Final report, external review performance measurement of Medicare HMOs/CMPs. Unpublished.
Donabedian, A. (1966). 'Evaluating the quality of medical care.' Milbank Memorial Fund Quarterly, 44(2): 166-206.
Drachman, D. A. (1996). 'Benchmarking patient satisfaction at academic health centers.' Joint Commission Journal on Quality Improvement, 22(5): 359-367.
Edgman-Levitan, S. and Cleary, P. D. (1996). 'What information do consumers want and need?' Health Affairs, 15(4): 42-56.
Ehreth, J. L. (1994). 'The development and evaluation of hospital performance measures for policy analysis.' Medical Care, 32(6): 568-587.
Ellencweig, A. Y. (1992). Analysing health systems: a modular approach. Toronto: Oxford University Press.
Epstein, M. H. and Kurtzig, B. S. (1994). 'Statewide health information: a tool for improving hospital accountability.' Joint Commission Journal on Quality Improvement, 20(7): 370-375.
Fitzgerald, R. P., Shiverick, B. N. and Zimmerman, D. (1996). 'Applying performance measures to long-term care.' Joint Commission Journal on Quality Improvement, 22(7): 505-517.
Fleming, S. T. and Boles, K. E. (1994). 'Financial and clinical performance: bridging the gap.' Health Care Management Review, 19(1): 11-17.
Flood, A. B., Shortell, S. M. and Scott, W. R. (1994). 'Organizational performance: managing for efficiency and effectiveness.' In: Kaluzny, A. D. and Shortell, S. M. (eds) Health Care Management: Organization Design and Behavior. Albany: Delmar.
Gamm, L. D. (1996). 'Dimensions of accountability for not-for-profit hospitals and health systems.' Health Care Management Review, 21(2): 74-86.
Gibson, J. L., Ivancevich, J. M. and Donnelly, J. H. (1973). Organizations: Structure, Process, Behavior. Dallas, TX: BPI.
Grosskopf, S. and Valdmanis, V. (1993). 'Evaluating hospital performance with case-mix adjusted outputs.' Medical Care, 31(6): 525-532.
Harper, A. J. (1995). 'Collecting and reporting of patient outcomes.' In: Kazandjian, V. A. (ed.) The Epidemiology of Quality. Gaithersburg, MD: Aspen.
Hibbard, J. H. and Jewett, A. (1996). 'What type of quality information do consumers want in a health care report card?' Medical Care Research and Review, 53(1): 28-47.
Hungate, R. W. (1994). 'Purchaser quality measures: progressing from wants to needs.' Joint Commission Journal on Quality Improvement, 20(7): 381-387.
Iezzoni, L. I., Foley, S. M., Daley, J., Hughes, J., Fisher, E. S. and Heeren, T. (1992). 'Comorbidities, complications, and coding bias. Does the number of diagnosis codes matter in predicting in-hospital mortality?' Journal of the American Medical Association, 267: 2197.
Jencks, S. F. (1992). 'Accuracy in recorded diagnoses.' Journal of the American Medical Association, 267: 2238.
Jencks, S. F. (1994). 'The government's role in hospital accountability for quality of care.' Joint Commission Journal on Quality Improvement, 20(7): 364-369.
Jencks, S. F., Daley, J., Draper, D., Lenhart, T. N. and Walker, J. (1988). 'Interpreting hospital mortality data: how can we proceed?' Journal of the American Medical Association, 260: 3625.
JCAHO (1994). Framework for improving performance: from principles to practice. Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations.
Jollis, J. G., Ancukiewicz, M., DeLong, E. R., Pryor, D. B., Muhlbaier, L. H. and Mark, D. B. (1993). 'Discordance of databases designed for claims payment versus clinical information systems: implications for outcomes research.' Annals of Internal Medicine, 119: 844.
Jordan, H. S., Straus, J. H. and Bailit, M. H. (1995). 'Reporting and using health plan performance information in Massachusetts.' Joint Commission Journal on Quality Improvement, 21(4): 167-177.
Kaiser Permanente (1995). Quality report card. Oakland, CA: Kaiser Permanente Department of Quality and Utilization.
Kania, C., Richards, R., Sanderson-Austin, J., Wagner, J. and Wetzler, H. (1996). 'Using clinical data for quality improvement in outcomes measurement consortia.' Joint Commission Journal on Quality Improvement, 22(7): 492-504.
Kanter, R. M. and Brinkerhoff, D. (1981). 'Organizational performance: recent developments in measurement.' Annual Review of Sociology, 7: 321-349.
Kaplan, R. S. and Norton, D. P. (1992). 'The balanced scorecard - measures that drive performance.' Harvard Business Review, (Jan-Feb): 71-79.
Kazandjian, V. A., Lawthers, J., Cernak, C. M. and Pipesh, F. C. (1993). 'Relating outcomes to processes of care: the Maryland Hospital Association's Quality Indicator Project (QI Project).' Joint Commission Journal on Quality Improvement, 19(11): 530-538.
Laffel, G. L., Thompson, D. and Sparer, C. (1995). 'Developing a corporate-level performance assessment system.' Quality Management in Health Care, 3(4): 622-670.
Larson, E. W. and King, J. B. (1996). 'The systemic distortion of information: an ongoing challenge to management.' Organizational Dynamics, (Winter): 49-60.
Laughlin, R., Broadbent, J., Shearn, D. and Willig-Atherton, H. (1994). 'Absorbing LMS: the coping mechanism of a small group.' Accounting, Auditing & Accountability Journal, 7(1): 59-85.
Leggat, S. G. and Leatt, P. (1997). 'A framework for assessing the performance of integrated health delivery systems.' Healthcare Management Forum, 10(1): 11-18.
Lewis, A. F. and Modle, W. (1982). 'Health indicators: what are they? An approach to efficacy in health care.' Health Trends, 14: 3-7.
Localio, A. R., Hamory, B. H., Fisher, A. C. and TenHave, T. R. (1997). 'The public release of hospital and physician mortality data in Pennsylvania. A case study.' Medical Care, 35(3): 272-286.
Luft, H. S. and Hunt, S. S. (1986). 'Evaluating individual hospital quality through outcome statistics.' Journal of the American Medical Association, 255(20): 2780-2798.
Lukka, K. (1988). 'Budgetary biasing in organizations: theoretical framework and empirical evidence.' Accounting, Organizations and Society, 13(3): 281-301.
Luttman, E. J., Siren, P. B. and Laffel, G. L. (1994). 'Assessing organizational performance.' Quality Management in Health Care, 2(4): 44-53.
Magnussen, J. (1996). 'Efficiency measurement and the operationalization of hospital production.' Health Services Research, 31(1): 21-37.
Meyer, C. (1994). 'How the right measures help teams excel.' Harvard Business Review, (May-June): 95-103.
Miller, R. and Lazar, J. (1995). 'Public reporting of performance measures in home care.' Joint Commission Journal on Quality Improvement, 21(3): 105-115.
Mosel, D. and Gift, B. (1994). 'Collaborative benchmarking in health care.' Joint Commission Journal on Quality Improvement, 20(5): 239-249.
Nadzam, D. M., Turpin, R., Hanold, L. S. and White, R. E. (1993). 'Data-driven performance improvement in health care: the Joint Commission's Indicator Measurement System (IM System).' Joint Commission Journal on Quality Improvement, 19(11): 492-500.
Nelson, E. C., Batalden, P. B., Plume, S. K., Mihevc, N. T. and Swartz, W. C. (1995). 'Report cards or instrument panels: who needs what?' Joint Commission Journal on Quality Improvement, 21(4): 155-166.