Outcome Reporting to Target Areas for Quality Improvement

Janet M. Albright, R.R.A., Robert J. Panzer, M.D., Deborah Tuttle, M.P.S., R.N., Jeanne N. Dent, B.S.N., R.N.

The Office of Clinical Practice Evaluation and the Quality Assurance Program
Strong Memorial Hospital
University of Rochester Medical Center
Rochester, New York 14642
Abstract

Strong Memorial Hospital in Rochester, New York has designed a quality assurance outcome reporting system using The SAS System to manipulate existing clinical/financial, MedisGroups, and risk management data bases. Mortality, readmission, and incident trends are shared with clinical reviewers. Flagging statistically significant deviations serves as an early warning signal of potential quality issues. The role of these reports in the quality evolution and the details of each report are described.
Introduction

Quality assurance (QA) has evolved through several phases. The first phase involved peer review, in which individual cases were reviewed by a practitioner's peers. This could often be quite subjective. QA then moved on to the criteria phase, which removed some of this subjectivity by using "objective" criteria relating to structure, process, and outcome that were defined by clinical experts.[1] It became apparent over time that such criteria presented problems, among them the inability of the criteria to accurately and reliably discriminate between good and poor quality care. The current phase is the normative era, the age of the "bell-shaped curve", aided by the increasing analytic capability of computer systems enabling the identification of statistically significant deviations that might represent quality issues.[2]

For many QA analyses, patient outcomes are the most relevant measures, both for problem identification and for assessment of intervention. This is supported by the intention of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) to shift from the exclusive study of structure and process to a strong emphasis on outcome measurement by the early 1990's. Over time, a national JCAHO database will develop that will facilitate normative comparisons among hospitals and other providers.[3]

The next phase will focus on continuous quality improvement (C.Q.I.)[4], which is, in part, dependent on statistical data analysis to target processes for improvement and to monitor the effects of interventions. Priority is given to improving those processes having significant deviations from a comparison group. The clinical meaning of any deviation from the comparative standard is ascertained by clinical reviewers. An intervention (or set of interventions) to eliminate or reduce a problem is then identified and implemented. Over time, the effectiveness of the intervention is analyzed in a similar process using control charts and other C.Q.I. tools. If warranted, further process improvement is then pursued. Once the process reaches optimal performance, a monitoring system is established to ensure continuing effective operation, again using control charts.
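As a minimal sketch of this kind of control-chart monitoring -- assuming a hypothetical monthly summary data set MORT with variables MONTH, DEATHS, and DISCHARGES, and assuming the SAS/QC module is available -- a p-chart of the death rate could be produced as follows:

    * Plot the monthly death rate with control limits. ;
    * DEATHS = deaths in the month; DISCHARGES = subgroup size. ;
    proc shewhart data=mort;
       pchart deaths*month / subgroupn=discharges;
    run;

Points that fall outside the control limits would then be referred for the clinical review described above.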
Often identified as the force behind Japan's emergence as an industrial power, the industrial model of C.Q.I. is in its early stages in health care. Outcome data will surely play a key role in supporting quality improvement techniques in health care. Dennis S. O'Leary, M.D., President of the JCAHO, speaking at the 2nd National Invitational Forum on Clinical Indicator Development in March, 1989, noted that "indicators represent a key to quality improvement in the present environment." Robert Marder, M.D., Project Manager of the JCAHO's Department of Research, defined a clinical indicator as "an instrument that measures a quantifiable aspect of patient care to guide professionals in monitoring and evaluating patient care quality and/or appropriateness....Indicator data are not used as measures of quality, but as 'flags' for locating areas of patient care that the organization should evaluate further....Health care professionals will use process and outcome indicators....Outcome indicators look at the results of practitioner's activity, including complications, adverse events, short-term results of specific procedures and treatments, and longer-term statuses of patients' health and functioning."[5] "The Joint Commission is testing outcome indicators in anesthesia, obstetrics, trauma, oncology, and cardiovascular care, as well as the processes of infection control and drug use. In each instance, these indicators are being developed not as ends in themselves, but rather with key processes in mind. Thus, the feedback of comparative data to each hospital will aid in evaluating and continually improving the governance, managerial, clinical, and support processes that contribute to patient care outcomes."[6]

Because computerized patient discharge data, e.g. DRGs, admission/discharge dates, and discharge disposition, are already being collected for other purposes, outcome data are an efficient and feasible source of information, at least in the near future, to flag those diagnostic and treatment processes in need of quality improvement.

A literature review reveals contrasting outcome reporting strategies. For example, an unnamed teaching center in New York State employs manual generic screens and clinical indicators. Criteria are developed with the assistance of departmental QA staff, resulting in limited departmental standardization. Comparative analysis is encouraged using departmental standards and state or national statistics. Special emphasis is placed on trending information. Lack of automation creates massive amounts of information and no easy way to focus review.[7] Another example is Central Baptist Hospital, a 300 bed hospital in Lexington, Kentucky, which is using the Health Systems International (HSI) software for quality assurance and utilization review. They have adapted negative outcome and procedural indicators that have been developed by professional medical associations and regulatory groups. Utilization review staff manually gather the indicator information as part of their concurrent chart review. Potential infection control and risk management referrals as well as actual incident information are also captured. Trended and targeted reports are retrospectively generated for clinical review.[8]

Increasingly, hospitals are utilizing outcome information to target processes for improvement. For example, Dartmouth-Hitchcock Medical Center is collecting clinical and operational data from hospitals in Maine, Vermont, and New Hampshire reflecting all processes involved in treating coronary artery bypass grafts and valve replacements. Physicians are then analyzing the data to determine the issues contributing to various patient outcomes. The most common contributing factors will then be identified for potential improvement. As another example, one of Humana's hospitals targeted hip surgery processes for improvement based on a higher than expected mortality rate. Data analysis revealed postoperative urinary tract infection as the critical factor affecting the statistical deviation. Several strategies were then identified for implementation. Follow-up mortality data a year after intervention revealed that the hospital's hip surgery mortality rate had declined to one of the lowest in the Humana system.[9]
Strong Memorial Hospital (SMH), a 722 bed university teaching hospital in Rochester, New York, has faced outcome reporting challenges by first using existing hospital information. SMH's decentralized QA program is currently supported by a central QA staff of 4 FTEs, who focus primarily on departmental QA support/monitoring, and an Office of Clinical Practice Evaluation (OCPE). OCPE has, among its several roles, the function of serving as the data analysis and reporting arm for quality assurance. Employing a QA analyst/programmer and 2.5 FTEs as QA information analysts trained in medical record technology and project management, OCPE provides routine and ad hoc reporting based on needs identified by clinical departments and regulatory reporting requirements. A 10 year collection of inpatient clinical/financial data; 3 year accumulations of ambulatory (emergency room and clinic), ambulatory surgery, and operating room data; a risk management data base; and the MedisGroups severity tool[10] provide the basis for clinical analysis of QA issues at SMH.
QA Outcome Reports

The remainder of this article is devoted to discussing the QA outcome reports and their present role in problem identification and intervention assessment. Derived from the inpatient clinical/financial, MedisGroups, and risk management data bases, these inpatient reports are produced quarterly and are only one of several QA reports that are regularly distributed by OCPE. Departments receive reports pertaining to their own patients; administration receives a hospital wide report. Each report includes overviews and patient level detail. The three current overview reports summarize mortality, readmission, and patient incident rates over time. The patient level detail reports provide enough additional information for departments to examine several more specific aspects of any identified events described at the summary level. The review may stop with the patient listings or move on to targeted chart reviews.
Overview Report

For each of the overview reports, we partition discharges by clinical department using the specialty and subspecialty of the attending physician. Using the New York State DRG algorithm[11], we then further divide discharges into major diagnostic categories (MDCs) and DRGs. New York State DRGs are generally identical to Federal DRGs, but provide more clinically meaningful divisions in several areas, e.g. trauma, AIDS, substance abuse, and neonatal care. Within each report, DRG specific information is printed only if the number of cases meets certain predefined criteria (the definition varies by specialty and subspecialty), thus allowing departments to focus on higher volume DRGs. Less frequent DRGs are aggregated in the subtotal section. When there is a pairing of DRGs, i.e. DRGs partitioned into "complicated" and "uncomplicated" components, both DRGs are reported when one of the DRGs meets the inclusion criteria and there is at least 1 case in the other DRG. This provides the user the opportunity to consider the two DRGs together when analyzing the information for quality issues.
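By way of illustration -- assuming a hypothetical discharge-level data set DISCHARGES with variables SPEC and DRG, and an illustrative inclusion threshold of 25 cases -- the volume screen might be sketched in SAS as:

    * Count cases per specialty and DRG. ;
    proc freq data=discharges;
       tables spec*drg / noprint out=drgcount;
    run;

    * DRGs below the threshold are folded into the subtotal line. ;
    data report;
       set drgcount;
       length line $ 8;
       if count >= 25 then line = put(drg, 4.);
       else line = 'OTHER';
    run;

The actual inclusion criteria vary by specialty and subspecialty, and the pairing rule for complicated/uncomplicated DRG pairs would be layered on top of this screen.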
These reports include an analysis of trends by reporting data for three time periods: the most recent three months, the most recent six months, and the most recent twelve months. We use the preceding two year period as a baseline for comparison to recent performance. Data for the most recent three months give departments an early indication of their effectiveness. However, for most DRGs, clinically important deviations are supported by the larger number of patients seen in six or twelve months. Several statistical comparisons that are described below are then made between each of the first three time periods and the 24 month benchmark period. We identify statistical significance, using The SAS System's Fisher exact, two-tailed test, at the .1, .01, and .001 levels. A separate compilation of DRGs with statistically significant differences is provided to users to enable a more efficient review of the data. Four columns are generic to all of the report overviews, providing a context for the outcome data in each report: total hospital discharges, number of discharges for the specialty and subspecialty, median length of stay, and median age.
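The significance testing itself maps naturally onto PROC FREQ. A minimal sketch -- assuming a hypothetical data set with one record per discharge, a PERIOD flag distinguishing a recent window from the 24 month baseline, a DIED outcome flag, and DRG 127 chosen purely for illustration:

    proc freq data=discharges;
       where drg = 127;
       * Two-tailed Fisher exact test of recent vs. baseline rates. ;
       tables period*died / fisher;
    run;

The two-sided p-value from the output can then be compared against the .1, .01, and .001 cutoffs to decide at which level, if any, the DRG is flagged.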
Report Specifics

Mortality Overview

This report presents overall death rates and death rates within 30 days of admission (Figure 1). Significant differences are calculated for each of these time periods. We provide mortality rates from the MedisGroups National Comparative Data Base as a national normative comparison. Reports adjusting mortality for admission severity are produced separately using MedisGroups software.
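Both rates reduce to simple flags on the discharge abstract. A sketch, assuming hypothetical variables ADMDATE and DISDATE (SAS date values) and a disposition code DISPO in which 'EXPIRED' marks an in-hospital death:

    data mort;
       set discharges;
       died   = (dispo = 'EXPIRED');                 * any in-hospital death ;
       died30 = died and (disdate - admdate <= 30);  * death within 30 days of admission ;
    run;

    * The mean of a 0/1 flag is the death rate. ;
    proc means data=mort mean;
       class spec;
       var died died30;
    run;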
Readmission Overview

Data pertaining to readmissions within 7 and 30 days of discharge (Figure 2) are displayed along with any significant differences over time. We then describe readmissions according to the clinical reason for admission by comparing MDC and DRG between the initial and subsequent admissions. Readmission to the same DRG is presumably a more interesting quality issue to pursue than a readmission to a different MDC. Caution is needed in the interpretation of readmission rates, since planned as well as unplanned readmissions are included.
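Identifying readmissions is a matter of comparing consecutive stays for the same patient. A sketch, assuming a hypothetical medical record number MRN, a numeric DRG, and SAS date variables ADMDATE and DISDATE:

    proc sort data=discharges;
       by mrn admdate;                 * chronological order within patient ;
    run;

    data readmit;
       set discharges;
       by mrn;
       prev_dis = lag(disdate);        * discharge date of the prior stay ;
       prev_drg = lag(drg);
       if first.mrn then do;           * no prior stay for this patient ;
          prev_dis = .;
          prev_drg = .;
       end;
       readmit7  = (. < admdate - prev_dis <= 7);
       readmit30 = (. < admdate - prev_dis <= 30);
       same_drg  = readmit30 and (drg = prev_drg);  * same clinical reason ;
    run;

Comparing MDC between the initial and subsequent stays follows the same pattern.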
Incident Overview

Hospital incident data (Figure 3), including falls, medication errors, needle punctures, etc., are highlighted in this overview. This report is based on risk management data merged with clinical information from discharge abstracts. Incident rates, average number of incidents per discharge, and average number of incidents per patient day are calculated. Statistical significance is provided for the incident rate. We provide specific data on the frequency of the various incident types. Although clinical departments and physicians are generally not directly responsible for the patient incidents that occur, these reports are intended to focus attention on specific patient subpopulations with high or increasing rates of incidents. This may assist in risk management by enabling identification at admission of potentially problematic patients.
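The merge and the two denominators can be sketched in PROC SQL -- assuming hypothetical data sets DISCHARGES (one row per stay, with a stay identifier ID and length of stay LOS) and INCIDENTS (one row per reported incident, carrying the same ID):

    proc sql;
       create table rates as
       select d.spec,
              count(*)                  as discharges,
              sum(d.los)                as patient_days,
              sum(coalesce(i.n_inc, 0)) as incidents,
              calculated incidents / calculated discharges   as per_discharge,
              calculated incidents / calculated patient_days as per_pt_day
       from discharges d
            left join (select id, count(*) as n_inc
                       from incidents
                       group by id) i
            on d.id = i.id
       group by d.spec;
    quit;

Pre-aggregating the incidents in the subquery keeps one row per discharge, so the patient-day denominator is not inflated for patients with multiple incidents.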
[Figures 1-3. Sample Mortality, Readmission, and Incident Overview reports]