Research Brief
Advantages and Disadvantages of Using MDS Data in Nursing Research Juh Hyun Shin, PhD, RN; and Yvonne Scherer, EdD, RN
ABSTRACT The purpose of this article is to review the advantages and disadvantages of using Minimum Data Set (MDS) data for nursing research, the psychometric characteristics of the MDS 2.0, and threats to the validity of its psychometric characteristics. The major advantages of the MDS are: (a) it provides continuous evaluation of residents' health and functional status, and (b) it enables facility evaluation at the nursing home level. The reviewed articles from the literature report that MDS 2.0 has moderate to moderate/high validity and reliability; however, the psychometric properties of MDS 2.0 are still controversial, mainly because the instrument has failed to identify depression in older nursing home residents.
ABOUT THE AUTHORS
Dr. Shin is Research Assistant Professor, and Dr. Scherer is Associate Professor, School of Nursing, State University of New York at Buffalo, Buffalo, New York. Address correspondence to Juh Hyun Shin, PhD, RN, Research Assistant Professor, School of Nursing, State University of New York at Buffalo, 823 Kimball Tower, 3435 Main Street, Buffalo, NY 14214-3079; e-mail: iamjoohynn@gmail.com.
The Minimum Data Set (MDS) was developed to offer a comprehensive assessment of nursing home (NH) residents because of concerns expressed in the Omnibus Budget Reconciliation Act of 1987 (OBRA '87) about the quality of care in NHs (Hawes et al., 1995; Mukamel & Spector, 2003). The Nursing Home Reform Act was passed as a
JOURNAL OF GERONTOLOGICAL NURSING • VOL. 35, NO. 1, 2009
part of OBRA '87 to improve quality of care through regulation and inspections (Mukamel & Spector, 2003). OBRA '87 required development of standardized assessment of NH residents (Institute of Medicine [IOM], 2001). Consequently, in 1998, the Resident Assessment Instrument (RAI) was implemented nationally through the Centers for Medicare & Medicaid Services' (CMS) Health Care Quality Improvement Program for NHs (Mukamel & Spector, 2003). The RAI has three components: (a) the MDS, which is used as a preliminary screen to recognize potential resident problems and strengths; (b) resident assessment protocols, which are organized, problem-oriented frameworks for MDS information and investigation of additional clinical information about residents; and (c) usage guidelines (CMS, 2002/2008). The purpose of this article is to review the advantages and disadvantages of using MDS data for nursing research, the psychometric characteristics of the MDS 2.0, and threats to the validity of its psychometric characteristics.
The MDS is a 284-item instrument devised to evaluate the medical, mental, and social characteristics of NH residents (Lawton et al., 1998). The MDS was constructed, tested, and modified through consultation by and suggestions from professionals, including researchers and government regulators (IOM, 2001). By the end of 1990, the MDS was implemented in all U.S. NHs certified by CMS (IOM, 2001). The MDS measures residents' activities of daily living (ADLs), as well as changes in these activities (IOM, 2001). The MDS is divided into 15 sections: cognitive patterns, communication and hearing patterns, vision patterns, physical functioning and structural problems, continence, psychosocial well-being, mood and behavior patterns, activity-pursuit patterns, disease diagnoses, health conditions, nutritional status, oral and dental status, skin condition, medication use, and special treatments and procedures (CMS, 2002/2008). OBRA '87 provisions mandated development of the MDS and the routine use of the electronic MDS for all NH residents and required that quality assurance and assessment processes be used in all NHs to improve quality of care (Rantz et al., 2000). The major advantage of the MDS is that it is a very good source of research data, especially for measuring quality in long-term care settings, making the MDS data set a rich source of information on NH residents. Version 3.0 of the MDS, an update of version 2.0, was proposed to CMS for validation in April 2003 (Anderson, Connolly, Pratt, & Shapiro, 2003). Currently, version 2.0 of the MDS is being used across the United States; the updated version is still under development and is scheduled to be implemented in October 2009 (Anderson et al., 2003).
ADVANTAGES AND DISADVANTAGES OF USING MDS DATA

The major advantage of the MDS is that it requires NH staff to thoroughly assess residents' health and functional status on a regular and continuous basis (Hendrix, Sakauye, Karabatsos, & Daigle, 2003; Won, Morris, Nonemaker, & Lipsitz, 1999). NHs are required to complete an MDS assessment on admission and every 3 months after admission, in addition to documenting significant changes in status on a quarterly basis (Lum, Lin, & Kane, 2005; Zimmerman, 2003). The MDS is used in preliminary screening to identify potential problems and strengths of residents (CMS, 2002/2008). It screens functional status elements and comprehensively assesses residents. In addition, quality of care can be monitored in NHs, as MDS data are collected regularly (Rantz et al., 1996). After implementation of the MDS in 1987, several researchers reported benefits, including restraint reduction (Hawes et al., 1997; Lum et al., 2005; Marek, Rantz, Fagin, & Krejci, 1996; Migdail, 1992), decreased dehydration (Blaum, O'Neill, Clements, Fries, & Fiatarone, 1997), and increased physical and cognitive function (Morris et al., 1997). However, further research to evaluate the impact of the MDS is necessary. In addition, it is also possible using the MDS to compare a particular facility with its peer group of NHs (Zimmerman, 2005). The individual data of residents in the MDS are meaningful, as the tool evaluates risk-adjusted health outcomes across facilities (Mukamel & Spector, 2003). All MDS records are conveyed by the CMS through state public health agencies to a national warehouse and are used to aid in the existing survey and certification processes that monitor NH quality (Mor et al., 2003). Since June 1998, all NHs certified by CMS have been required to submit MDS
information electronically to the Health Care Financing Administration (renamed CMS in July 2001) on a quarterly basis (IOM, 2001). Thus, a longitudinal record of the clinical and psychosocial profiles of residents is now possible (Karon, Sainfort, & Zimmerman, 1999).

One of the MDS' disadvantages is its inaccuracy. Abt Associates (Reilly, 2001), one of the largest for-profit government research organizations, reported that 11.65% of the 284 MDS items have high error rates (Chomiak et al., 2001). Accuracy was determined by comparing MDS data entered by trained RNs and a medical record review (Chomiak et al., 2001). Abt Associates concluded that cognitive patterns, psychosocial well-being, physical functioning, skin condition, and activity-pursuit patterns were reported with the least accuracy (Chomiak et al., 2001). Underreported MDS items included vision, health conditions, pain, and falls; overreported criteria included intravenous medication, intake and output, and physical, occupational, and speech therapies (Chomiak et al., 2001). The U.S. Department of Health and Human Services, Office of Inspector General (2001) compared the MDS with medical record reviews and found, consistent with Chomiak et al.'s (2001) findings, errors in 17% of 640 residents' MDS data. Furthermore, according to the U.S. General Accounting Office (2002), 9 of 11 states that had MDS accuracy-review programs discovered that some MDS categories have common errors. Such errors have been noted with the following items: mood and behavior, nursing rehabilitation and restorative care, ADLs, therapy, physician visits or orders, toileting plans, and skin conditions.

Another issue is the level of technology in NHs. According to a presentation by Harvell (2004), the Office of the National Coordinator on Health Information Technology, U.S. Department of Health and Human Services, found that the
development stage of the information technology of the current MDS design and content dates from the early 1990s. Because MDS information technology has not been well developed, the use of electronic sources retrieved from MDS 2.0 has been limited (Harvell, 2004). It has been difficult to retrieve specific items that researchers want to use for research purposes.

EVALUATION OF THE PSYCHOMETRIC PROPERTIES OF MDS 2.0

Considering the role of the MDS (including placement, care planning, and reimbursement), its reliability and validity have become important issues in research (Lawton et al., 1998). Since all NHs certified by Medicare and Medicaid use the MDS, reliability and validity should be a prerequisite for use in practice. However, few studies have examined the validity and reliability of MDS 2.0 (Gruber-Baldini, Zimmerman, Mortimore, & Magaziner, 2000). Psychometric studies of MDS 2.0 regarding cognition, depression, perineal dermatitis, pain, urinary tract infection, nutrition, behavior, and ADLs are summarized in the Table.

Validity

The validity of the MDS has been studied in the areas of cognitive function, depression, perineal dermatitis, pain, urinary tract infection, nutrition/weight loss, behavior, and ADLs. Criterion validity is measured by comparing one instrument with an external criterion; construct validity determines that the instrument actually measures what researchers want to study (Polit & Beck, 2006). The majority of researchers have compared the MDS with validated research instruments to test its criterion validity, and results have varied widely. Some researchers tested construct validity by comparing two groups of NHs that had high and low prevalence on measured concepts.
Cognitive Function. In most studies, the cognitive performance scale of the MDS has been found to have high criterion validity (range = 0.41 to 0.92) (Cohen-Mansfield, Taylor, McConnell, & Horton, 1999; Gruber-Baldini et al., 2000; Hartmaier, Sloane, Guess, & Koch, 1994; Morris et al., 1994; Snowden et al., 1999) and moderate to high construct validity (range = 0.45 to 0.70) (Lawton et al., 1998). However, Horgas and Margrett (2001) reported that the cognitive performance scale of the MDS was only valid for a nondepressed sample, which means that this tool may not be valid for individuals with depression. Considering many NH residents have depression, future studies must confirm the validity of the MDS for such residents.

Depression. The validity of the MDS in measuring depression in NH residents has been questioned in many studies. Depression items have not been found to be significantly related to validated instruments of depression, such as the Cornell Scale for Depression in Dementia (Hendrix et al., 2003), the Geriatric Depression Scale (Koehler et al., 2005), or the Revised Memory and Behavior Problems Checklist for nondepressed samples (Horgas & Margrett, 2001). Its validity was found to be quite low in one study (Anderson, Buckwalter, Buchanan, Maas, & Imhof, 2003), moderate in another (Meeks, 2004), and quite high in a third (Burrows, Morris, Simon, Hirdes, & Phillips, 2000). Further studies are required to confirm the validity of the MDS in measuring depression.

Perineal Dermatitis. The validity of the MDS in measuring perineal dermatitis was supported by comparing risk factors of perineal dermatitis with data retrieved from residents' charts (Toth, Bliss, Savik, & Wyman, 2008).

Pain. The validity of the MDS regarding pain items is also questionable. Some pain items were found to be significantly related to residents' self-reports and geriatricians' assessments of residents with moderate impairment, but no statistically significant relationship was found for residents with severe cognitive impairment (Cohen-Mansfield et al., 1999). Fisher et al. (2002) also reported that the pain items on the MDS were not significantly related to the proxy pain questionnaire developed by the researchers. Two research teams found that the validity of the pain item on the MDS was questionable in that the MDS underestimated the prevalence of pain for residents with cognitive impairment (Cohen-Mansfield et al., 1999; Fisher et al., 2002).

Urinary Tract Infection. MDS urinary tract infection items were not found to be related to those in surveillance data sets (Stevenson, Moore, & Sleeper, 2004). It was reported that the MDS overestimates the number of residents who have urinary tract infections, yet adequately estimates residents without urinary tract infections (Stevenson et al., 2004).

Nutrition/Weight Loss. In a study by Simmons, Lim, and Schnelle (2002), MDS weight loss quality indicators were found to reflect differences among two NH groups (low and high prevalence of weight loss), and the construct validity of these indicators was supported; however, the results for their criterion validity varied (Blaum et al., 1997; Simmons et al., 2002). The MDS was shown to underestimate the number of residents with risk factors of undernutrition compared with interview assessment protocols used by the researchers (Simmons et al., 2002).

Behavior. The behavior items of the MDS have low to moderate criterion validity (range = 0.24 to 0.5) (Lawton et al., 1998; Snowden et al., 1999).

ADLs. ADLs, as measured by the MDS, were shown in two studies to have moderate to high validity (range = 0.5 to 0.98) (Lawton et al., 1998; Snowden et al., 1999). As these were the only two studies available at the time of this investigation, further research is necessary to confirm the validity of ADL items.

[Table. Psychometric Properties of Minimum Data Set 2.0: criterion validity, construct validity, comparison instruments, and sample sizes (N and number of NHs) for overall MDS, cognitive performance, depression, and perineal dermatitis items, drawing on the studies cited in the text.]

Reliability

Studies suggest that, overall, the MDS 2.0 has moderate to high reliability (Casten, Lawton, Parmelee, & Kleban, 1998; Hawes et al., 1995; Morris et al., 1997). The reliability was found to be high for the cognitive performance scale (Gruber-Baldini et al., 2000; Morris et al., 1994), moderate to high for pain items (Cohen-Mansfield et al., 1999; Fisher et al., 2002), and moderate to high for ADL items (Lawton et al., 1998; Morris, Fries, & Morris, 1999; Snowden et al., 1999). However, the reliability of depression items was
found to be low (Anderson, Buckwalter, et al., 2003). Further research to test the reliability of the depression items is required.

Summary

In general, MDS 2.0 has been reported to have moderate to moderate/high validity and reliability. In terms of criterion validity, the cognitive performance scale, perineal dermatitis items, and ADL items were fairly good, while depression and behavior items were generally low. Pain and nutrition/weight loss items are still questionable. Construct validity was supported in the cognitive performance scale and nutrition/weight loss items. However, the construct validity of depression items was low. Internal consistency of the cognitive performance scale and depression and pain items was supported. The stability of the MDS (test-retest reliability) was supported for the cognitive performance scale and pain items. However, a low-to-moderate coefficient was reported for depression items. The equivalence (interrater reliability) of the overall MDS was quite acceptable and good for ADLs. Alarmingly, there is still a major concern that the MDS has failed to identify depression in elderly NH residents (Anderson, Buckwalter, et al., 2003; Koehler et al., 2005; Meeks, 2004). Considering many residents have depression, further research is required to revise the MDS 2.0 to improve its psychometric properties.

THREATS TO RELIABILITY AND VALIDITY OF MDS DATA

Some of the potential threats to the reliability and validity of MDS data are the raters, training for completing the MDS, and those who administer the MDS in practice (i.e., direct caregivers versus administrative staff). The task of filling out the MDS should be performed by an RN who confirms the completion of the forms with a legal signature (CMS, 2002/2008). RNs fill out the MDS
[Table (continued). Psychometric Properties of Minimum Data Set 2.0: pain, urinary tract infection, nutrition/weight loss, behavior, and activities of daily living items, with reliability coefficients (internal consistency, test-retest, interrater). Note. GDS = Geriatric Depression Scale; MMSE = Mini-Mental State Examination; NH = nursing home; RMBPC = Revised Memory and Behavior Problems Checklist; TSI = Test for Severe Impairment. Footnoted values represent subsamples with severe and moderate cognitive impairment.]
forms using a chart review, their own assessment, and other NH staff members' assessments (Lum et al., 2005). Concerns regarding random errors and bias have been raised because coordinators in charge of completing the MDS have varying resources and training (Lum et al., 2005). If NHs do not have RNs on staff, they are required to submit the MDS forms to RNs for certification (CMS, 2002/2008). NHs usually hire an RN MDS coordinator for this purpose, but each individual NH has its own policy regarding hiring staff responsible for completing MDS forms. These individuals' backgrounds vary considerably. They may be RNs, attending physicians, social workers, activities specialists, occupational therapists, speech therapists, dietitians, or pharmacists (CMS, 2002/2008).
The clinical competence of the data recorders or MDS coordinators affects the accuracy of the data records (Aaronson & Burman, 1994; Lyons & Payne, 1974). For example, Anderson, Buckwalter, et al. (2003) raised concerns that some NH staff members were unable to identify symptoms related to mood, depression, or behaviors on MDS assessments because of inadequate training. Lawton et al. (1998) also raised concerns that some staff in one NH reported difficulty differentiating dementia and depression. The MDS may be completed by administrative staff without direct care responsibilities or by direct caregivers. When the MDS is filled out by administrative staff, the data records may not be as accurate as those completed by direct health care providers (McCurren, 2002; Schnelle, Wood, Schnelle, & Simmons, 2001).
This process may contribute to the failure to identify MDS problems because administrative staff's contact time with residents is shorter than that of direct caregivers (Hendrix et al., 2003). Because large data sets are usually collected by many different facilities or states,
they are usually assessed, collected, and documented by many people. Thus, the interrater variation, especially for subjective items, will threaten validity (VonKoss Krowchuk, Moore, & Richardson, 1995). The MDS also has an interrater variation problem because many factors affect MDS data collection (e.g., the clinical competence of recorders, patient dysfunctional status). In addition, an MDS completed by trained staff may have better psychometric properties than one administered by untrained staff (Ouslander, 1994). The communication skills and extent to which direct caregivers can observe the real situation while completing the MDS is unknown, because the MDS can be completed using either a chart review or direct caregiver observations (Fisher et al., 2002). Although physical assessment findings are likely to be recorded accurately, data that require patient recall and interviews with patients are likely to have more discrepancies (Aaronson & Burman, 1994) because recall bias or prejudice may influence the data records. In addition, RNs may not have adequate time to complete an MDS, as they are overburdened in managing residents' health problems.

The reliability of the MDS may also be threatened by residents' unstable health and functional status, especially in NHs (Blaum et al., 1997). NHs have different mixes of residents, including severely ill elderly residents and terminally ill younger residents ("Home at Last," 1999). Residents are generally in poor health, have three to six diagnoses, and receive 3 to 18 drugs per day (Mohler, 2001). Approximately 75% of residents need assistance with more than three ADLs (IOM, 2001; Kovner, Mezey, & Harrington, 2002). The MDS assessment of cognitive function and pain for a patient may fluctuate, because those items are influenced by subjective judgment, acute events, and medication schedules (Fisher et al., 2002).
Considering that approximately half of NH residents have dementia, with approximately one third having Alzheimer's disease (National Academy on an Aging Society, 2000), the data that require interviews with residents may not accurately reflect the resident's status. These factors, as well as communication difficulties and cognitive impairment, can result in an unstable assessment. Having experienced and professional NH staff can help decrease MDS reliability problems (Hawes et al., 1995). In addition, some information simply may not be recorded (Aaronson & Burman, 1994). For example, a study by Suri, Egleston, Brody, and Rudberg (1999) using MDS data found that of 2,780 residents, only 11% had advance directives, only 17% had a do-not-resuscitate order on admission, and only 6% among those who had been admitted without advance directives completed one after their admission. Mezey, Mitty, Bottrell, Ramsey, and Fisher (2000) reported that only 51% of all NH residents across the nation have advance directives.

External validity of the studies cited may be threatened because of the following limitations: (a) the samples were small (most of the studies had fewer than 200 participants), and (b) a majority of studies were conducted at a small number of NHs, limiting the generalizability of the findings. The facility characteristics (ratio of staffing to residents, size, and ownership) threaten external validity because those characteristics may limit generalization of findings from the MDS (Phillips, Hawes, Mor, Fries, & Morris, 1996; Reilly, 2001). Studies using larger data sets and samples that better represent the population would be necessary to address the appropriateness of using MDS data in research.

Measurement Error

Measurement error occurs due to the failure to match conceptual definitions between the theoretical
connotations of a concept and its operational meaning (i.e., variables in data sets) (Lange & Jacox, 1993). In addition, an inappropriate choice of variables will also threaten validity: In designs in which researchers have a research question first and then search for the data, external validity is threatened because the data set may not represent the target population in the research. Conversely, internal validity is threatened because researchers may not control confounding variables (Lange & Jacox, 1993). Several strategies have been suggested to improve the psychometric properties of the MDS:
• CMS should provide clear, standardized, and consistent definitions for using the MDS (Stevenson et al., 2004).
• An adequate and continuous training program for the MDS should be carried out for staff (Morris et al., 1994; Simmons et al., 2002).
• A set of specific, concrete instructions and protocols that MDS coordinators can follow should be developed and updated from the existing MDS (Morris et al., 1994; Simmons et al., 2002).

THREATS TO VALIDITY COMMON TO LARGE DATABASES

Large health care data sets like the MDS are characterized as having the following (Connell, Diehr, & Hart, 1987):
• Computer-based formats.
• Large enough samples to accommodate a wide variety of statistical methods.
• Availability to researchers who are not responsible for data collection.
The MDS and other large data sets have been used and are expected to be used in the future to improve quality, especially in long-term care research (Ryan, Stone, & Raynor, 2004). However, the use of large data sets may be inherently threatened by sampling and measurement errors (Jacob, 1984).
Sample Selection Error

The CMS data center holds and manages basic resident demographic and clinical information of all reported MDS data for the purposes of payment, surveys, certification, regulation, and research (CMS, 2005). All MDS records are stored on magnetic media, and the CMS' safeguard system includes security codes, staff training for retrieving MDS information, and access to data restricted to authorized staff (CMS, 2005). This storage system, implemented and managed by the CMS, decreases concerns about data storage. Large data sets like the MDS generally have no sample inclusion and exclusion criteria, as the databases are not developed as the outcome of a study protocol (Lange & Jacox, 1993). Thus, the sample may not represent the whole population of interest to researchers (Lange & Jacox, 1993). For example, data regarding NHs that are not certified by CMS are not available, and the MDS may not represent an entire NH population, although it is a major method of studying NHs.

Data Storage. Pabst (2001) provided several suggestions for managing MDS data:
• Appropriate choices of hardware and software with expert technical support are essential.
• As the MDS can be used across different facilities or over time, the data storage format should be consistent. For example, researchers should be cautious about the different formats (proprietary versus generic), especially regarding data transfer from one facility to another or from one research team to another, where different data storage formats may be used. Coding systems differ depending on what software is used.
• MDS developers should set up data set structures carefully based on sample data sets before actually collecting large data sets, lessening the extra work of changing formats in the long term.
• File backup of large data sets is necessary in the event that computer problems occur.
If researchers use data without verifying the format, accuracy will be threatened.
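As a minimal sketch of the kind of completeness and format screening recommended above (the resident IDs and record layout here are invented for illustration, not taken from the MDS), a researcher might flag records with missing assessments before analysis:

```python
# Hypothetical extracted records: each entry notes whether a resident's
# MDS assessment was present and complete in the extracted data set.
records = [
    {"resident_id": "A01", "mds_complete": True},
    {"resident_id": "A02", "mds_complete": False},  # e.g., newly admitted
    {"resident_id": "A03", "mds_complete": True},
]

# Flag residents whose records must be excluded or re-extracted.
missing = [r["resident_id"] for r in records if not r["mds_complete"]]
print(missing)
```

A screen like this makes gaps in a secondary data set explicit up front, rather than letting incomplete records silently bias later analyses.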
Data Extraction. Some factors may make data extraction difficult. Part or all of an MDS may be missing, or records may have never been entered (Byar, 1980; VonKoss Krowchuk et al., 1995). For example, newly admitted residents do not yet have prepared MDS data in their charts. Additional extraction is sometimes needed to increase the completeness of a data set.

Data Collection and Documentation. Data documentation or coding of the MDS may be performed by the same person doing the assessment or by a different person who does not assess the resident. If the MDS data collection and coding are performed by the same person, shortcuts may be implemented to save time and effort. For example, data collectors can enter their observations directly into a laptop computer, if one is available (Pabst, 2001). However, if MDS recorders have to recall their observations when they document or code, recall bias can decrease the accuracy of the information (VonKoss Krowchuk et al., 1995). In the latter case, large amounts of data must be entered, increasing the possibility of error. Another cause for data entry error is a demanding workload. MDS RNs and coordinators may be tired, and errors may occur (Pabst, 2001). Appropriate allocation of work and use of additional human resources during peak times is suggested to lessen error rates (Pabst, 2001). These stored data may be linked directly to the central research data repository, or copied data may be sent by regular or electronic mail (Pabst, 2001). These innovative methods will decrease errors because the process of data entry is shortened, but appropriate training for data collection and a regular check of data accuracy is necessary.

CONCLUSION

MDS 2.0 has been skillfully developed and is used widely. It is a very good source of research data, especially for measuring quality in long-term care settings. The MDS is a good source of data because researchers have access to large samples representing very large populations at state and national levels, as all NHs certified by CMS are required to use the form. Furthermore, the use of the MDS for secondary data analysis is advantageous in that it can save time and effort, can decrease expenses such as paying people to collect data, is useful in conducting exploratory and correlational studies, and is helpful in the examination of trends over time (Castle, 2003; Nicoll & Beyea, 1999; Rantz & Connolly, 2004). The MDS is an important source of data regarding NH residents and is useful for outcomes research, reimbursements, payments, surveys, certification, and regulation. In addition, the MDS is protected by the CMS' safeguard systems, and researchers can compare resident outcomes across NHs. However, use of the MDS in research has limitations (Castle, 2003; Nicoll & Beyea, 1999; Rantz & Connolly, 2004):
• Investigators cannot directly retrieve data they want to use and cannot determine specific times and intervals.
• The accuracy of the data may be questionable.
• The data may be old.
• Data in the MDS were collected for purposes other than research and may not include variables
researchers want for their studies. Furthermore, researchers need to be aware that the retrieved data do not represent the entire NH population and that variation in MDS documentation remains a concern. Ways to improve the psychometric properties of the MDS include the use of clear, standardized, and consistent definitions. Also, adequate and continuous training programs with specific and concrete instructions and protocols will help improve the quality of data collected from the MDS (Lawton et al., 1998). Stable psychometric properties of MDS 2.0 are needed to provide care for all NH residents.

REFERENCES

Aaronson, L.S., & Burman, M.E. (1994). Use of health records in research: Reliability and validity issues. Research in Nursing & Health, 17, 67-73.

Anderson, L., Connolly, B., Pratt, M., & Shapiro, R. (2003, June). MDS 3.0 for nursing homes. Retrieved December 1, 2008, from http://www.cms.hhs.gov/NursingHomeQualityInits/25_NHQIMDS30.asp

Anderson, R.L., Buckwalter, K.C., Buchanan, R.J., Maas, M.L., & Imhof, S.L. (2003). Validity and reliability of the minimum data set depression rating scale (MDSDRS) for older adults in nursing homes. Age and Ageing, 32, 435-438.

Blaum, C.S., O'Neill, E.F., Clements, K.M., Fries, B.E., & Fiatarone, M.A. (1997). Validity of the minimum data set for assessing nutritional status in nursing home residents. American Journal of Clinical Nutrition, 66, 787-794.

Burrows, A.B., Morris, J.N., Simon, S.E., Hirdes, J.P., & Phillips, C. (2000). Development of a minimum data set-based depression rating scale for use in nursing homes. Age and Ageing, 29, 165-172.

Byar, D.P. (1980). Why data bases should not replace randomized clinical trials. Biometrics, 36, 337-342.

Casten, R., Lawton, M.P., Parmelee, P.A., & Kleban, M.H. (1998). Psychometric characteristics of the minimum data set I: Confirmatory factor analysis. Journal of the American Geriatrics Society, 46, 726-735.

Castle, J.E. (2003). Maximizing research opportunities: Secondary data analysis. Journal of Neuroscience Nursing, 35, 287-290.

Centers for Medicare & Medicaid Services. (2005). MDS 3.0 update. Retrieved June 30, 2006, from http://www.cms.hhs.gov/NursingHomeQualityInits/downloads/MDS30MDS30Update.pdf

Centers for Medicare & Medicaid Services. (2008, July). Revised long-term care facility resident assessment instrument user's manual, version 2.0. Retrieved November 5, 2008, from http://www.cms.hhs.gov/nursinghomequalityinits/20_NHQIMDS20.asp (Original work published 2002)

Chomiak, A., Eccord, M., Frederickson, E., Glass, R., Glickman, M., Grigsby, J., et al. (2001). Final report: Development and testing of a minimum data set accuracy verification protocol. Baltimore: Centers for Medicare & Medicaid Services.

Cohen-Mansfield, J., Taylor, L., McConnell, D., & Horton, D. (1999). Estimating the cognitive ability of nursing home residents from the minimum data set. Outcomes Management for Nursing Practice, 3, 43-46.

Connell, F.A., Diehr, P., & Hart, L.G. (1987). The use of large data bases in health care studies. Annual Review of Public Health, 8, 51-74.

Fisher, S.E., Burgio, L.D., Thorn, B.E., Allen-Burge, R., Gerstle, J., Roth, D.L., et al. (2002). Pain assessment and management in cognitively impaired nursing home residents: Association of certified nursing assistant pain report, minimum data set pain report, and analgesic medication use. Journal of the American Geriatrics Society, 50, 152-156.

Gruber-Baldini, A.L., Zimmerman, S.I., Mortimore, E., & Magaziner, J. (2000). The validity of the minimum data set in measuring the cognitive impairment of persons admitted to nursing homes. Journal of the American Geriatrics Society, 48, 1601-1606.

Hartmaier, S.L., Sloane, P.D., Guess, H.A., & Koch, G.G. (1994). The MDS cognition scale: A valid instrument for identifying and staging nursing home residents with dementia using the minimum data set. Journal of the American Geriatrics Society, 42, 1173-1179.

Harvell, J. (2004). Addressing the healthcare needs of our aging population with technology. Retrieved March 30, 2005, from the Institute of Electrical and Electronics Engineers Web site: http://www.ieeeusa.org/calendar/conferences/geriatrictech/JennieHarvellUHS.ppt

Hawes, C., Mor, V., Phillips, C.D., Fries, B.E., Morris, J.N., Steele-Friedlob, E., et al. (1997). The OBRA-87 nursing home regulations and implementation of the resident assessment instrument: Effects on process quality. Journal of the American Geriatrics Society, 45, 977-985.

Hawes, C., Morris, J.N., Phillips, C.D., Mor, V., Fries, B.E., & Nonemaker, S. (1995). Reliability estimates for the minimum data set for nursing home resident assessment and care screening (MDS). The Gerontologist, 35, 172-178.

Hendrix, C.C., Sakauye, K.M., Karabatsos, G., & Daigle, D. (2003). The use of the minimum data set to identify depression in the elderly. Journal of the American Medical Directors Association, 4, 308-312.

Home at last. (1999). Elder Care, 17(5), 32.

Horgas, A.L., & Margrett, J.A. (2001). Measuring behavioral and mood disruptions in nursing home residents using the minimum data set. Outcomes Management for Nursing Practice, 5, 28-35.

Institute of Medicine. (2001). Improving the quality of long-term care. Retrieved November 6, 2008, from the National Academies Press Web site: http://books.nap.edu/html/improving_long_term/

Jacob, H. (1984). Using published data: Errors and remedies. Newbury Park, CA: Sage.

Karon, S.L., Sainfort, F., & Zimmerman, D.R. (1999). Stability of nursing home quality indicators over time. Medical Care, 37, 570-579.

Koehler, M., Rabinowitz, T., Hirdes, J., Stones, M., Carpenter, G.I., Fries, B.E., et al. (2005). Measuring depression in nursing home residents with the MDS and GDS: An observational psychometric study. BMC Geriatrics, 5, 1. Retrieved November 5, 2008, from http://www.biomedcentral.com/1471-2318/5/1

Kovner, C.T., Mezey, M., & Harrington, C. (2002). Who cares for older adults? Workforce implications of an aging society. Health Affairs (Millwood), 21, 78-89.

Lange, L.L., & Jacox, A. (1993). Using large data bases in nursing and health policy research. Journal of Professional Nursing, 9, 204-211.

Lawton, M.P., Casten, R., Parmelee, P.A., Van Haitsma, K., Corn, J., & Kleban, M.H. (1998). Psychometric characteristics of the minimum data set II: Validity. Journal of the American Geriatrics Society, 46, 736-744.

Lum, T.Y., Lin, W.C., & Kane, R.L. (2005). Use of proxy respondents and accuracy of minimum data set assessments of activities of daily living. Journal