Adm Policy Ment Health DOI 10.1007/s10488-011-0393-5
ORIGINAL ARTICLE
Challenges to Quality Assurance and Improvement Efforts in Behavioral Health Organizations: A Qualitative Assessment

Luis E. Zayas · J. Curtis McMillen · Madeline Y. Lee · Samantha J. Books
Springer Science+Business Media, LLC 2011
Abstract Behavioral health organizations have been increasingly required to implement plans to monitor and improve service quality. This qualitative study explores challenges that quality assurance and improvement (QA/I) personnel experience in performing their jobs in these practice settings. Sixteen QA/I personnel from different agencies in St. Louis, Missouri, U.S.A., were interviewed face-to-face using a semi-structured instrument to capture challenges and a questionnaire to capture participant and agency characteristics. Data analysis followed a grounded theory approach. Challenges involved agency resources, agency buy-in, personnel training, competing demands, shifting standards, authority, and research capacity. Further research is needed to assess these challenges against expected outcomes.
The content of this paper was presented in part at the 13th Annual Conference of the Society for Social Work Research in January 2009.

L. E. Zayas (corresponding author), School of Social Work, Arizona State University, 411 N. Central Ave., Suite 800, Phoenix, AZ 85004-0689, USA; e-mail: [email protected]

J. C. McMillen, School of Social Services Administration, The University of Chicago, Chicago, IL, USA; e-mail: [email protected]

M. Y. Lee, School of Social Work, Tulane University, New Orleans, LA, USA; e-mail: [email protected]

S. J. Books, School of Social Work, Washington University in St. Louis, St. Louis, MO, USA; e-mail: [email protected]
Keywords Quality assurance · Quality improvement · Behavioral health services · Qualitative methods
Introduction

Behavioral health organizations have been increasingly required to specify and implement plans to monitor the quality of the services they provide as part of the accreditation process (e.g., Council on Accreditation 2008), third-party payment processes, and policy changes. Hence, these agencies have hired employees to conduct quality assurance and improvement work, creating an unprecedented opportunity to improve these services. However, little is known about what approaches these personnel are employing, the organizational receptivity towards their efforts, or the kind of impact they are having, among other issues. Presently, empirical knowledge of quality assurance and improvement practice in behavioral health is relatively undeveloped. As part of a study that examined the work of quality assurance and improvement personnel in real-world agency settings (McMillen et al. 2008; Lee et al. 2011), this paper reports on the challenges that these personnel experienced.

Quality assurance (QA) began in industries where the consequences of quality problems were catastrophic and therefore originally emphasized 100% conformance to strict standards (e.g., Office of Planned Space Flight 1966; Thomas 1965). In the 1970s, QA migrated to other manufacturing industries, then to retail businesses that attempted to monitor and control business-consumer relations. QA in healthcare became common when the Joint Commission on the Accreditation of Hospitals started pushing audits of medical files in the early 1970s and requiring QA systems in 1979 (Roberts et al. 1987; Sederer and St Clair 1990). As accreditors moved to the behavioral
health sector in the 1980s and early 1990s, an emphasis on QA followed. QA has at its heart the monitoring of desired processes and outcomes (Donabedian 2003; Goonan and Jordan 1992). Monitoring tools include utilization reviews to see if expensive procedures were or remain justified, profile analyses used to identify outliers in clinical practice, service reviews (often called chart reviews) as data-based inquiries that examine the degree to which delivered services meet prescriptive process criteria (Palmer 1991), and surveys to assess customer satisfaction, outcomes, and phenomena thought to influence the quality of service, such as organizational climate (McMillen et al. 2005). Recent work suggested that QA professionals in mental health practice settings typically use service reviews and surveys (McMillen et al. 2008). QA professionals in behavioral managed care organizations are more likely to be involved in utilization reviews and profile analyses. Gradually, quality improvement (QI) emerged as a separate but related professional endeavor (e.g., Juran 1992). While QA was not always seen as a professional role, QI was to be performed by quality managers responsible for using QA data to improve and change practice. Soon after came emphases on the need for continuous quality improvement (Berwick 1989; Sederer and St Clair 1990), as opposed to occasional efforts to solve high-profile problems, accompanied by the promulgation of specific models to manage quality efforts, such as total quality management (e.g., Deming 1986; Walton 1990) and Six Sigma (e.g., Pande et al. 2000), both of which require sophisticated statistical analytics.
Modern quality assurance and improvement schemes recommend that organizations focus on selected key services and processes, determine how best to measure these processes and outcomes, monitor these key processes and outcomes to identify areas of improvement, search for major causes of quality problems, devise creative solutions to these problems, implement changes, continue to monitor key aspects of quality, and learn from implementing these efforts (Crosby 1979; Deming 1986; Zirps and Cassafer 1996; Harry 1988; Walton 1990; Pande et al. 2000; Donabedian 2003). Two recent pushes are to apply these processes to the implementation of evidence-based practices and to develop a research base for the effectiveness of these techniques based on hard science, a movement sometimes referred to as evidence-based quality improvement (e.g., Shojania and Grimshaw 2005). This latter movement may conflict with the daily practice of QA and QI within agencies, where data are generated for internal consumption only, with tight expectations that no agency "dirty laundry" gets aired. Contributions to QA and QI processes from the behavioral health arena have, to date, largely focused on how to monitor processes and outcomes most germane to mental health (Hermann 2005; Sederer and Dickey 1996).
Little is known about whether behavioral health organizations have embraced modern quality assurance and improvement models or how these efforts are faring. Recent research in mental health service agencies revealed ample variation in QA and QI activities. While some personnel were involved in setting a quality agenda for the agency and leading those efforts, others reported a narrower range of responsibilities. Most were involved in data collection (chart reviews, conducting surveys) and analyses, writing reports from these efforts for various audiences, preparing for audits, and facilitating the accreditation process (McMillen et al. 2008). Few described efforts at implementing evidence-based practices or generating knowledge on the effectiveness of quality improvement methods. Staff at smaller agencies tended to define their roles more narrowly, as did part-time quality professionals (McMillen et al. 2008). Given the recency of QA and QI positions and the variety of tasks in behavioral health agencies, these personnel may face work challenges that should be assessed. To our knowledge, no study to date has focused on the challenges of QA or QI work in behavioral health. The conceptual literature, however, alludes to some potential challenges. The organizational context may present certain hurdles. Crosby (1979) argued that management participation in and positive attitudes toward QA efforts were essential to their success, while advocates of total quality management or TQM (e.g., Deming 1986; Walton 1990) emphasized the need for organizational cultural change to "buy in" to the idea of continuous quality improvement. Lack of expertise also may curtail well-meaning efforts. Several formulations of QI, especially TQM and Six Sigma (e.g., Harry 1988; Pande et al. 2000), stress the need for sophisticated data analyses to identify potential quality problems. Crosby (1979) championed the notion that QA efforts require a highly trained, professionalized workforce.
Two studies in the health field help frame this work. Alexander et al. (2007) analyzed aspects of organizational context, QA and QI activities, and patient care outcomes of 1,784 community hospitals in the U.S.A. and found that these activities were associated with better patient outcomes only in hospitals that were financially sound. They concluded that "hospitals that adopt a strategy of continuous quality improvement without sufficient financial cushion may be creating superfluous structures with no real substance behind them" (Alexander et al. 2007, p. 10). This suggests that resources may pose a substantial challenge to QA and QI work, especially in community agencies, since in many cases these agencies have even fewer financial resources than hospitals. Another study focused on challenges reported by 16 health policy makers and researchers involved in efforts to improve health care quality (Wang et al. 2000). This work extends the concern from finances to the broader organizational context. The authors identified
as key challenges convincing stakeholders of the value of QA and QI efforts and securing the technological change needed to capture aspects of process and outcomes. In sum, this small conceptual and empirical literature has highlighted organizational buy-in, financial resources, workforce skill, and technological conditions as potential challenges to effective QA and QI work. This study explores the self-reported challenges of QA and QI personnel in behavioral health agencies in St. Louis, Missouri, U.S.A., using qualitative interview methods. The aim is to identify and assess barriers that these employees experience and perceive in performing their work in these service settings, in order to gain a better understanding of the complexities involved and to help develop strategies that could facilitate and improve their work. These personnel are made responsible for assessing and attesting to the quality of services rendered by their organization in order to meet accreditation requirements and promote the best possible client outcomes. Hence, the research community could do its part to support these efforts by evaluating their work and examining the obstacles they face in performing such functions.
Methods

Study Design

This qualitative study involved individual semi-structured interviews with a purposeful sample of 16 QA or QI personnel employed in private, non-profit behavioral health agencies in St. Louis, Missouri, U.S.A. and neighboring districts regarding the work that they do. A purposeful, maximum variation sampling strategy, which seeks to capture the broadest range of perspectives on the problem of inquiry from diverse sources (Kuzel 1999), was used to identify agencies from which to recruit participants. Our sample included a mix of quality-focused personnel working in different agencies that varied by size and types of services provided in order to capture diverse experiences and service contexts. These included five community mental health centers, five family services agencies, two residential treatment agencies, and four family services and residential treatment agencies; three were large agencies (>$5 million annual budget), nine were medium size, and four were small agencies (<$2 million annual budget). Participants were recruited through invitation letters mailed to selected agencies. Only one QA/I employee was recruited per agency, and to be eligible they had to be employed in that capacity at the agency for at least the past 6 months. An invitation letter describing the study was mailed directly to the person responsible for QA/I at the agency, and another letter was mailed to the agency director to ensure transparency. Interviews were scheduled as they
replied and agreed to participate. Out of a total of 22 targeted agencies contacted, four did not have QA/I personnel, and at two other agencies one QA/I specialist declined to participate and another did not return our calls. Out of 16 participants recruited, two were just under the eligibility criterion: one had been in the QA/I position for 4 months and another had been at the agency and in the QA/I position for 5 months. We call these employees QA/I personnel because while some were doing only QA work, others were doing both QA and QI. A $30 stipend was offered as an incentive. The study was approved by an Institutional Review Board, and informed consent was obtained from all participants.

Data Collection

Interviews were conducted in private at the participants' agency. Data were collected using a semi-structured instrument consisting of nine open-ended questions regarding the background, work, and challenges of QA/I personnel. Of these, two were germane to the focus of this paper: "What are some of the shortcomings of your work of ensuring that quality services are provided to clients by your organization?" and "What challenges do you think quality assurance personnel in this agency face in ensuring quality of services?" In-depth qualitative interviewing was deemed a practical research strategy for this study because it allowed us to explore the QA and QI roles within the agency based on the participants' contextual experience and vantage point, without imposing a priori demarcations. This approach is designed to "generate narratives that focus on specific research questions" (Miller and Crabtree 1999, p. 93), particularly on the personnel's participation in certain work activities and interactions in different contexts. The interviews lasted approximately 60 min, and were audio-recorded and transcribed. A brief questionnaire was administered at the end of the interview to gather data on participant and agency characteristics.
Data Analysis

Data transcripts were managed in NVivo7 (QSR 2006), a software program for qualitative data analysis. The authors, who were trained in qualitative analysis, formed the analytical team. Qualitative analysis involved open coding and theme development following a modified grounded theory approach, an inductive, iterative process that involves "breaking down, examining, comparing, conceptualizing and categorizing data" (Strauss and Corbin 1990, p. 61). This modified approach adopted a data coding technique without following other core grounded theory procedures, such as theoretical sampling or inductive theory development. First, the analysts separately reviewed each of the first eight transcripts as these were transcribed. They met regularly to discuss the content of each transcript
and to identify emerging categories in order to develop a codebook by consensus. The codebook consisted of 40 coding categories with their definitions. After completing the codebook, two analysts separately coded the first two transcripts, compared their coding patterns, adjusted any discrepancies, and revised code definitions to help standardize coding procedures before each then separately coded half of the remaining transcripts. Consensus procedures during the iterative codebook development process and well-defined codebook categories helped improve coder reliability (Hruschka et al. 2004). Reports of data coded by specific coding categories were produced for further analysis. The lead author analyzed only coding reports pertaining to challenges and shortcomings in doing QA and QI work. In carefully reviewing these reports, patterns in the participants' accounts and opinions of work challenges were identified, and thematic categories representing the main findings were established. Finally, the lead analyst reviewed the original data set to make sure that relevant supporting evidence was not missed. The rest of the analytical team then reviewed and critiqued the findings based on the evidence. As a measure of trustworthiness, the lead analyst also conducted a negative case analysis by searching the original data set for information contradictory to the findings (Kuzel and Like 1991). Questionnaire data were used to represent sample characteristics (Table 1).
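Intercoder agreement of the kind referenced here (Hruschka et al. 2004) is often quantified with simple statistics such as percent agreement or Cohen's kappa. The sketch below is illustrative only — the study itself relied on consensus discussion rather than a reported coefficient, and the code labels and segment assignments are hypothetical:

```python
# Illustrative sketch (not the study's actual procedure): Cohen's kappa for
# two coders who applied codebook categories to the same transcript segments.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of segments given the same code.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to ten transcript segments.
a = ["resources", "buy-in", "buy-in", "demands", "resources",
     "skills", "buy-in", "resources", "demands", "skills"]
b = ["resources", "buy-in", "demands", "demands", "resources",
     "skills", "buy-in", "resources", "buy-in", "skills"]
print(round(cohens_kappa(a, b), 2))  # → 0.73
```

Values near 1.0 indicate near-perfect agreement; consensus meetings like those described above are typically used to resolve the disagreements such a check surfaces.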
Results

Seven main challenges that QA/I personnel in this sample face in doing their work were identified: (1) resource limitations, (2) organizational buy-in, (3) competing demands, (4) skills development, (5) changing standards, (6) lack of authority, and (7) research capacity. All 16 respondents cited resource limitations, 14 mentioned organizational buy-in, 12 referred to competing demands, and at least half noted each of the other four challenges. All respondents cited more than one of these challenges, and some noted a few other non-recurring factors that did not fit into a category and were disregarded. These seven factors may not exhaust all potential challenges to QA or QI work in these service settings. A descriptive narrative of the findings, highlighting representative examples from different agencies, is presented below.

Challenges to Performing QA and QI Work

Resource Limitations

All the respondents raised concerns about limited resources having an adverse impact on their work. Lack of funding
Table 1 Participants' characteristics (N = 16)

Characteristics                       Totals
Gender
  Female                              13
Age
  Mean                                44.5 years
  Range                               25–60 years
Race/ethnicity
  African American                    1
  White                               15
Education
  Bachelors                           1
  Masters                             13
  Doctorate                           2
Years employed by agency
  Overall with agency
    Mean                              8.19 years
    Range                             0.5 months–27 years
  In current job position
    Mean                              4.18 years
    Range                             0.4 months–12 years
% of job devoted to QA/I
  100                                 8
  85                                  1
  80                                  1
  50                                  2
  25                                  1
  20                                  1
  15                                  1
  10                                  1
was considered the main resource limitation, which in turn precluded the acquisition of needed technology and the hiring of more staff to support QA and QI efforts. The lack of funding to set up advanced computerized systems to better manage client information, for example, was a major challenge to some. According to one respondent, "we don't have electronic medical records. Data collection is labor intensive, time consuming. There's a lot of room for error when you're doing it manually." Another stated: "…funding or lack of resources…I'd love for us to have better software for therapists, so they can do less handwriting. If we had a computerized file system…" These and other examples illustrate how the lack of technological resources not only hindered QA/I personnel's ability to assess quality of services, but also affected services by taking time away from direct clinical care. Insufficient staff support also was considered an important resource limitation. Many respondents mentioned that they were overwhelmed and overstretched with
the amount of work required. They felt that having more QA/I staff would help them meet their job requirements. As one described, "it's an issue of time and budget. We don't have a full-time QA person. It doesn't get the time and attention it deserves." Another participant argued, "if we had more people we could do more…We just don't have enough people and time." The staff shortage was thought to limit the agency's ability to effectively carry out its QA and QI programs. Consequently, as two other respondents noted, time becomes a scarce commodity: "I'd say time. I'm the only person and feeling stretched." "Money and time. There's absolutely no budget for what I do."

Organizational Buy-In

Other common concerns expressed by the respondents were how they were perceived by coworkers and the extent to which every member of the organization—from therapists to board members to supervisors and staff—understood, valued, and embraced QA and QI concepts. QA/I personnel were sometimes regarded with suspicion by support staff and therapists. Because their job is to ensure that client services meet quality standards, lower- and mid-level personnel sometimes felt that QA/I personnel were "out to get them" or to burden them with additional work, and saw the QA role as "punitive." According to one respondent, "when you do QA work you're not always the most popular person because what you're often doing is finding problems that relate to somebody's performance." The perception some staff have of the QA role is that of "policing" or "cracking the whip." One respondent mentioned that staff wonder, "'is she here to tell me I'm doing bad?' They view me as someone who is going to constantly point out the negative." Such perceptions of and attitudes toward QA/I personnel appeared to hinder effective work collaborations and QA efforts. Another related challenge was the lack of knowledge of, and buy-in to, QA concepts by personnel across agency levels.
Several informants noted that some staff did not understand the QA process. They argued that many staff members were not familiar with service provision standards and were oblivious to the importance of monitoring for quality. Some respondents observed that some staff did not know how to assist the QA/I specialist and facilitate the process. Staff viewed QA as the specialist's responsibility, not as a collective function. According to one respondent, "the staff feels that [QA] is above and beyond their job and I'm feeling like it's part of everybody's job to always be looking out on how to do things better." The pervasiveness and complexity of the lack of buy-in to the QA process can be summarized as follows: "…getting people to buy into the process. Even with clinicians we have a hard time. They didn't become
therapists to do paperwork. Yet, COA requires certain documentation of a certain standard, and that's hard because they feel like the documentation takes away from therapy. My supervisor, the buy-in is such that QA is the first thing that goes to the back burner when time gets tight. Buy-in at the board level, when you have board members that are committee chairs, getting reports out of them it's like pulling teeth. They don't understand how it trickles down and affects everything all the way to service delivery…"

Competing Demands

Having multiple tasks and balancing diverse responsibilities was another important challenge for QA/I personnel. This challenge was documented by respondents who devoted 100% of their work to QA or QI functions and by those who also had clinical, administrative, or other responsibilities. Eight participants devoted 100% of their job to QA or QI, two 80% or more, two 50%, and four 25% or less (Table 1). Several of them expressed concern about carrying a heavy workload and having little staff support. As one informant indicated, "I feel overextended…I have 1001 things to do. All my QA duties continue when I'm working on grants…staff training, but all of that is important." Another respondent mentioned, "I have a lot to do and sometimes the QA piece takes a back-burner. When push comes to shove that's the last priority…I supervise clinical work and handle intake assessments over the phone and all that comes first." These personnel also are given tasks unrelated to QA/I: "Our president [decided] that we needed an agency-wide plan in case there's a bird-flu outbreak. So, now I'm on this committee in charge of coming up with this plan. How that fits into my role I'm not exactly sure." Competing demands appeared to be related to the availability of resources, which in turn could be affected by such demands. According to one respondent, "the only reason I could bring [a helper] in is because I was pulled off to write a grant.
I needed someone to do [a task] because we had a survey coming up by our major funding source." Competing demands also placed limitations on fostering buy-in and on QA/I personnel skill development. For example, one informant lamented: "I'd like more opportunities to sit with directors and supervisors and walk them through the numbers and explain it more thoroughly…and because of time constraints, mine and theirs…we are not always able to do that." Moreover, as another respondent admitted, "I have lots of stuff to do where maybe in that extra time I would've looked at [the] University to see what [QA or QI] courses are available."
Skills Development

Several participants acknowledged having little or no QA or QI training and considered this a personal shortcoming and a challenge, despite all but one having at least a Master's-level education (Table 1). For example, according to one informant, "I've had no real training in QA. I know that COA does trainings, but I've never been to one." Another respondent admitted relying on an external consultant for on-the-job training: "What made us bring a consultant in is because I'm not trained in QI processes and she was supposed to train me and help restructure our QI program." Reflecting on their inexperience, some respondents also expressed self-doubt about their work: "part of the challenge is figuring out what a QA person is supposed to be doing. I wouldn't be surprised [to] find out that people are [employing] diverse approaches to QA." Another felt that, "if I had some formal training I'd be more confident." Some were also challenged by lacking other relevant skills: "If I had a stronger background in statistics or in SPSS…where I could crunch numbers…that would be a big help." Compounding this challenge is the lack of precedent, or the absence of other personnel with QA or QI knowledge, in some of these agencies. One informant noted, "[I] don't have a background in QA, so it's me and my supervisor who are doing it. And because we didn't have anybody doing it for a while we have some touch up to do." Another observed, "I've never seen a [QA] person who had a boss that did [QA] before…no one is going to [say], 'when I used to do that, this is how I did [it],' because no one has done it before – you are your own island." Hence, some personnel were not just inexperienced in QA, but also much on their own: "…you hardly get any supervision in this job. It's not like your boss knows what you're doing.
That's another challenge." According to some respondents, supervisors actually commended them for "coming in and jumping into this job with little supervision and training and just figuring it out."

Changing Standards

Another factor that challenged some participants was keeping up with changing standards and procedures governing service programs. This was the only finding involving an external factor not directly influenced by internal agency dynamics. For example, one respondent commented on the difficulty of going through this "circular activity" of working with staff on program design, implementation, evaluation, and QI "when the external environment is always changing…the regulations that we are supposed to be in compliance with are constantly shifting." Another stated, "we are coming up for credentialing…they are always
rewriting their standards. I need to get the new language out so that all [staff] feel comfortable with it when the site visitors come. But when you're doing things like that it takes away time from doing those things that really are critical." The need to keep abreast of new or changing standards thus created more responsibilities and demands for QA/I personnel. The large number of standards to follow was in itself daunting: "…I haven't even counted, hundreds of standards, and I'm the point person in the agency for answering questions on standards." Respondents also were apprehensive about other changing conditions: "there's always changes, legislative changes, regulatory changes, funding source changes, changes in the field, new programs. You have to accept all this flux and change the way you do things." Keeping up with evidence-based practices, as another informant noted, also was challenging: "you have to keep current in this field of best practices because [it's] ever evolving."

Lack of Authority

Some respondents expressed concern about not having the influence to obtain the documentation that they needed to do their job. For example, one respondent stated that, "one of the most frustrating things about QA is that I don't have supervisory responsibilities over people I have to interact with." Another admitted, "I don't have the authority to say [to staff] 'you're not doing what you're supposed to be doing.'" Their limited influence made it difficult for some to obtain needed documents from upper management as well.
As one noted, "I have to go to [the leadership] and ask for reports when I don't have the authority to make demands of them…that's challenging." Some program directors resented the QA/I specialist's guidance as "stepping on their toes," and some supervisors "just flat out ignore us no matter what we do or say." Another informant alluded to a type of powerlessness different from a lack of formal individual authority over agency staff—a more pervasive one—related to limited organizational buy-in: "I could have very little influence on what's happening on the other side of the campus. It takes a culture of activity. I'm limited by the power I have to ensure that everything we do at the organization is of high quality." In all, QA/I personnel's limited influence over staff or management compliance with the QA/I agenda can hinder their performance.

Research Capacity

Participants also cited inadequate research capacity, skills, and support as affecting their ability to systematically assess quality. These three factors overlap with resource limitations, skills development and organizational buy-in,
but are captured here as a separate theme because they coalesce around the important research function of collecting, managing, and evaluating data for QA or QI purposes. For example, one respondent described the need for "creating a [management information system] that will allow us to get to that data…to crunch that data, and then to report it." Some respondents acknowledged weaknesses in basic research and technical skills: "I'm terrible at math and statistics…if I was a genius in that…and knew about computers I'd be a god in this field…" or "I'm not a statistician. I do simple stuff. I have to rely on programming to run reports and queries for me." Another respondent was challenged by colleagues who understood the limitations of her assessments: "At every organization someone will come to me and say, 'Is that statistically significant?' And half the time I'll say, 'No, we're not a research organization.' It's really hard." Some QA/I personnel also faced challenges with their methods of data collection and data entry: "One thing is validating the data I get, that what I'm being fed is real. I don't mean that people are making it up, although that might be happening…but it might not be collected properly or uniformly, and it's hard if it requires some manual work."
Discussion

This study examined challenges in conducting QA and QI in behavioral health organizations, highlighting major obstacles that QA/I personnel encountered in assessing the quality of client services and in guiding quality improvement efforts. Addressing these challenges, several of which fall beyond the personnel's control, is essential for supporting QA and QI processes, promoting best practices, increasing the agencies' viability, and, hopefully, improving client outcomes. Our qualitative research approach also was instrumental in eliciting narratives of the personnel's experiences and concerns with their job in depth, contextually, and without set parameters. The findings include various factors that affect QA and QI at different system levels, from individual factors (skills development and lack of authority), to internal agency conditions (resource limitations, organizational buy-in, competing demands, research capacity), to external dynamics (changing standards). Resource limitations also may be a function of external funding availability. Some of these factors overlap, and most are interrelated. Of all our findings, resource limitations, organizational buy-in, and skills development appear to be the most significant challenges. Curiously, most of the challenges discussed reflected the work of measuring and monitoring, the purview of QA, not the work of improving care (QI). Buy-in and resource comments were often directed at data collection. Research
skills were mostly about data analysis. Clinicians and managers may be more attuned to the language and nature of problem-solving and improving processes, than they are with the measurement and analysis work required to identify problems and assess if they had been improved. Our findings corroborate and complement both the conceptual literature and studies in other health fields that address impediments to QA and QI implementation. These works emphasized lack of organizational buy-in (Deming 1986; Walton 1990; Wang et al. 2000), insufficient financial and technological resources (Wang et al. 2000; Alexander et al. 2007), and an inadequately trained workforce (Crosby 1979; Pande et al. 2000). In a study that compared a local ‘participatory’ approach and a central ‘expert’ approach to quality improvement in depression treatment in primary care settings, limited buy-in was identified as a disadvantage of the expert approach (Parker et al. 2007). Shortell et al. (1998) allegorically described impediments to successful continuous quality improvement implementation in health care practice as the ‘‘weeds’’ in an ‘‘unruly garden’’ that is the U.S.A. health care system: ‘‘misaligned incentives, professional entrenchments, competing priorities, organizational inertia, and lack of adequate information systems’’ (p. 605). Other less well known challenges revealed in this study—competing demands, continually changing standards, lack of authority, and inadequate research capacity—are new contributions to the literature, particularly in behavioral health services research. More resource investment is needed to overcome these challenges; however, this may be difficult in part because neither QA nor QI have demonstrated their value empirically by demonstrating their ability to improve outcomes efficiently. Some of the activities these personnel are tasked may have little or no impact on QA or QI efforts. 
In a prior report from this study, few targets of QA work served as indicators of high-quality care (McMillen et al. 2008). When QA and QI empirically demonstrate improved outcomes, more people in leadership positions may advocate for investment in the science of QA and QI. This study suggests that investments are needed in technical and human resources. Several informants noted that their agencies lacked the financial resources to upgrade information technology, build research capacity, and hire additional or full-time QA/I personnel and support staff. Investment is also needed to develop the knowledge base of QA/I personnel. Despite their graduate education and experience in clinical work, social services, or human relations, several of our study participants lacked expertise in QA and QI approaches and had only basic research skills (Lee et al. 2011). The latter limitation is particularly worrisome, since these personnel classified much of their work as research, involving data collection, data management, or data analysis (McMillen et al. 2008). Although
several participants expressed interest in obtaining formal training in these areas, their busy schedules, competing demands, and tight budgets suggest that there is neither the time nor the resources for additional training. Having to keep up regularly with changing standards and regulations, new program requirements, and best practices has been an added burden, and an ongoing on-the-job training exercise for those with weaker skills. Moreover, lack of authority to obtain needed documents in a timely fashion and to effect specific recommendations, coupled with limited buy-in of QA and QI concepts and processes across organizational levels, also appeared to undermine their efforts. The following recommendations may help to address these challenges.
(1) Develop the science of QA and QI to evaluate the value of investments in them. More research is needed on QA and QI implementation and outcomes. Rigorous assessments of service quality that inform quality improvement strategies, which are then evaluated for their outcomes, could help to establish the value of this work and to attract resource investments. More resources need to be allocated for specialized workforce training and organizational research capacity. However, QA and QI efforts will still need to be chosen carefully to have the most impact on consumers.
(2) Administrative leadership is needed to disseminate information about the rationale for QA and QI and the role staff play in supporting these efforts, in order to promote organizational buy-in. This could help dispel the notion that one person alone can assure or improve the quality of services without a quality-minded culture in place, and could reduce the burden of competing demands and lack of authority for QA/I personnel. Greater organizational buy-in also may help to maximize limited resources. A caveat is that an informed and engaged workforce still may not buy into efforts that are unlikely to have substantial impact.
(3) More training in QA and QI approaches and research methods is needed to equip an already well-educated and professional workforce with the specialized knowledge and skills necessary for conducting sound QA and QI science.
(4) Lastly, accreditors and regulators need to be mindful of the burden that the proliferation and regular shifting of standards and regulations place on QA/I personnel and their agencies. These burdens may increase in the short term, as greater value is placed on accountability in the context of health care reform and budget shortfalls.
When serious efforts are made to help overcome these challenges, QA and QI may begin to fulfill their promise. Finally, we recommend continued research on the QA and QI enterprise in the behavioral health fields. For example, research on more of these professionals, from more types of mental health organizations, is needed. QA/I personnel working in hospitals or managed care settings may face different challenges than their counterparts in
community service agencies. QA/I professionals in other geographic areas may come from different educational and professional backgrounds. Further research in these and other related areas may help guide and improve QA/I practice in this field.

Acknowledgments This work was supported by a grant from the National Institute of Mental Health (P30 MH068579).
References

Alexander, J. A., Weiner, B. J., Shortell, S. M., & Baker, L. C. (2007). Does quality improvement implementation affect hospital quality of care? Hospital Topics, 85(2), 3–12.
Berwick, D. M. (1989). Continuous improvement as an ideal in health care. New England Journal of Medicine, 320(1), 53–56.
Council on Accreditation. (2008). Performance and quality improvement. Retrieved from: http://www.coastandards.org/standards.php?navView=private&section_id=75.
Crosby, P. B. (1979). Quality is free: The art of making quality certain. New York: McGraw-Hill.
Deming, W. E. (1986). Out of the crisis. Cambridge, MA: MIT Center for Advanced Engineering.
Donabedian, A. (2003). An introduction to quality assurance in health care. Oxford: Oxford University Press.
Goonan, K. J., & Jordan, H. S. (1992). Is QA antiquated, or was it at the right place at the wrong time? Quality Review Bulletin, 18, 372–379.
Harry, M. J. (1988). The nature of six sigma quality. Schaumburg, IL: Motorola University Press.
Hermann, R. C. (2005). Improving mental health care: A guide to measurement-based quality improvement. Arlington, VA: American Psychiatric Association Publishing.
Hruschka, D. J., Schwartz, D., St. John, D. P., Picone-Decaro, E., Jenkins, R. A., & Carey, J. W. (2004). Reliability in coding open-ended data: Lessons learned from HIV behavioral research. Field Methods, 16(3), 307–331.
Juran, J. M. (1992). Juran on quality by design: The new steps for planning quality into goods and services. New York, NY: The Free Press.
Shojania, K. G., & Grimshaw, J. M. (2005). Evidence-based quality improvement: The state of the science. Health Affairs, 24, 138–150.
Kuzel, A. J. (1999). Sampling in qualitative research. In B. F. Crabtree & W. L. Miller (Eds.), Doing qualitative research (pp. 33–45). Thousand Oaks, CA: Sage Publications, Inc.
Kuzel, A., & Like, R. C. (1991). Standards of trustworthiness for qualitative studies in primary care. In P. G. Norton (Ed.), Primary care research: Traditional and innovative approaches (pp. 138–158). Newbury Park, CA: Sage Publications.
Lee, M. Y., McMillen, J. C., Zayas, L. E., & Books, S. J. (2011). The quality assurance and improvement workforce in social services: An exploratory examination. Administration in Social Work, 35(3), 243–257.
McMillen, J. C., Proctor, E. K., Megivern, D., Striley, C., Cabassa, L., Munson, M., et al. (2005). Quality of care in the social services: Research agenda and methods. Social Work Research, 29, 181–191.
McMillen, J. C., Zayas, L. E., Books, S. J., & Lee, M. Y. (2008). Quality assurance and improvement practice in mental health agencies: Roles, activities, targets and contributions. Administration and Policy in Mental Health and Mental Health Services Research, 35, 458–467.
Miller, W. L., & Crabtree, B. F. (1999). Depth interviewing. In B. F. Crabtree & W. L. Miller (Eds.), Doing qualitative research (pp. 89–107). Thousand Oaks, CA: Sage Publications, Inc.
Office of Manned Space Flight. (1966). Apollo reliability and quality assurance program. Washington, DC: NASA.
Palmer, R. H. (1991). Considerations in defining quality of health care. In R. H. Palmer, A. Donabedian, & G. Povar (Eds.), Striving for quality in health care: An inquiry into policy and practice (pp. 3–58). Ann Arbor, MI: Health Administration Press.
Pande, P. S., Neuman, R. P., & Cavanagh, R. R. (2000). The six sigma way: How GE, Motorola, and other top companies are honing their performance. New York: McGraw-Hill.
Parker, L. E., de Pillis, E., Altschuler, A., Rubenstein, L. V., & Meredith, L. S. (2007). Balancing participation and expertise: A comparison of locally and centrally managed health care quality improvement within primary care practices. Qualitative Health Research, 17(9), 1268–1279.
Qualitative Solutions and Research [QSR]. (2006). NVivo (release 7.0), Qualitative data management software. Melbourne, Australia: QSR International.
Roberts, J. S., Coale, J. G., & Redman, R. R. (1987). A history of the Joint Commission on Accreditation of Hospitals. JAMA, 258, 936–940.
Sederer, L. I., & Dickey, B. (1996). Outcomes assessment in clinical practice. Baltimore, MD: Williams & Wilkins.
Sederer, L. I., & St Clair, L. (1990). Quality assurance and managed mental health care. Psychiatric Clinics of North America, 13, 89–97.
Shortell, S. M., Bennett, C. L., & Byck, G. R. (1998). Assessing the impact of continuous quality improvement on clinical practice: What it will take to accelerate progress. Milbank Quarterly, 76(4), 593–624.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage Publications.
Thomas, C. M. (1965). Quality assurance of remotely maintained equipment in a nuclear fuels reprocessing plant. No publisher listed.
Walton, M. (1990). Deming management at work. New York: Plenum.
Wang, M. C., Hyun, J. K., Harrison, M. I., Shortell, S., & Fraser, I. (2000). Redesigning health systems for quality: Lessons from emerging practices. Journal on Quality and Patient Safety, 32, 599–611.
Zirps, F., & Cassafer, D. J. (1996). Quality improvement in the agency: What does it take? In P. Pecora, W. R. Seelig, F. Zirps, & S. M. Davis (Eds.), Quality improvement and evaluation in child and family services: Managing into the next century (pp. 145–174). Washington, DC: Child Welfare League of America.