This article was downloaded by: [Rutgers University] On: 21 May 2015, At: 08:22 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
Journal of Clinical Child & Adolescent Psychology Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hcap20
Scalable Options for Extended Skill Building Following Didactic Training in Cognitive-Behavioral Therapy for Anxious Youth: A Pilot Randomized Trial

Brian C. Chu (a), Aubrey L. Carpenter (b), Christopher M. Wyszynski (c), Phoebe H. Conklin (a) & Jonathan S. Comer (d)

(a) Department of Clinical Psychology, Graduate School of Applied and Professional Psychology, Rutgers University
(b) Department of Psychology, Center for Anxiety and Related Disorders, Boston University
(c) Department of Psychology, Rutgers University
(d) Department of Psychology, Center for Children and Families, Florida International University

Published online: 18 May 2015.
To cite this article: Brian C. Chu, Aubrey L. Carpenter, Christopher M. Wyszynski, Phoebe H. Conklin & Jonathan S. Comer (2015): Scalable Options for Extended Skill Building Following Didactic Training in Cognitive-Behavioral Therapy for Anxious Youth: A Pilot Randomized Trial, Journal of Clinical Child & Adolescent Psychology, DOI: 10.1080/15374416.2015.1038825 To link to this article: http://dx.doi.org/10.1080/15374416.2015.1038825
Journal of Clinical Child & Adolescent Psychology, 0(0), 1–10, 2015. Copyright © Taylor & Francis Group, LLC. ISSN: 1537-4416 print/1537-4424 online. DOI: 10.1080/15374416.2015.1038825
Scalable Options for Extended Skill Building Following Didactic Training in Cognitive-Behavioral Therapy for Anxious Youth: A Pilot Randomized Trial

Brian C. Chu
Department of Clinical Psychology, Graduate School of Applied and Professional Psychology, Rutgers University
Aubrey L. Carpenter
Department of Psychology, Center for Anxiety and Related Disorders, Boston University

Christopher M. Wyszynski
Department of Psychology, Rutgers University

Phoebe H. Conklin
Department of Clinical Psychology, Graduate School of Applied and Professional Psychology, Rutgers University

Jonathan S. Comer
Department of Psychology, Center for Children and Families, Florida International University
A sizable gap exists between the availability of evidence-based psychological treatments and the number of community therapists capable of delivering such treatments. Limited time, resources, and access to experts prompt the need for easily disseminable, lower cost options for therapist training and continued support beyond initial training. A pilot randomized trial tested scalable extended support models for therapists following initial training. Thirty-five postdegree professionals (43%) or graduate trainees (57%) from diverse disciplines viewed an initial web-based training in cognitive-behavioral therapy (CBT) for youth anxiety and then were randomly assigned to 10 weeks of expert streaming (ES; viewing weekly online supervision sessions of an expert providing consultation), peer consultation (PC; non-expert-led group discussions of CBT), or fact sheet self-study (FS; weekly review of instructional fact sheets). In initial expectations, trainees rated PC as more appropriate and useful to meet its goals than either ES or FS. At post, all support programs were rated as equally satisfactory and useful for therapists' work, and comparable in increasing self-reported use of CBT strategies (b = .19, p = .02). In contrast, negative linear trends were found on a knowledge quiz (b = −1.23, p = .01) and self-reported beliefs about knowledge (b = −1.50, p < .001) and skill (b = −1.15, p < .001). Attrition and poor attendance presented a moderate concern for PC, and ES was rated as having the lowest implementation potential. Preliminary findings encourage further development of low-cost, scalable options for continued support of evidence-based training.

Correspondence should be addressed to Brian C. Chu, Graduate School of Applied and Professional Psychology, Rutgers University, 152 Frelinghuysen Road, Piscataway, NJ 08854. E-mail:
[email protected]

Color versions of one or more of the figures in the article can be found online at www.tandfonline.com/hcap.
Support for evidence-based treatments (EBTs) has been established across youth (child/adolescent) disorders (Weisz & Kazdin, 2010), and evidence suggests that many EBT effects translate beyond controlled research settings (MacPherson, Leffler, & Fristad, 2014). Unfortunately, successful EBT development does not automatically translate into broad community accessibility. Indeed, gaps persist between treatments in research and practice (McHugh & Barlow, 2010), which owe substantially to challenges in training a diverse workforce to competently administer treatments and maintain skills across settings with varying degrees of support (Comer & Barlow, 2014; Green & Aarons, 2011). Standard training methods to produce EBT providers, such as exposure to treatment manuals and brief workshops, are largely ineffective (Herschell, Kolko, Baumann, & Davis, 2010). These limitations are consistent with research underscoring the need for dynamic and transformational leadership to facilitate meaningful practice changes beyond exposure to instructional material (Aarons & Sommerfeld, 2012). Workshops may produce small increases in treatment knowledge but are less effective in changing clinician behaviors/skills (Herschell et al., 2010). Adding ongoing consultation to this standard training approach has yielded better outcomes, such as increased adherence and competence (e.g., Beidas, Edmunds, Marcus, & Kendall, 2012; Miller, Yahne, Moyers, Martinez, & Pirritano, 2004), even over long-term follow-ups (Chu et al., 2015). Unfortunately, optimal methods for developing and maintaining therapy skills are costly, resource intensive, and time-consuming (Comer & Barlow, 2014). Innovative approaches, such as "train-the-trainer" and teleconsultation models, have been developed, but these still rely on expert-led training/consultation, even as they expand the reach of a limited number of experts.
Recently, peer-to-peer learning communities have emerged to facilitate natural learning and consultation with limited expert exposure (e.g., the online community of practitioners hosted at www.practiceground.org). Because many models of behavior change focus on assessment and manipulation of subjective peer norms within learning contexts (e.g., Ajzen's theory of planned behavior; Ajzen & Madden, 1986), group formats facilitating peer-to-peer interactions may be particularly useful for enhancing motivation and behavior change. If effective, peer collaborative groups could offer a resource-efficient option to promote extended contact with instructional material. However, little information exists regarding the optimal structure and form of delivery (e.g., group vs. individual; face-to-face vs. remote) or the required consultant characteristics (e.g., expertise level) for effective training support (e.g., Leffler, Jackson, West, McCarty, & Atkins, 2013). For example, research finds that both expert-delivered feedback (Miller et al., 2004) and peer-led behavioral skill rehearsal (Cross, Matthieu, Cerel, & Knox, 2007) lead to improved clinician behavior. A second direction for optimizing resources is to reconsider the source material that comprises consultation. Most consultation studies focus on providing trainees supervision on their own caseloads. However, it has yet to be investigated whether observing experts provide ongoing supervision/consultation to other trainees could yield meaningful outcomes. Evidence from social learning theory approaches to clinical interventions suggests that viewing culturally admired models demonstrating favorable behaviors can increase one's perceived efficacy in performing the modeled behaviors and engender behavioral change (Bandura, 2004). A consultation model built on observing experts leverages lessons from this evidence base and can take advantage of the greater scalability that comes with observation compared to direct consultation. Having trainees simply observe expert case consultation could offer a resource-efficient model of extended competency building. The current study preliminarily examined the relative efficacy of several low-cost extended skill-building methods that could be implemented in large-scale dissemination efforts following initial trainings. Trainees participated in a traditional day-long, expert-led workshop teaching cognitive-behavioral therapy (CBT) for youth anxiety disorders. After completing the workshop, participants were randomly assigned to one of three extended support programs.
The expert streaming (ES) condition remotely provided participants with weekly online videos of an expert providing supervision on prototypical youth anxiety cases to supervisees at his own site (no direct consultation was provided to participants). The peer consultation (PC) condition had participants attend weekly in-person group meetings with similar professionals to review CBT components and to consult on skills implementation. PC drew on education and corporate research supporting "peer learning communities" for knowledge and skill acquisition/maintenance through structured dialogue and peer feedback (Darling-Hammond & Richardson, 2009). The fact sheet self-study (FS) condition provided weekly one- to two-page fact sheets describing key CBT skills and represented a minimal support condition. Conditions were evaluated on perceived acceptability, satisfaction, and feasibility of implementation in trainees' own professional settings, as well as efficacy in improving trainee knowledge, use, and implementation of CBT skills for youth anxiety. We hypothesized that both the ES and PC programs would be superior to FS in promoting CBT knowledge, therapist confidence in knowledge and skills, and use of CBT skills. Cost-effectiveness, clinician attitudes, and ecological validity within a delivery system are also critical, as clinician-identified barriers (e.g., time limitations, lack of agency support) can impact willingness to implement novel treatments (e.g., Fritz et al., 2013). In exploratory analyses, the implementation potential of each support condition was also compared.
METHODS

Participants

Thirty-five (89% female) postdegree professional therapists (n = 15) or graduate trainees (n = 20) participated at two northeast U.S. university sites. Inclusion criteria were enrollment in, or graduation from, a master's or doctoral program in social work, counseling, or clinical or school psychology. Participants needed to be actively working in youth practice settings. Exclusion criteria were prior formal training in CBT for youth anxiety (i.e., practicum experience) and/or refusal to participate in the initial training workshop. Eighty-five participants responded to recruitment; 50 (59%) were excluded; and 35 were randomized to ES, PC, or FS (Figure 1). Of the 35, 69% self-identified as non-Hispanic Caucasian, 11% as African American, 17% as Asian American, and 2.9% as Latino/Hispanic. Forty-three percent had completed a higher education degree (31% master's, 9% Ph.D., 3% Ed.S.), and 57% were pursuing a higher degree (20% master's, 31% Ph.D., 3% Ed.S., 3% certificate of graduate study). Participants were employed in school districts (31%), community mental health centers (29%), youth agencies (11%), private practices (11%), university counseling centers (6%), hospitals (3%), or "other" (e.g., residential group home, consulting) (26%). Self-identified (nonexclusive) theoretical orientations included behavioral (14%), cognitive (6%), CBT (71%), dialectical behavior therapy (6%), psychodynamic (37%), and other (20%).

FIGURE 1 CONSORT figure. Note: CBT = cognitive-behavioral therapy.

Training and Extended Support Conditions

Initial workshop. Participants attended a group viewing of a 6½-hr online workshop of Wendy Silverman, Ph.D., teaching CBT for anxious youth (diagnostic assessment, key CBT interventions), produced by the American Psychological Association, Society for Clinical Child and Adolescent Psychology (http://www.effectivechildtherapy.com) with the Center for Children and Families at Florida International University and the Children's Trust.

FS. FS participants were e-mailed brief clinical resource sheets (one to two pages each) weekly and reviewed them at their own pace across a 10-week period. Fact sheets (available from the first author by request) reviewed (a) an overview of youth anxiety, (b) affective education and relaxation, (c) identifying anxious self-talk, (d) modifying anxious self-talk, (e) problem solving, (f) rewards, (g) self-evaluation, (h) fear hierarchies, (i) imaginal exposures, (j) in vivo exposures, (k) homework, (l) rapport building, (m) parent training, (n) assertiveness and social skills, and (o) relapse prevention.
ES. ES participants accessed a secured link to a weekly video of an expert (the first author, a licensed clinical psychologist with 13 years of postgraduate specialized CBT experience) providing deidentified supervision to on-site trainees on their youth anxiety cases. Participants viewed each supervision session from their own computers and did not interact with the expert or trainees. Each video was available online for 1 week.

PC. PC participants met in person weekly at study sites for 1-hr peer-led groups across a 10-week period to discuss CBT in relation to their current anxiety caseloads. To facilitate discussion, participants were provided the FS fact sheets and a prompt sheet. Study staff did not provide feedback or facilitate, but attendance was taken.
Measures

Background information. Demographic, professional discipline, theoretical orientation, work setting, and past training data were collected by self-report.
Training expectations. Expectations were assessed after condition assignment. Four items, based on existing treatment expectancy scales (Devilly & Borkovec, 2000), used a 0–8 scale with higher numbers indicating more positive expectations: (a) "Does your assigned training condition seem appropriate for its goals?" (b) "How successful do you think your assigned training condition will be in increasing your knowledge of CBT for youth anxiety?" (c) "How successful do you think your assigned training condition will be in enhancing your skills or ability to implement CBT for anxious youth?" and (d) "At this point, how willing would you be to recommend this training program to a colleague working with similar clients?" Items were summed for a total expectations score (α = .70).

Satisfaction, feasibility, and utility of extended support. Satisfaction used seven items to assess overall satisfaction with the condition (sample item: "I was satisfied with the amount of training I received"). Perceived feasibility used six items to assess practicality and perceived barriers and resources (sample item: "I had the resources [time, space, money, hardware] necessary to complete the training"). Utility used five Likert-type items and two open-ended questions to assess usefulness of the training for one's job and skills (sample item: "The training helped me better conceptualize my anxious child and adolescent cases"). Likert-type items were rated 1 to 5, with higher numbers representing greater satisfaction. Mean scores were computed for each scale (Satisfaction α = .95; Feasibility α = .80; Utility α = .93).
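The α coefficients reported throughout are Cronbach's alpha, the standard internal-consistency index for summed or averaged scales. A generic sketch of the computation (the response matrix below is hypothetical, not study data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses from five respondents on three items
responses = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(responses), 2))
```

Alpha rises when items covary strongly relative to their individual variances, which is why multi-item scales such as the seven-item Satisfaction scale (α = .95) can reach high values.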
Initial workshop satisfaction. Workshop satisfaction was assessed with eight self-report items, using a 1 (strongly disagree) to 5 (strongly agree) scale, across three domains: (a) content and learning (three items; e.g., "I acquired new knowledge or skills"), (b) the instructor/presenter (three items; e.g., "Presenter was free from bias or stereotyping"), and (c) overall satisfaction (two items; e.g., "The workshop was a valuable use of time"). Total and subscale scores were summed.
Trainee confidence in knowledge and skill. Two single items assessed participant perceptions of knowledge of, and skill in applying, CBT: ‘‘How successful was your overall training condition in increasing your knowledge of CBT for youth anxiety?’’ and ‘‘How successful was your overall training condition in enhancing your skills or ability to implement CBT for anxious youth?’’ Items were rated on a 0–8 scale, with higher numbers reflecting perceptions of greater knowledge and skill.
Knowledge test. Four versions of 20-item to 22-item multiple-choice tests were developed to assess knowledge across time. Questions were adapted from Society for Clinical Child and Adolescent Psychology continuing education exams, for example, ‘‘Which of the following is a rule for creating a hierarchy?’’ with response options (a) Ensure that the list contains both very easy and hard items, (b) Get details such as duration and frequency, (c) Ensure that the items on the hierarchy are broken down, and (d) All of the above.
Use of CBT skills. Adapted from prior therapist-reported CBT use measures (Kuriyan & Pelham, 2012), this measure asked respondents to estimate the proportion of anxious youth cases for which they had employed specific skills, including (a) relaxation and affective education, (b) identification and/or modification of anxious self-talk, (c) problem solving, (d) self-evaluation and reward, (e) in vivo exposure, (f) imaginal exposure, (g) fear hierarchies, (h) assertiveness and social practice, (i) parent training specific to youth anxiety, (j) general parent
TABLE 1
Mean (Standard Deviation), Minimum, Maximum, and Correlations Among Variables at Baseline (Postworkshop)

| Variable | M | SD | Min | Max | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Video Content | 4.04 | 0.52 | 2.00 | 5.00 | – | | | | | | | |
| 2. Video Instructor | 4.02 | 0.46 | 3.00 | 5.00 | .35 | – | | | | | | |
| 3. Video Overall Satisfaction | 3.79 | 0.80 | 2.00 | 5.00 | .67 | .35 | – | | | | | |
| 4. Video Total Satisfaction | 3.95 | 0.48 | 3.00 | 4.78 | .84 | .64 | .90 | – | | | | |
| 5. Knowledge Test | 14.96 | 2.63 | 7.00 | 20.00 | .01 | .07 | .06 | .05 | – | | | |
| 6. CBT Use | 1.27 | 0.64 | 0.00 | 3.00 | .08 | .04 | .04 | .02 | .05 | – | | |
| 7. IPS | 4.88 | 0.57 | 3.29 | 5.86 | .16 | .15 | .13 | .18 | .04 | .23 | – | |
| 8. Increased Knowledge | 4.53 | 2.00 | 1.00 | 8.00 | .17 | .004 | .09 | .11 | .40 | .24 | .14 | – |
| 9. Enhanced Skills | 4.20 | 1.73 | 0.00 | 8.00 | .01 | .13 | .01 | .05 | .33 | .16 | .19 | .74 |

Note: CBT = cognitive-behavioral therapy; IPS = Implementation Potential Scale. *p < .05. **p < .01.
training, (k) rapport building, (l) homework, and (m) relapse prevention. Responses were rated on a 4-point scale: 0 (none of my clients), 1 (fewer than half of my clients), 2 (over half but not all of my clients), or 3 (all of my clients). Item responses were averaged (α = .74).

Systems barriers and intention to implement. The Implementation Potential Scale (IPS; Forman, Fagley, Chu, & Walkup, 2012) is a 25-item measure (rated on a 1–6 scale) of treatment acceptability and perceived barriers to implementation of EBTs in school settings. Items were adapted for clinical settings (clinic, hospital, university), and the measure was shortened to the 14 items with the highest factor loadings on the original subscales. Four dimensions are assessed: acceptability/efficacy beliefs, organizational resources, administrator support, and implementation commitment. Mean IPS scores were computed to reflect the participant's perceived implementation potential for CBT in their current work setting (α = .91).
Procedures

E-mail advertisements were sent to listservs for relevant groups. After phone screening, eligible participants attended the initial training workshop. Participants completed a background questionnaire and knowledge test at preworkshop, and then completed the video satisfaction measure, knowledge test, use of CBT skills measure, and IPS at postworkshop. Participants were then randomly assigned, completed the training expectations questionnaire, and began the 10-week extended support program. At Weeks 5 and 10 (midpoint, postprogram), participants completed the IPS; knowledge test; use of CBT skills; training satisfaction, feasibility, and utility; and CBT knowledge and skill items. Participants received $30 for completing assessments. PC participants received $20 for attending 80% or more of the meetings.
Data Analysis

Data screening examined univariate normality; estimates of skewness and kurtosis were within expected limits. Scatter and Q-Q plots of predicted model residuals were used to evaluate multivariate normality and homoscedasticity; no case outliers were identified, and there was no evidence of heteroscedasticity. The only source of missing data was missed assessments (see Figure 1); there were no missing items. Missing data pattern analysis established that data were missing at random, and multiple imputation was conducted in SPSS 21 to replace missing values. Descriptives of major outcomes and their intercorrelations are reported in Table 1.
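The univariate screening step can be sketched as follows. This is an illustrative check only; the cutoffs and simulated scores are assumptions, not the study's data or criteria:

```python
import numpy as np
from scipy import stats

def screen_univariate(x, skew_limit=2.0, kurt_limit=7.0):
    """Flag a variable whose sample skewness or excess kurtosis exceeds
    conventional limits (the cutoffs here are illustrative defaults)."""
    skew = stats.skew(x)
    kurt = stats.kurtosis(x)  # Fisher definition: 0 for a normal distribution
    return {"skew": skew, "kurtosis": kurt,
            "ok": bool(abs(skew) <= skew_limit and abs(kurt) <= kurt_limit)}

rng = np.random.default_rng(42)
# Hypothetical knowledge-test scores for 35 trainees
knowledge_scores = rng.normal(loc=15.0, scale=2.6, size=35)
print(screen_univariate(knowledge_scores))
```

A variable flagged here would typically be transformed or modeled with a robust alternative before fitting the growth models described below in the Results.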
RESULTS

Initial Video Workshop Satisfaction and Impact on Knowledge

Participants agreed that workshop content was satisfactory (M = 12.1, SD = 1.6; possible range = 0–15), the presenter was appropriate (M = 12.1, SD = 1.4; possible range = 0–15), and the workshop was satisfactory overall (M = 31.7, SD = 3.7; possible range = 0–40). A paired t test found that CBT knowledge did not change from pre- to postworkshop, t(34) = 1.13, p = .27.
Randomization

Participants were randomized into three approximately equivalent groups (i.e., 12, 12, 11). However, three PC participants were re-randomized due to unwillingness to travel to weekly meetings. Resulting cell sizes were 13 (FS), nine (PC), and 13 (ES). Demographics, professional degree, work setting, theoretical orientation, knowledge test, and use of CBT skills did not differ across conditions at baseline (p range = .10–.89).
Preprogram Expectations for Extended Support Conditions

One-way analyses of variance found that participants' expectations differed across conditions in the condition's (a) appropriateness for the program's goals, F(2, 31) = 4.32, p = .02; (b) potential to enhance skills or ability to implement, F(2, 31) = 8.35, p = .001; and (c) overall potential efficacy, F(2, 31) = 4.07, p = .03. Post hoc analyses (Tukey Honestly Significant Difference) revealed that PC was rated as more appropriate for its goals relative to ES (M_PC − M_ES = .71, p = .02). PC was also perceived to have greater potential to increase skills and to be implemented than both FS (M_PC − M_FS = .71, p = .02) and ES (M_PC − M_ES = .87, p = .004). PC had more positive expectations overall relative to FS (M_PC − M_FS = 2.04, p = .06) and ES (M_PC − M_ES = 2.27, p = .03).
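This analytic sequence, an omnibus one-way ANOVA followed by Tukey HSD pairwise contrasts, can be reproduced with standard tools. A hedged sketch on simulated ratings (group means, spreads, and the seed are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical 0-8 expectation ratings for the three assigned conditions
pc = rng.normal(7.0, 1.0, 9)    # peer consultation, n = 9
es = rng.normal(4.5, 1.0, 13)   # expert streaming, n = 13
fs = rng.normal(5.0, 1.0, 13)   # fact sheet self-study, n = 13

# Omnibus test across the three groups
F, p = f_oneway(pc, es, fs)

# Tukey HSD post hoc pairwise comparisons
scores = np.concatenate([pc, es, fs])
labels = ["PC"] * 9 + ["ES"] * 13 + ["FS"] * 13
tukey = pairwise_tukeyhsd(scores, labels, alpha=0.05)
print(F, p)
print(tukey.summary())
```

Tukey HSD adjusts the familywise error rate across the three pairwise contrasts, which is why it follows, rather than replaces, the omnibus F test.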
Postprogram Satisfaction, Utility, and Feasibility Across Conditions

One-way analyses of variance showed no significant postprogram differences among conditions for satisfaction and utility (Table 2). Training feasibility differed by condition, F(2, 29) = 3.43, p = .05, and post hoc analyses suggested that PC was less feasible than FS at the trend level, M_PC = 3.54 (SD = .89) versus M_FS = 4.35 (SD = .67), p = .07. Weekly survey completion (indicating general participation/dose) rates were high (82% ES, 86% PC, 87% FS) and comparable across conditions, F(2, 31) = .37, p = .70. Average weekly group attendance in PC was 69% (range = 25%–100%).

TABLE 2
Feasibility, Utility, and Satisfaction Ratings of Treatment Conditions and Overall Training

| Measure | Fact Sheet | Peer Consultation | Expert Streaming | All Conditions |
|---|---|---|---|---|
| Midpoint, M (SD) | | | | |
| Feasibility | 4.58 (.62) | 3.70 (1.09) | 3.71 (.84) | 4.02 (.93) |
| Utility | 4.27 (.57) | 4.07 (1.02) | 3.65 (1.10) | 3.99 (.93) |
| Satisfaction | 3.92 (.69) | 3.40 (1.28) | 3.52 (1.18) | 3.63 (1.05) |
| Post, M (SD) | | | | |
| Feasibility | 4.35 (.67)a | 3.54 (.89)a | 3.67 (.83) | 3.91 (.84) |
| Utility | 4.34 (.68) | 3.85 (1.13) | 3.73 (.78) | 4.00 (.94) |
| Satisfaction | 3.78 (.82) | 3.43 (1.16) | 3.46 (1.02) | 3.56 (.97) |

a Peer consultation was rated as less feasible than fact sheet self-study at the trend level (p = .07) after a significant omnibus analysis of variance (p = .05).
Knowledge, Skill, and Implementation Outcomes

Mixed effects regression models using full maximum likelihood estimation predicted mean values of each training outcome across time (pre-, mid-, postprogram) using Hierarchical Linear Modeling 6.08 (preprogram refers to the postworkshop baseline). Repeated observations of each outcome (Level 1) were nested within person (Level 2).¹ Time and Condition × Time interactions were evaluated as Level 1 random effects, where PC and ES were compared to FS as the control condition.² A separate parallel model also directly compared the PC and ES conditions (dummy coded 0/1). Effect sizes are presented as pseudo-R² statistics and ds (Table 3).³
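The article fit these models in HLM 6.08; an equivalent two-level growth model can be sketched in Python with statsmodels on simulated data. All names, group sizes, and effect values below are illustrative assumptions, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
conditions = ["FS", "PC", "ES"]
rows = []
for i in range(60):  # more trainees than the study's 35, for a stable fit
    cond = conditions[i % 3]
    b0 = rng.normal(1.0, 0.3)        # person-specific intercept
    b1 = 0.19 + rng.normal(0, 0.05)  # person-specific slope near the reported .19
    for t in range(3):               # pre, mid, post coded 0, 1, 2
        rows.append({"pid": i, "time": t, "cond": cond,
                     "cbt_use": b0 + b1 * t + rng.normal(0, 0.1)})
df = pd.DataFrame(rows)

# Two-level growth model: repeated measures nested in person,
# random intercept and slope, with FS as the reference condition
model = smf.mixedlm("cbt_use ~ time * C(cond, Treatment('FS'))",
                    df, groups=df["pid"], re_formula="~time")
result = model.fit(method="lbfgs", reml=False)
print(result.params["time"])  # linear slope for the FS reference group
```

The `time` coefficient is the FS growth rate, and the `time:C(cond, ...)` terms are the Condition × Time interactions that the article tests against FS.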
Knowledge Test

Mixed effects modeling comparing all three conditions indicated a linear decrease in Knowledge Test scores from pre- to postprogram, B10 = −1.23 (.43), t = −2.85, p = .01, d = 1.42, but neither Condition × Time interaction for PC or ES was significant compared to FS. Pseudo-R² statistics suggested that 9.1% of linear change was accounted for by the support conditions, compared to the unconditional growth model. The model directly comparing ES and PC found a similar linear decrease across groups and no significant condition effect.
Self-Reported Knowledge

Mixed effects modeling indicated a significant negative linear slope in participant-reported beliefs that the program increased knowledge, B10 = −1.50 (.29), t = −5.11, p < .001, d = 1.75. Neither Condition × Time interaction for PC or ES, compared to FS, was significant. Pseudo-R² statistics suggested that 0% of linear change was accounted for by the support conditions. Direct comparisons of ES and PC found nonsignificant differences.

Self-Reported Skill

Mixed effects modeling indicated a significant negative linear slope in participant-reported belief that extended support enhanced skill, B10 = −1.15 (.15), t = −7.79, p < .001, d = 2.10. A significant Condition × Time interaction was identified for PC compared to FS, t = −2.73, p = .01, d = 1.48, but not for ES. Slopes indicate a steeper descent for PC compared to FS. Pseudo-R² statistics suggested that 76.7% of linear change was accounted for by the support conditions. The model directly comparing ES and PC found a similar linear decrease across groups and a nonsignificant condition effect, B = 0.34 (.18), t = 1.84, p < .10, d = 0.62.

TABLE 3
Means (Standard Deviations) of Training Outcomes at Pre-, Mid-, and Postconsultation Program

| Outcome / Condition | Preprogram | Mid-program | Postprogram | d (vs. FS) | d (vs. PC) |
|---|---|---|---|---|---|
| Knowledge Test | | | | | |
| Fact Sheet | 16.85 (1.21) | 13.64 (2.38) | 14.38 (2.66) | — | — |
| Peer Consultation | 17.33 (1.66) | 14.33 (1.80) | 14.00 (1.83) | 0.81 | — |
| Expert Streaming | 16.08 (2.18) | 14.45 (2.58) | 13.73 (2.05) | 0.33 | 0.55 |
| Increased Knowledge | | | | | |
| Fact Sheet | 6.69 (1.97) | 3.50 (0.71) | 3.69 (0.86) | — | — |
| Peer Consultation | 7.25 (1.67) | 3.25 (1.04) | 3.13 (0.64) | 0.49 | — |
| Expert Streaming | 6.54 (1.45) | 3.27 (1.01) | 3.55 (0.93) | 0.02 | 0.24 |
| Enhanced Skill | | | | | |
| Fact Sheet | 5.62 (1.26) | 3.10 (0.32) | 3.31 (0.75) | — | — |
| Peer Consultation | 7.50 (0.93) | 3.63 (1.19) | 3.25 (1.04) | 1.48 | — |
| Expert Streaming | 5.77 (1.01) | 3.27 (0.91) | 3.27 (1.10) | 0.15 | 0.62 |
| Use of CBT Strategies | | | | | |
| Fact Sheet | 1.13 (0.46) | 1.18 (0.68) | 1.32 (0.89) | — | — |
| Peer Consultation | 0.97 (0.63) | 1.39 (0.56) | 1.46 (1.00) | 0.61 | — |
| Expert Streaming | 1.05 (0.41) | 0.76 (0.66) | 0.97 (0.92) | 0.24 | 0.16 |
| Implementation Potential | | | | | |
| Fact Sheet | 4.94 (0.45) | 5.10 (0.46) | 5.05 (0.54) | — | — |
| Peer Consultation | 4.94 (0.42) | 5.12 (0.67) | 5.05 (0.64) | 0.03 | — |
| Expert Streaming | 4.68 (0.42) | 4.51 (0.66) | 4.60 (0.65) | 2.09 | 1.11 |

Note: Both ds are calculated according to Feingold (2009), and absolute values are interpreted as small (.2), medium (.5), and large (.8). d (vs. FS) = effect size of the mixed effects parameter comparing peer consultation (PC) or expert streaming (ES) to fact sheet self-study (FS) at post. d (vs. PC) = effect size of the mixed effects parameter comparing ES to PC at post. CBT = cognitive-behavioral therapy.

¹ Support condition, project site, degree status (pursuing/completed degree), and participant sex were evaluated as Level 2 fixed effects. Other than support condition, none were significant predictors in any models, and none contributed significant variance. Thus, we present parameter estimates for final models that included only support condition.
² All linear models (with the intercept centered at postprogram) were compared both with and without Level 2 predictors against unconditional models. Except where noted, full linear models with Level 2 predictors were superior to unconditional models, using the deviance difference, the Akaike Information Criterion, and the Bayesian Information Criterion to evaluate model fit. As an example, the final model predicting CBT Use was: CBT Use_ti = b00 + b01(Peer Consultation)_i + b02(Expert Streaming)_i + b10(time)_ti + b11(time × Peer Consultation)_ti + b12(time × Expert Streaming)_ti + [r0i + r1i(time)_ti + e_ti].
³ The overall variance accounted for by model parameters is presented using pseudo-R² statistics, (σ²_linear − σ²_final)/σ²_linear, which compare the change in variance from the unconditional linear model to the final model (Singer & Willett, 2003). Individual parameter ds were computed using Feingold (2009) standards and are reported in Table 3.
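The pseudo-R² statistic of Footnote 3 and the Feingold (2009) d reduce to simple formulas. A minimal sketch; the numeric inputs are illustrative assumptions, since the study does not report its raw variance components:

```python
def pseudo_r2(var_unconditional, var_final):
    """Proportional reduction in slope variance from the unconditional
    linear growth model to the final model (Singer & Willett, 2003)."""
    return (var_unconditional - var_final) / var_unconditional

def feingold_d(slope, duration, sd_raw):
    """Growth-model effect size per Feingold (2009): model-implied change
    across the study (slope x duration) divided by the raw-score SD."""
    return (slope * duration) / sd_raw

# Illustrative inputs only
print(pseudo_r2(1.0, 0.909))    # ~0.091, i.e., 9.1% of linear change explained
print(feingold_d(2.0, 3, 4.0))  # 1.5
```

Because d scales the model-implied change by a raw-score standard deviation rather than a standard error, it can exceed conventional benchmarks even when group differences are modest, which helps interpret the large ds in Table 3.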
Implementation Potential

For IPS, the linear model was not supported compared to the unconditional means model (ΔDev = 6.52, p = .09). Instead, two means models (no linear trend) were computed comparing mean IPS across the three conditions. A significant main effect was found comparing ES to FS, B = .45 (.18), t = 2.51, p = .02, d = 2.09, but not comparing PC and FS. Mean IPS was also significantly different between PC and ES, B = .24 (.10), t = 2.46, p = .02, d = 1.11. Means indicate lower IPS for ES (M = 4.60) than for FS (M = 5.05) and PC (M = 5.04).

Use of CBT Strategies

Mixed effects modeling indicated a significant positive linear slope in mean CBT strategy use, B10 = .19 (.08), t = 2.46, p = .02, d = 0.77, but neither the Condition × Time interaction for PC nor that for ES was significant compared to FS. Pseudo-R² statistics suggested that 0% of linear change was accounted for by the support conditions. Figure 2 illustrates the relatively parallel increase in CBT use across conditions. Direct comparisons of ES and PC found nonsignificant differences.

FIGURE 2 Self-reported increased use of cognitive-behavioral therapy strategies in clinician practice across support conditions at pre-, mid-, and postprogram. Note: ES = expert streaming; PC = peer consultation; FS = fact sheet self-study.
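The IPS model selection above rests on a deviance (likelihood-ratio) comparison: the change in deviance between nested models is referred to a chi-square distribution with degrees of freedom equal to the number of parameters added. A stdlib-only sketch, assuming for illustration that the linear-growth model added three parameters (an assumption; with df = 3, a deviance change of 6.52 yields p ≈ .09, matching the reported value):

```python
import math

def chi2_sf_df3(x):
    """Survival function of the chi-square distribution with 3 df,
    using the closed form available for odd degrees of freedom:
    P(X > x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2)."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

# A deviance difference of 6.52 on 3 df gives p just under .10, so
# the linear-trend model is not a significant improvement over the
# unconditional means model at alpha = .05.
print(round(chi2_sf_df3(6.52), 3))
```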
DISCUSSION

Easily disseminable, lower cost training and support options are increasingly needed (Fritz et al., 2013; McHugh & Barlow, 2010). This study piloted two extended support models and a self-study control, each requiring minimal resources after an initial web-based workshop. Participants were satisfied with the initial workshop, but workshop completion did not have immediate effects on CBT knowledge. Initial expectations differed across support conditions: participants viewed PC as more appropriate for its goals, more likely to increase their CBT skills, and as promising greater overall efficacy than either FS or ES. After program completion, participants found the conditions equally satisfactory and useful and reported comparable increases in self-reported use of CBT strategies in their practices. However, PC was rated as less feasible than FS at the trend level, and ES was associated with the lowest ratings of implementation potential. Attrition rates were generally low, but two participants initially assigned to PC refused their condition, and week-to-week PC attendance ranged widely (25%–100%; M = 69%), raising concerns about feasibility. Encouragingly, self-reported use of CBT increased across all conditions. Improved adherence and competence have been observed in experimental settings after training (e.g., Beidas et al., 2012; Dimeff et al., 2009; Miller et al., 2004), but such gains have not been found to generalize to clinical practice. Further, FS (designed to control for participant engagement) produced equivalent increases in CBT use, suggesting that sustained engagement with relevant content over time may promote important practice changes. Outcomes were self-reported and deserve independent confirmation: results suggest therapists became more mindful of perceived CBT use, but this may not reflect actual CBT delivery.
Inconsistent with previous studies (e.g., Beidas et al., 2012; Sholomskas et al., 2005), knowledge test scores declined across time in all conditions, potentially because of ceiling effects (very high scores across time) or because the knowledge tests over-relied on factual content from the workshop, which may degrade more quickly than knowledge of general principles (e.g., Dimeff et al., 2009; Sholomskas et al., 2005). Results underscore the insufficiency of one-time didactics for lasting dissemination (Comer & Barlow, 2014; McHugh & Barlow, 2010). Likewise, therapist perceptions of CBT knowledge and skills declined over time, and declined at a greater rate for PC than FS. These items assessed perceived change in knowledge and skills and the extent
to which the assigned condition contributed; thus, perception of knowledge/skills may have been confounded with satisfaction with the support condition. Alternatively, therapists may have lost confidence as they became aware of the intricacies of implementing CBT (i.e., therapists realized what they previously did not know). It is notable that CBT use increased while knowledge scores and confidence decreased. Although preliminary, these findings suggest that behavior change may not depend entirely on provider confidence and knowledge (e.g., Beidas et al., 2012; Herschell et al., 2010; Sholomskas et al., 2005). Future research should examine these related constructs, but it is critical to remember that increasing targeted intervention use and, subsequently, improving downstream youth clinical outcomes are the key outcomes. Beliefs about implementation potential did not change over time, but ES participants rated the implementation potential for CBT lower than participants in PC or FS. We expected greater expert exposure to enhance views of implementation potential; this did not occur. First, ES was the only condition that did not include fact sheets, requiring participants to extract their own lessons from the modeled supervision, a more active learning process that could make CBT appear difficult to implement. Second, the complex demonstration cases in ES may have been too advanced and intimidating for novice learners. Future models should consider tiered programs that progress from simpler to more challenging cases, with special attention paid to challenging strategies like exposures. Third, the group supervision format (chosen to provide a range of cases) may introduce information irrelevant to trainees' caseloads, impacting perceived feasibility. Finally, ES was the only condition to require some technological literacy. Several limitations exist.
First, the small sample limited statistical power; linear trends were nonetheless found across outcomes, and effect sizes (d) indicated medium to large effects even where comparisons were statistically nonsignificant. A larger trial is needed to confirm the reliability and generalizability of effects and may help identify condition differences. Second, this is one of very few randomized trials in the training/support literature, but we did rerandomize two participants, constraining causal interpretability. Future trials might improve PC feasibility with video-conferenced meetings. Third, some ES participants reported technical difficulties (e.g., poor sound quality, trouble accessing video); further pilot testing is required to disentangle attitudes toward the technology from attitudes toward the ES model itself. Fourth, to ensure equivalent starting points across trainees, participants were required to attend in-person viewings of the initial workshop, potentially impeding recruitment. Fifth, self-reported CBT implementation deserves validation with objective data, although research suggests therapists can offer relatively valid accounts of treatment behavior (e.g., Chapman, McCart, Letourneau, & Sheidow, 2013; Hogue, Dauber, Henderson, & Liddle, 2014). Sixth, training dose requires greater examination, as it was impossible to determine what proportion of ES and FS materials was reviewed. Finally, costs (financial, time, resources) must be concretely evaluated. Limitations notwithstanding, the study demonstrated the potential of resource-efficient, scalable support methods to improve key outcomes, especially increased targeted intervention use. Needed now are further development of support conditions, improved assessment measures, and a study with sufficient power to determine the relative efficacy of each training method.
ACKNOWLEDGMENTS We acknowledge the Society for Clinical Child and Adolescent Psychology, FIU Center for Children and Families, and The Children’s Trust of Miami-Dade for producing the online video workshop used for initial training in this study. We also acknowledge Jennifer Green, Ph.D., and Kimberly Howard, Ph.D., at the School of Education of Boston University; Doug Behan, LCSW, at the School of Social Work of Rutgers University; and Caroline Clauss-Ehlers at the Graduate School of Education of Rutgers University for their help in recruiting participants and their support in the conceptualization of the project. We also thank Deejay Robinson for his volunteer work.
FUNDING Portions of this work were supported by NIH K23 MH090247 awarded to Jonathan Comer and the Clara Mayo Memorial Fellowship awarded to Aubrey Carpenter.
REFERENCES

Aarons, G. A., & Sommerfeld, D. H. (2012). Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. Journal of the American Academy of Child & Adolescent Psychiatry, 51, 423–431. doi:10.1016/j.jaac.2012.01.018

Ajzen, I., & Madden, T. J. (1986). Prediction of goal-directed behavior: Attitudes, intentions, and perceived behavioral control. Journal of Experimental Social Psychology, 22, 453–474. doi:10.1016/0022-1031(86)90045-4
Bandura, A. (2004). Health promotion by social cognitive means. Health Education & Behavior, 31, 143–164. doi:10.1177/1090198104263660

Beidas, R. S., Edmunds, J. M., Marcus, S. C., & Kendall, P. C. (2012). Training and consultation to promote implementation of an empirically supported treatment: A randomized trial. Psychiatric Services, 63, 660–665. doi:10.1176/appi.ps.201100401

Chapman, J. E., McCart, M. R., Letourneau, E. J., & Sheidow, A. J. (2013). Comparison of youth, caregiver, therapist, trained, and treatment expert raters of therapist adherence to a substance abuse treatment protocol. Journal of Consulting and Clinical Psychology, 81, 674–680. doi:10.1037/a0033021

Chu, B. C., Crocco, S. T., Arnold, C. C., Brown, R., Southam-Gerow, M. A., & Weisz, J. R. (2015). Sustained implementation of cognitive-behavioral therapy for youth anxiety and depression: Long-term effects of structured training and consultation on therapist practice in the field. Professional Psychology: Research and Practice, 46, 70–79. doi:10.1037/a0038000

Comer, J. S., & Barlow, D. H. (2014). The occasional case against broad dissemination and implementation: Retaining a role for specialty care in the delivery of psychological treatments. American Psychologist, 69, 1–18. doi:10.1037/a0033582

Cross, W., Matthieu, M., Cerel, J., & Knox, K. (2007). Proximate outcomes of gatekeeper training for suicide prevention in the workplace. Suicide and Life-Threatening Behavior, 37, 659–670. doi:10.1521/suli.2007.37.6.659

Darling-Hammond, L., & Richardson, N. (2009). Teacher learning: What matters? How Teachers Learn, 66, 46–53.

Devilly, G. J., & Borkovec, T. D. (2000). Psychometric properties of the credibility/expectancy questionnaire. Journal of Behavior Therapy and Experimental Psychiatry, 31, 73–86. doi:10.1016/S0005-7916(00)00012-4

Dimeff, L. A., Koerner, K. K., Woodcock, E. A., Beadnell, B., Brown, M. Z., Skutch, J. M., . . . Harned, M. S. (2009). Which training method works best? A randomized controlled trial comparing three methods of training clinicians in dialectical behavior therapy skills. Behaviour Research and Therapy, 47, 921–930. doi:10.1016/j.brat.2009.07.011

Feingold, A. (2009). Effect sizes for growth-modeling analysis for controlled clinical trials in the same metric as for classical analysis. Psychological Methods, 14, 43–53. doi:10.1037/a0014699

Forman, S. G., Fagley, N. S., Chu, B. C., & Walkup, J. T. (2012). Factors influencing school psychologists' "willingness to implement" evidence-based interventions. School Mental Health, 4, 207–218. doi:10.1007/s12310-012-9083-z

Fritz, R. M., Tempel, A. B., Sigel, B. A., Conners-Burrow, N. A., Worley, K. B., & Kramer, T. L. (2013). Improving the dissemination of evidence-based treatments: Facilitators and barriers to participating in case consultation. Professional Psychology: Research and Practice, 44, 225–230. doi:10.1037/a0033102

Green, A. E., & Aarons, G. A. (2011). A comparison of policy and direct practice stakeholder perceptions of factors affecting evidence-based practice implementation using concept mapping. Implementation Science, 6, 104. doi:10.1186/1748-5908-6-104

Herschell, A. D., Kolko, D. J., Baumann, B. L., & Davis, A. C. (2010). The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review, 30, 448–466. doi:10.1016/j.cpr.2010.02.005

Hogue, A., Dauber, S., Henderson, C. E., & Liddle, H. A. (2014). Reliability of therapist self-report on treatment targets and focus in family-based intervention. Administration and Policy in Mental Health and Mental Health Services Research, 41, 697–705. doi:10.1007/s10488-013-0520-6
Kuriyan, A., & Pelham, W. (2012, November). Motivation and implementation of skills after a conference on evidence based treatments for youth mental health. Poster presented at the annual meeting of the Association for Behavioral and Cognitive Therapies, Washington, DC.

Leffler, J. M., Jackson, Y., West, A. E., McCarty, C. A., & Atkins, M. S. (2013). Training in evidence-based practice across the professional continuum. Professional Psychology: Research and Practice, 44, 20–28. doi:10.1037/a0029241

MacPherson, H. A., Leffler, J. M., & Fristad, M. A. (2014). Implementation of multi-family psychoeducational psychotherapy for childhood mood disorders in an outpatient community setting. Journal of Marital and Family Therapy, 40, 193–211. doi:10.1111/jmft.12013

McHugh, R. K., & Barlow, D. H. (2010). The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. American Psychologist, 65, 73–84. doi:10.1037/a0018121
Miller, W. R., Yahne, C. E., Moyers, T. B., Martinez, J., & Pirritano, M. (2004). A randomized trial of methods to help clinicians learn motivational interviewing. Journal of Consulting and Clinical Psychology, 72, 1050–1062. doi:10.1037/0022-006X.72.6.1050

Sholomskas, D. E., Syracuse-Siewert, G., Rounsaville, B. J., Ball, S. A., Nuro, K. F., & Carroll, K. M. (2005). We don't train in vain: A dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology, 73, 106–115. doi:10.1037/0022-006X.73.1.106

Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York, NY: Oxford University Press.

Weisz, J. R., & Kazdin, A. E. (Eds.). (2010). Evidence-based psychotherapies for children and adolescents (2nd ed.). New York, NY: Guilford.