Adding Comments and Notes to Your PDF

To facilitate electronic transmittal of corrections, we encourage authors to use the comment and annotation features in Adobe Acrobat. The PDF provided has been comment-enabled, which allows you to use the comment and annotation features even if you are using only the free Adobe Acrobat Reader (see the note below regarding acceptable versions). Adobe Acrobat’s Help menu provides additional details on the tools. When you open your PDF, the annotation tools are shown on the toolbar (icons may differ slightly among Acrobat versions). For purposes of correcting the PDF proof of your journal article, the important features to know are the following:

• To insert text, place your cursor at a point in the text, select the Insert Text tool from the menu bar, and type your additional text in the pop-up box.

• To replace text, highlight the text to be changed, select the Replace Text tool from the menu bar, and type the new text in the pop-up box. Do this instead of deleting and then reinserting.

• To delete text, highlight the text to be deleted and press the Delete key on the keyboard.

• Use the Sticky Note tool to describe changes that need to be made (e.g., changes in bold, italics, or capitalization use; altering or replacing a figure) or to answer a question or approve a change from the editor. To use this feature, click the Sticky Note tool in the menu bar, click a point in the PDF where you would like to make a comment, and type your comment in the pop-up box.

• Use the Callout tool to point directly to changes that need to be made. Try to place the callout box in an area of white space so that you do not obscure the text.

• Use the Highlight and Add Note to Text tool to indicate font problems, bad breaks, and other textual inconsistencies. Select the text to be changed, choose this tool, and type your comment in the pop-up box. One note can describe many changes.

• To view a list of changes to the proof or to see a more comprehensive set of annotation tools, select Comment from the menu bar.
As with hand-annotated proof corrections, the important points are to communicate changes clearly and thoroughly, to answer all queries and questions, and to provide complete information to allow us to make the necessary changes to your article so it is ready for publication. Do not use tools that incorporate changes to the text in such a way that no indication of a change is visible. Such changes will not be incorporated into the final proof. To use the comment features on this PDF, you will need Adobe Reader version 7 or higher. This program is freely available and can be downloaded from http://get.adobe.com/reader/
REPRINTS
Authors have two options for ordering reprints: Authors who need to use purchase orders may order reprints from the standard Purchase Order Service. For a substantially lower price, authors may use the Prepaid Service. The rate schedules on page 3 give the prices for each service. All international prices include shipping via WWDS. All domestic shipments will be made via UPS. We request that you do not use Post Office box numbers; provide a full street address if possible. Authors may request expedited shipping; the additional cost will be billed. You are required to pay all duties that apply.

• Prepaid Service: To take advantage of this lower-price option, submit your credit card information with your order or enclose a money order, certified check, or personal check. The prices given on page 3 include postage.

• Purchase Order Service: Reprint orders that are not prepaid must be accompanied by a purchase order. Cadmus Reprints will bill you later for the cost of the reprints. Do not send remittance with the reprint order form and purchase order. Remember that the price you see on page 3 includes postage, so it is the exact amount you will be billed. (Exception: Authors requesting expedited shipping will be billed the additional cost.)

Complete the order form on the next page and return it to Cadmus Reprints (not to APA). Only one order form is provided; include your coauthors’ orders on this form or make photocopies of the order form for your coauthors. Check the box for either the prepaid service or the purchase order service. Give explicit instructions for all authors receiving reprints, using the space at the bottom of the order form.
To determine the cost of the reprints, count the number of pages in the printed article and refer to the rate schedules on page 3. For example, if your article is 11 pages long, you want 100 reprints, you live in the United States, and you are using the prepaid service, your total cost would be $252. If your proof includes a page of queries following the text, do not include the query page in your article page count. Send the order form to Cadmus Reprints when you return the page proofs. Reprints will be mailed within two weeks of the publication of the journal. Orders received after the issue goes to press will be processed with the next issue.
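If you prefer to script this lookup, the following sketch is illustrative only and is not part of the order form: it encodes just the first three page brackets of the domestic, black-and-white, prepaid column from the page 3 schedule (postage included), and the function and variable names are ours.

    # Illustrative only: domestic, black-and-white, prepaid reprint rates (USD)
    # from the 2013 schedule on page 3; postage is included. Only the first
    # three page brackets are shown here.
    PREPAID_DOMESTIC_BW = {
        (1, 4):  {50: 110, 100: 150, 200: 170, 300: 232, 400: 296, 500: 360},
        (5, 8):  {50: 144, 100: 194, 200: 294, 300: 390, 400: 490, 500: 592},
        (9, 12): {50: 186, 100: 252, 200: 400, 300: 556, 400: 714, 500: 876},
        # ...remaining brackets (13-16 through 45-48, plus covers) omitted; see page 3.
    }

    def reprint_cost(article_pages: int, quantity: int) -> int:
        """Return the prepaid domestic black-and-white price for a given order."""
        for (low, high), prices in PREPAID_DOMESTIC_BW.items():
            if low <= article_pages <= high:
                return prices[quantity]
        raise ValueError("page count not covered by the brackets encoded above")

    print(reprint_cost(11, 100))  # 11-page article, 100 reprints -> 252

For color reprints, international shipping, or the Purchase Order Service, the other columns of the rate schedules would apply instead.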
Where to Order Reprints

Send your order form with credit card information, money order, certified check, personal check, or purchase order in the amount indicated on the rate schedule to:

Cadmus Reprints
P.O. Box 822942
Philadelphia, PA 19182-2942
Phone: (410) 943-0629
FAX: (877) 705-1373

Personal checks must clear before reprints are processed. There is a $30.00 charge for returned checks.
2013 REPRINT ORDER FORM

APA Journal Authors: To order reprints, complete all sections of this form. Please read the instructions on page 1.

Training and Education in Professional Psychology
2013-1076
3855564

SEND the reprint order and (1) credit card number, or (2) money order/certified check, or (3) approved purchase order, or (4) check to:

Cadmus Reprints
P.O. Box 822942
Philadelphia, PA 19182-2942

BILLING NAME: _____________________________   DATE: ______________
ORGANIZATION: _____________________________   PHONE #: ______________
ADDRESS (no P.O. Box): ______________________________________________
FAX #: ______________   E-MAIL: ______________________________________
CITY: ____________________   STATE: ________   ZIP CODE: _____________

ARTICLE TITLE: _____________________________________________________
AUTHOR: ____________________________
# OF TOTAL ARTICLE PAGES: ______
# OF REPRINTS: ______   # OF COVERS: ______
REPRINTS INCLUDE COLOR?   YES   NO

COMPUTE COST OF ORDER
  PRICE (per chart)               $________
  Add'l for Covers                $________
  SUBTOTAL                        $________
  PA residents add 6% tax         $________
  Add'l for Expedited Shipping    $________
  TOTAL                           $________

PAYMENT METHOD: CHECK ONE:
  ___ CREDIT CARD    ______ VISA    ______ MASTERCARD
      CARD NUMBER ________________________
      EXPIRATION DATE _____________________
      SIGNATURE ___________________________
  ___ CHECK (shipment delayed until check clears)
  ___ MONEY ORDER/CERT. CHECK (make payable to Cadmus Reprints)
  ___ APPROVED PURCHASE ORDER (original PO must be attached)

COMPLETE SHIPPING LABEL below. No P.O. Boxes. Include phone number on international shipments. International shipments can be made via AIR for additional charges; please indicate in Special Shipping Instructions if you desire this expedited service.

(TYPE OR PRINT MAILING LABEL BELOW)
SHIP TO:
  Name _________________________________________________
  Address ________________________________________________
  _______________________________________________________
  City ____________________   State __________   Zip _____________
  Phone No. ____________________
  Expedited Service (enter service required): _____________________
RATES EFFECTIVE WITH 2013 ISSUES

Black and White Reprint Prices, Prepaid — Domestic (USA only)

  # of Pages     50       100      200      300      400      500
  1-4            $110     $150     $170     $232     $296     $360
  5-8            $144     $194     $294     $390     $490     $592
  9-12           $186     $252     $400     $556     $714     $876
  13-16          $214     $314     $512     $714     $918     $1,116
  17-20          $252     $374     $618     $880     $1,128   $1,384
  21-24          $292     $428     $732     $1,038   $1,336   $1,646
  25-28          $330     $496     $848     $1,196   $1,594   $1,980
  29-32          $368     $558     $958     $1,370   $1,794   $2,212
  33-36          $406     $620     $1,076   $1,548   $1,994   $2,440
  37-40          $438     $686     $1,190   $1,722   $2,194   $2,670
  41-44          $486     $746     $1,308   $1,898   $2,394   $2,898
  45-48          $520     $814     $1,424   $2,072   $2,592   $3,128
  Covers         $136     $152     $202     $252     $310     $364

Color Reprint Prices, Prepaid — Domestic (USA only)

  # of Pages     50       100      200      300      400      500
  1-4            $240     $410     $580     $780     $906     $992
  5-8            $378     $496     $742     $1,088   $1,318   $1,542
  9-12           $418     $550     $858     $1,264   $1,662   $1,968
  13-16          $452     $594     $980     $1,438   $1,900   $2,356
  17-20          $496     $642     $1,108   $1,624   $2,136   $2,752
  21-24          $542     $692     $1,238   $1,806   $2,376   $3,160
  25-28          $708     $1,032   $1,752   $2,514   $3,222   $4,080
  29-32          $878     $1,366   $2,252   $3,222   $4,060   $5,018
  33-36          $1,046   $1,700   $2,760   $3,932   $4,900   $5,954
  37-40          $1,212   $2,036   $3,264   $4,642   $5,740   $6,890
  41-44          $1,382   $2,368   $3,772   $5,350   $6,578   $7,826
  45-48          $1,548   $2,708   $4,278   $6,060   $7,418   $8,762
  Covers         $136     $152     $202     $252     $310     $364

Black and White Reprint Prices, Prepaid — International (includes Canada and Mexico)

  # of Pages     50       100      200      300      400      500
  1-4            $166     $200     $248     $354     $450     $556
  5-8            $230     $272     $446     $610     $780     $952
  9-12           $310     $372     $618     $886     $1,134   $1,402
  13-16          $370     $468     $802     $1,136   $1,474   $1,806
  17-20          $450     $570     $976     $1,408   $1,818   $2,234
  21-24          $516     $652     $1,156   $1,660   $2,160   $2,664
  25-28          $600     $758     $1,330   $1,912   $2,544   $3,160
  29-32          $664     $848     $1,516   $2,194   $2,878   $3,558
  33-36          $728     $940     $1,702   $2,474   $3,184   $3,956
  37-40          $794     $1,032   $1,864   $2,752   $3,514   $4,352
  41-44          $870     $1,124   $2,050   $3,030   $3,842   $4,750
  45-48          $934     $1,216   $2,236   $3,308   $5,522   $5,146
  Covers         $196     $210     $300     $412     $522     $630

Color Reprint Prices, Prepaid — International (includes Canada and Mexico)

  # of Pages     50       100      200      300      400      500
  1-4            $296     $462     $658     $900     $1,060   $1,188
  5-8            $464     $574     $896     $1,306   $1,606   $1,900
  9-12           $542     $670     $1,078   $1,594   $2,082   $2,496
  13-16          $610     $750     $1,270   $1,860   $2,458   $3,046
  17-20          $694     $838     $1,466   $2,152   $2,826   $3,604
  21-24          $768     $916     $1,660   $2,430   $3,200   $4,176
  25-28          $980     $1,294   $2,234   $3,228   $4,172   $5,260
  29-32          $1,174   $1,656   $2,808   $4,048   $5,144   $6,364
  33-36          $1,370   $2,018   $3,386   $4,860   $6,090   $7,470
  37-40          $1,568   $2,382   $3,938   $5,672   $7,058   $8,574
  41-44          $1,764   $2,746   $4,514   $6,484   $8,028   $9,678
  45-48          $1,962   $3,110   $5,090   $7,296   $10,346  $10,782
  Covers         $196     $210     $300     $412     $522     $630

Additional Rates
  Set title page, each    $16.00
  Each extra mailing      $32.00
  Remake pages, each      $50.00
RATES EFFECTIVE WITH 2013 ISSUES

Black and White Reprint Prices, Purchase Order — Domestic (USA only)

  # of Pages     50       100      200      300      400      500
  1-4            $120     $162     $184     $252     $322     $390
  5-8            $156     $210     $318     $422     $532     $644
  9-12           $202     $274     $434     $604     $774     $950
  13-16          $232     $340     $556     $774     $996     $1,210
  17-20          $274     $406     $670     $956     $1,224   $1,500
  21-24          $316     $464     $794     $1,126   $1,450   $1,786
  25-28          $358     $538     $920     $1,298   $1,730   $2,148
  29-32          $398     $606     $1,040   $1,488   $1,946   $2,400
  33-36          $440     $674     $1,168   $1,678   $2,164   $2,648
  37-40          $476     $744     $1,292   $1,868   $2,380   $2,896
  41-44          $528     $810     $1,418   $2,058   $2,596   $3,144
  45-48          $564     $882     $1,544   $2,248   $2,814   $3,392
  Covers         $148     $166     $218     $274     $338     $396

Color Reprint Prices, Purchase Order — Domestic (USA only)

  # of Pages     50       100      200      300      400      500
  1-4            $260     $446     $628     $846     $984     $1,076
  5-8            $410     $538     $806     $1,180   $1,430   $1,674
  9-12           $454     $596     $932     $1,372   $1,804   $2,136
  13-16          $490     $644     $1,064   $1,560   $2,062   $2,556
  17-20          $538     $698     $1,202   $1,762   $2,318   $2,986
  21-24          $590     $750     $1,342   $1,960   $2,578   $3,428
  25-28          $770     $1,120   $1,900   $2,728   $3,496   $4,428
  29-32          $952     $1,482   $2,444   $3,496   $4,406   $5,444
  33-36          $1,136   $1,844   $2,994   $4,266   $5,316   $6,460
  37-40          $1,316   $2,210   $3,542   $5,036   $6,226   $7,566
  41-44          $1,498   $2,570   $4,092   $5,806   $7,138   $8,492
  45-48          $1,678   $2,938   $4,642   $6,576   $8,048   $9,506
  Covers         $148     $166     $218     $274     $338     $396

Black and White Reprint Prices, Purchase Order — International (includes Canada and Mexico)

  # of Pages     50       100      200      300      400      500
  1-4            $180     $218     $268     $384     $488     $602
  5-8            $250     $294     $484     $660     $846     $1,032
  9-12           $336     $404     $672     $960     $1,230   $1,522
  13-16          $402     $508     $870     $1,232   $1,598   $1,960
  17-20          $488     $618     $1,058   $1,528   $1,972   $2,424
  21-24          $560     $708     $1,254   $1,802   $2,344   $2,890
  25-28          $652     $822     $1,444   $2,074   $2,762   $3,428
  29-32          $720     $920     $1,644   $2,382   $3,122   $3,860
  33-36          $790     $1,018   $1,846   $2,684   $3,454   $4,292
  37-40          $860     $1,118   $2,022   $2,986   $3,812   $4,722
  41-44          $944     $1,218   $2,224   $3,288   $4,170   $5,154
  45-48          $1,014   $1,318   $2,426   $3,590   $5,990   $5,584
  Covers         $212     $226     $326     $448     $566     $682

Color Reprint Prices, Purchase Order — International (includes Canada and Mexico)

  # of Pages     50       100      200      300      400      500
  1-4            $320     $500     $712     $976     $1,150   $1,288
  5-8            $504     $622     $972     $1,418   $1,742   $2,062
  9-12           $588     $728     $1,170   $1,728   $2,260   $2,708
  13-16          $662     $812     $1,378   $2,018   $2,666   $3,304
  17-20          $754     $910     $1,592   $2,334   $3,066   $3,910
  21-24          $832     $994     $1,802   $2,636   $3,472   $4,532
  25-28          $1,062   $1,404   $2,422   $3,504   $4,526   $5,708
  29-32          $1,274   $1,796   $3,048   $4,392   $5,582   $6,904
  33-36          $1,486   $2,190   $3,872   $5,272   $6,606   $8,104
  37-40          $1,700   $2,584   $4,272   $6,154   $7,658   $9,302
  41-44          $1,914   $2,978   $4,898   $7,036   $8,710   $10,500
  45-48          $2,130   $3,374   $5,524   $7,916   $11,226  $11,698
  Covers         $212     $226     $326     $448     $566     $682

Additional Rates
  Set title page, each    $16.00
  Each extra mailing      $32.00
  Remake pages, each      $50.00
Subscriptions and Special Offers

In addition to purchasing reprints of their articles, authors may purchase an annual subscription, purchase an individual issue of the journal (at a reduced rate), or request an individual issue at no cost under special “hardship” circumstances. To place your order online, visit http://www.apa.org/pubs/journals/subscriptions.aspx; or you may fill out the order form below (including the mailing label) and send the completed form and your check or credit card information to the address listed on the order form. For information about becoming a member of the American Psychological Association, visit http://www.apa.org/membership/index.aspx; or call the Membership Office at 1-800-374-2721.

2013 APA Journal Subscription Rates

  Journal*                                                         Non-Agent Individual Rate   APA Member Rate
  American Psychologist                                            $343                        $112
  Behavioral Neuroscience                                          $355                        $167
  Developmental Psychology                                         $433                        $178
  Emotion                                                          $119                        $64
  Experimental & Clinical Psychopharmacology                       $185                        $59
  Health Psychology                                                $250                        $100
  Jrnl of Abnormal Psychology                                      $198                        $80
  Jrnl of Applied Psychology                                       $309                        $112
  Jrnl of Comparative Psychology                                   $119                        $59
  Jrnl of Consulting & Clinical Psychology                         $309                        $134
  Jrnl of Counseling Psychology                                    $161                        $59
  Jrnl of Educational Psychology                                   $198                        $86
  JEP: Animal Behavior Processes                                   $119                        $59
  JEP: Applied                                                     $119                        $59
  JEP: General                                                     $119                        $59
  JEP: Human Perception and Performance                            $433                        $178
  JEP: Learning, Memory, and Cognition                             $433                        $178
  Jrnl of Family Psychology                                        $185                        $64
  Jrnl of Personality & Social Psychology                          $605                        $250
  Neuropsychology                                                  $185                        $64
  Professional Psych: Research & Practice                          $161                        $59
  Psychological Assessment                                         $185                        $59
  Psychological Bulletin                                           $309                        $113
  Psychological Methods                                            $119                        $59
  Psychological Review                                             $198                        $84
  Psychology & Aging                                               $198                        $80
  Psychology of Addictive Behaviors                                $119                        $84
  Psychology, Public Policy, and Law                               $119                        $59
  Rehabilitation Psychology                                        $119                        $59
  Clinician’s Research Digest – Adult Populations                  $119                        $59
  Clinician’s Research Digest – Child and Adolescent Populations   $119                        $59

*For journal descriptions, see APA’s website: http://www.apa.org/pubs/journals
Instructions: Check the appropriate box, enter journal title and price information, and complete the mailing label below. Enclose a check made out to the American Psychological Association, and mail it with the form to the APA Order Department or complete the credit card information below. Orders can also be placed online at http://www.apa.org/pubs/journals/subscriptions.aspx.

Annual Subscription (available on January-December basis only). To subscribe, specify the calendar year of the subscription. Refer to the Subscription Rates shown above.
  Journal: _____________________________________________
  Calendar year of subscription: ___________   Price: ___________

Special Offers! If you are a journal article author, you may take advantage of two Special Offers.

Individual Copy. You may order individual copies of the entire issue in which your article appears. As an author, you receive a special reduced rate of $5 per copy for up to a maximum of 25 copies. No phone requests accepted.
  Journal: ____________________________________________
  Vol. no.: _______   Issue no.: _______   Issue month: _________
  ____ copies @ $5 a copy = $________ (order amount)
  + ________ (handling; see below)
  TOTAL enclosed: $________

Hardship Request. If you do not have a personal subscription to the journal and you do not have access to an institutional or departmental subscription copy, you may obtain a single copy of the issue in which your article appears at no cost by filling out the information below.
  Journal: ___________________________________________
  Vol. no.: _________________   Issue no.: _________________
  Issue month: ________________________________________

Shipping & Handling Fees

  Order amount     U.S. & Puerto Rico   Guaranteed Non-U.S.*   Economy Non-U.S.**
  Up to $14.99     $5.00                $50.00                 $15.00
  $15 to $59.99    $6.00                $75.00                 $16.00
  $60.00+          10% of order total   $125.00                $20.00

*International rates for guaranteed service are estimates.
**I agree that international economy service is non-guaranteed and does not provide tracking or date/time specific delivery. Delivery time for this level of service can take up to 8 weeks. If this level of service is selected, APA will not be held liable and will not honor any claims for undelivered, delayed, or lost shipments.

All orders must be prepaid. Allow 4-6 weeks after the journal is published for delivery of a single copy or the first copy of a subscription.

CREDIT CARD PAYMENT
  ___ VISA   ___ MASTERCARD   ___ AMERICAN EXPRESS
  CARD NUMBER __________________________________________
  Expire Date ___________   Signature __________________________

Send the completed form and your check, made out to the American Psychological Association, or your credit card information to:

  APA Order Department
  750 First Street, NE
  Washington, DC 20002-4242

PRINT CLEARLY – THIS IS YOUR MAILING LABEL
SHIP TO:
  Name _______________________________________
  Address ______________________________________
  _____________________________________________
  City ________________   State ________   Zip ________
  Phone No. ____________________
  Expedited Service (enter service required): ___________
Training and Education in Professional Psychology 2013, Vol. 8, No. 1, 000
© 2013 American Psychological Association 1931-3918/13/$12.00 DOI: 10.1037/tep0000031
A Systematic Review and Reformulation of Outcome Evaluation in Clinical Supervision: Applying the Fidelity Framework
Robert P. Reiser, Reiser Healthcare Consulting, San Anselmo, California
Derek L. Milne, Newcastle University
Although a strikingly diverse range of outcomes have been measured within clinical supervision research, a dominant perspective is that clinical outcomes remain the “acid test” of its effectiveness (Ellis & Ladany, 1997). We question the wisdom of this acid test logic in 2 ways. First, we summarize alternative conceptualizations of outcome from within the supervision field, highlighting several important reasons for considering clinical benefit as but one of several equally valid, stepwise outcomes. The fidelity framework (Borrelli et al., 2005) is drawn upon to show how these complementary outcomes may be logically and advantageously combined. This framework’s dimensions are the design, training, delivery, receipt, and enactment of an intervention. Second, a sample of 12 interpretable studies of the clinical outcomes of supervision is evaluated in terms of the studies’ attention to these 5 dimensions. From this conceptual and empirical review, it is concluded that an overemphasis on clinical outcomes carries unnecessary risks (e.g., weak causal reasoning and a failure to identify mechanisms of change), while underemphasizing the several benefits of a more inclusive approach (e.g., increasing outcome research and improving supervision). Keywords: clinical supervision, outcome, fidelity
Supervision has become an internationally accepted element within modern mental health services (Watkins & Milne, ●●●), a critical component in the training and regulation of therapists (Holloway & Neufeldt, 1995), and a major device for enhancing the quality of care (Falender & Shafranske, 2008). But this endorsement of supervision has taken place despite a weak evidence base (Ellis & Ladany, 1997). This mismatch between practice and research is more unsatisfactory, given the growing emphasis on competency-based practice (Falender et al., 2004; Rodolfa et al., 2013), accountability, and evidence-based practice (American Psychological Association, 2006). In response to this unsatisfactory gulf between research and practice, Watkins (2011) argued that it was imperative to demonstrate that supervision contributed to patient outcome. This has been termed the “acid test” of supervision: improving patient outcomes, usually measured in terms of symptomatic relief (Ellis & Ladany, 1997). In the past decade, there have been at least eight reviews dealing more or less directly with the effect of supervision on patient outcomes, including Watkins (2011), Ellis, Ladany, Krengel, and Schult (1996), Freitas (2002), Holloway and Neufeldt (1995), Milne, Aylott, Fitzpatrick, and Ellis (2008), Roth, Pilling, and Turner (2010), Tsui (1997), and Wheeler and Richards (2007). These reviews all critiqued a carefully selected sample of empirical studies, concluding that there were myriad methodological weaknesses within this research. For example, Ellis et al. (1996) highlighted ambiguous hypotheses, the use of psychometrically weak instruments, and small sample sizes. They also noted a shift to pragmatic field studies that may have contributed to these weaknesses. For such methodological reasons, reviewers have concluded that there was insufficient evidence to infer a clear effect of supervision on patient outcomes, bearing out the interpretation offered by Watkins (2011).
Robert P. Reiser, ●●●; Derek L. Milne, ●●●. Correspondence concerning this article should be addressed to Robert P. Reiser, Reiser Healthcare Consulting, 945 Butterfield Road, San Anselmo, CA 94960. E-mail: [email protected]

Beyond the Acid Test

One response to this lack of proof of supervision’s effectiveness, as well as to the lack of an established methodology for obtaining such proof, is to focus on more proximal and research-amenable outcomes, as argued by others (e.g., Holloway & Neufeldt, 1995). Although we do not question that clinical outcomes are a necessary element within a comprehensive outcome evaluation, there are several reasons for questioning that they represent a definitive demonstration of the effectiveness of supervision. One reason is that the highest duty of supervision is surely to ensure safe practice (client protection), to oversee the therapy that is conducted by supervisees to try and ensure that it causes no harm (the Hippocratic oath). From his review, Milne (2009) concluded that the purpose of supervision was to ensure safe and effective practice. For therapy to be safe and effective, supervision needs to prioritize changes in the supervisee (e.g., in terms of attitudes or competencies). To illustrate, Freitas (2002, p. 363) considered it “axiomatic that clinical supervision is conducted for the benefit of . . . trainees” (i.e., supervisees). Then there is a methodological reason to go beyond the acid test. Knowledge of outcomes in the absence of knowledge of the explanatory processes does little to advance interventions like supervision (Donabedian, 1988; Rossi, Freeman, & Lipsey, 2003).
Before Methodology
In thinking about outcomes that may complement the acid test and that contribute to the main purpose of supervision (ensuring safe and effective practice), it may be advantageous to give greater attention to how we conceptualize supervision’s effectiveness. Specifically, how might we best construe the relationship between patient benefit and its causal factors, such as the ways that patients and supervisees interact (Holloway, 1984)? We wish to argue that the conceptualization of supervision outcomes is as deficient, but as necessary, for the proof of supervision’s effectiveness as are the more commonly critiqued methodological considerations.

Taken together, the reviews summarized above have tended to adopt a methodological review approach, using either established research evaluation criteria in a systematic review approach (e.g., 37 validity threats were applied to the sampled studies within Ellis & Ladany, 1997) or general scientific reasoning within a less structured, narrative review approach (e.g., Freitas, 2002). This emphasis on methodology contrasts with a very limited emphasis on conceptualization. In addition to definitional imprecision and the lack of consensus over what is deemed an outcome, the above reviews tended to lack a conceptual framework, adopting a more focused, ad hoc, and atheoretical approach to their analyses, corresponding to the reviewed studies (Tsui, 1997). The same criticism has been made of methodological reviews, that is, that they have tended to evaluate the sampled studies in an unsystematic way (Ellis & Ladany, 1997). We found few examples in the above sample in which reviewers systematically utilized a conceptual framework to analyze their study samples. Exceptions included Holloway (1984) and Milne et al. (2008). Such explicit and coherent conceptual approaches helped to clarify some dominant themes that might otherwise have been missed, due to this diverse approach to supervision outcomes. As a result, a more precise and testable supervision model was outlined.
Rationale for the Fidelity Framework

A conceptual approach to literature review offers broadly the same advantages as a theoretical model. In particular, such a model helps to highlight the strengths and weaknesses of a literature within a logically coherent analysis, better clarifying what we know (and do not know), thus suggesting practice developments and research implications. An example is the fidelity framework, which refers to “methodological strategies used to monitor and enhance the reliability and validity of behavioral interventions” (Borrelli et al., 2005, p. 852). The value of high levels of treatment fidelity in the implementation of clinical trials has been summarized by Bellg et al. (2004) and others (e.g., Schoenwald et al., 2011). We selected this framework for the present review because it best captured the issue of patient outcomes related to the prior steps within supervision: It embodies a taxonomy, teasing apart the successive steps in getting from supervision to patient outcome. Such a stepwise approach seems especially valuable, given the clear distinctions that are made by researchers between steps like
supervisee training, adherence, and transfer (Milne, 2009). In addition, the fidelity framework affords an expanded view of supervision, yet reflects current models of supervision—models that recognize complex processes of reciprocal influence (Holloway, 1984). Third, the fidelity framework was chosen because it uses dimensions and concepts that are implicitly accepted and applied by supervision researchers. An additional reason for adopting the fidelity framework is that it helps to specify the intervention and thus reduce unintended variability. It also increases the statistical power and improves the generalizability of study findings, by offering more readily replicated procedures for disseminating interventions like supervision. By contrast, low fidelity has a significant cost in terms of reducing the power of studies to detect significant effects, and in requiring larger and more expensive sample sizes (Borrelli et al., 2005).

A five-part framework for addressing fidelity in clinical trials—including study design, training, delivery, receipt, and enactment—has been developed by Bellg et al. (2004) and Borrelli et al. (2005). Fidelity in terms of treatment design involves detailing information about the dose of the experimental and comparison conditions (number of contacts, length of session, duration of contact over time), the training and credentials of the providers delivering the treatment, and specifying the underlying theoretical model or clinical guidelines utilized.

Fidelity in terms of training providers requires that the training of providers (for purposes of our review, the supervisors) is appropriately specified and standardized, and that there are measures of skill acquisition to determine how provider skills are improved and maintained over time. We have supplemented these criteria with those from Roth et al. (2010): that there is a published description of the training procedures (e.g., a manual; see Table 1). This was our only addition to the fidelity framework, included to better specify the original meaning of training in Bellg et al. (2004) and Borrelli et al. (2005).

Fidelity in terms of the delivery of treatment involves a method to document and specify what has been delivered, including the content, dose, and a mechanism to ensure the supervisor actually adhered to the intervention plan; the assessment of nonspecific effects; and the use of an intervention (supervision) manual.

Fidelity in terms of the receipt of treatment involves an assessment of the supervisees’ comprehension of key elements of the intervention (supervision training) and a strategy to improve supervisee performance of the skills addressed in the intervention.

Finally, enactment of treatment skills refers to assessment of the clinical outcomes of the supervisee’s actual performance of such skills (in our case, objective assessments of the client symptoms, problems, functioning, or quality-of-life changes). It is recognized that the term “enactment” has additional meanings within the fidelity framework (e.g., generalization across settings) as well as across different therapies. By adopting treatment fidelity recommendations (Bellg et al., 2004; Borrelli et al., 2005; Moncher & Prinz, 1991) and applying them to the design and implementation of supervision studies, we selected a well-recognized approach to improving implementation and design in clinical trials.
The fidelity framework is entirely consistent with, and supports, the purpose of supervision, that is, to ensure safe and effective practice.
Table 1
Fidelity Review Checklist Table (Adapted From Borrelli et al., 2005)

Design of supervision — Dimension I. Criteria/examples:
  1. Provided information about supervision dose in the intervention condition: length of contact session(s); number of contacts; duration of contact over time
  2. Provided information about supervision dose in the comparison condition: length of contact session(s); number of contacts; duration of contact over time
  3. Mention of provider credentials
  4. Mention of a theoretical model or clinical guidelines on which the supervision/intervention is based (b)
  5. Was the supervision content defined (that is, a manual or treatment descriptions in books or papers)? (a, b) If a manual/supervision description was used, was there either: if the manual is in the public domain, the principal citation/source (a, b); OR, if not in the public domain, a brief description of manual content (for example, whether the manual gives a comprehensive account of the complete intervention) (a, b) OR is an outline guide? (a, b)

Training of supervisors — Dimension II. Criteria/examples:
  1. Description of how supervisors were trained
  2. Standardized supervisor training
  3. A description of training offered to supervisors (for example, outline content, number of sessions, individual or group based, length of training) (b)
  4. Measured supervisor skill acquisition posttraining
  5. Described how supervisor skills were maintained over time
  6. Information about the supervisor(s) (for example, qualifications and experience) (a)
  7. A description of supervision training arrangements (for example, number of sessions, frequency, and duration) (a)
  8. Whether the results of integrity checks (adherence/competence/treatment fidelity measures) were used to signal the need for additional supervision/training if therapists performed below criterion levels (a)

Delivery of supervision — Dimension III. Criteria/examples:
  1. Included method to ensure that the content of the supervision/intervention was being delivered as specified (e.g., supervision manual, checklist, computer program)
  2. Included mechanism to assess if the supervisor actually adhered to the intervention (e.g., audiotape, observation, self-report of provider, exit interview with participant)
  3. Assessed nonspecific supervision effects
  4. Used supervision manual (see above: public domain, comprehensive account, or outline only?) (a)
  5. A description of supervision arrangements (for example, number of sessions, frequency, and duration) (a)
  6. Whether the results of integrity checks (adherence/competence/treatment fidelity measures) were used to signal the need for additional supervision/training if therapists performed below criterion levels (a)

Receipt of supervision — Dimension IV. Criteria/examples:
  1. Assessed trainee comprehension of supervision during the intervention period
  2. Included a strategy to improve trainee comprehension of supervision above and beyond what is included in the intervention
  3. Assessed trainee’s ability to perform the skills acquired during the intervention period
  4. Included a strategy to improve trainee performance of intervention skills during the intervention period

Enactment of supervision — Dimension V. Criteria/examples:
  1. Assessed client outcome in terms of changes in problems, symptoms, quality of life (satisfaction and working alliance not considered full clinical outcome)
  2. Assessed strategy to improve client performance of the acquired clinical skills in real-world settings

Note. From “A New Tool to Assess Treatment Fidelity and Evaluation of Treatment Fidelity Across 10 Years of Health Behavior Research,” by B. Borrelli et al., 2005, Journal of Consulting and Clinical Psychology, 73, p. 855. Copyright 2005 APA. Adapted with permission. (a) Adapted from Roth, Pilling, and Turner (2010). (b) Required field for meeting minimum dimension criteria.
Review Objectives
1. Profile the extent to which the sampled studies address the fidelity framework dimensions, summarizing strengths and weaknesses;
2. Draw out the theoretical, methodological, and practice implications of our findings; and
3. Recommend future research directions and priorities.
Method

Inclusion Criteria

Past reviews have illustrated the dangers of the overinclusion of studies in reviews of supervision outcome, resulting in many studies in which client outcome was not directly addressed as a dependent variable (Watkins, 2011). Hence, the following specific criteria determined study inclusion. Articles were only included if
they involved clinical supervision that was consistent with Milne’s (2007) elaboration of the Bernard and Goodyear (2004) definition:

The formal provision (i.e., sanctioned by relevant organization/s); by senior/qualified health practitioners (or similarly experienced staff) of an intensive education (general problem-solving capacity; developing capability) and/or training (competence enhancement) that is case-focused and which supports, directs and guides (including also “restorative” and/or “normative” topics, addressed by means of professional methods, including objective monitoring, feedback and evaluation) the work of junior colleagues (supervisees). (Milne, 2007, p. 3)
For the purposes of this review, we included supervision in any format (e.g., both individual and group supervision), with “senior/ qualified health practitioners” broadly defined as professionals including psychiatrists, psychologists, social workers, nurses with mental health specialization, other licensed or master’s-prepared mental health counselors, mental health trainees (e.g., psychology practicum students, psychiatric residents, or other mental health professional trainees), mental health workers, or case managers. Additionally, studies from the clinical supervision literature were included if they satisfied the following criteria: (a) focused on case-based clinical supervision (studies involving primarily one-time training workshops or staff development activities were excluded, as they are typically short-term and not case-focused); (b) were published in a peer-reviewed scientific journal in English over the past 30 years; (c) involved a direct mental health measure of client outcome, including an assessment of symptoms, clinical problems, psychosocial functioning, or quality of life, as determined by self-report or objective assessments (studies that solely examined client satisfaction or the working alliance were excluded); (d) involved a mental health service (we therefore excluded a number of studies in the area of developmental and learning disabilities from past reviews; Milne & James, 2000); (e) included only studies in which inference was possible as to the positive effect of clinical supervision on client outcomes (i.e., studies reporting primarily quantitative data and including experimental designs, although we allowed for certain other designs, including mixed-effect regression models); and (f) were directly relevant to real-world clinical practice (thus, we excluded volunteers, pseudoclients, simulated role-plays, and so forth).
Sample of Studies and Search Criteria

Our search strategy involved sifting through previous major reviews of clinical supervision in handbooks or journal articles up through the Watkins (2011) review. We then conducted our own literature search using the following databases: PsycINFO, PsycARTICLES, PsycBOOKS, Psychology and Behavioral Sciences Collection, Academic Search Premier, and MEDLINE. We used the keywords “psychotherapy supervision outcomes,” “clinical supervision outcomes,” and “supervision outcomes” to look for any additional references to studies. We then contacted each of the lead researchers, asking them to provide any additional references. The final selection of studies was determined by balancing the need for a sufficiently broad review sample in order to maintain generalizability against the danger of overinclusion of studies that might conflate very divergent populations and methods of supervision or its evaluation. By applying these exclusion criteria to the initial search
sample of 48 studies, and taking account of duplicate studies in the sample, our final sample consisted of 12 studies. This narrowing of selection is consistent with Watkins (2011), who noted in a similar review that 30% of studies selected for inclusion in previous reviews of supervision and client outcomes did not actually directly assess client outcomes as a dependent variable. Our search terms were applied to psychotherapy supervision in general and did not restrict the type of study by treatment type or orientation. However, the final sample included a disproportionate number of studies involving supervision of cognitive–behavioral therapy (CBT). It is likely that our inclusion criteria—and especially our requirement that studies include an outcome measure directly related to symptoms, functioning, and quality of life—may have biased the sample toward studies with a CBT focus. In addition, CBT supervision lends itself to a manualized approach and evidence-based practice.
Review and Coding Procedures Based on the Fidelity Framework

In the next step, we reviewed our final sample of 12 studies using the fidelity framework criteria drawn directly from Borrelli et al. (2005) and supplemented by selected criteria from Roth et al. (2010), as noted above. Our plan was to review the 12 studies against a well-defined and conceptually coherent set of minimum standards, comprising the five elements of the fidelity framework, in a reliable and replicable manner, as per a checklist or audit approach.
Development of Criteria for Fidelity Framework Checklist

The first author initially reviewed the 12 studies against the checklist standards, recording compliance with the standards set out in Table 1. The authors then independently coded four of the 12 studies (33%), well above the recommended proportion of 10% (Borrelli et al., 2005). Two of the randomly selected studies were the first ones coded by the first author, and the other two were the ones last coded by him. This was to ensure reliable coding and to address potential reviewer drift, respectively. We obtained exact percent agreement between the two coders of 90% and 80% for these two assessment stages, respectively. Disagreements were reviewed for the purpose of improving interrater reliability and identifying any areas in the criteria table that needed to be more clearly defined. These assessments indicated that the fidelity checklist (see Table 1) could be applied reliably over the review period.
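For readers unfamiliar with the statistic, exact percent agreement is simply the proportion of checklist items on which the two coders assigned the same rating. The brief sketch below is ours (it is not the authors' code), and the item ratings shown are hypothetical, chosen only to illustrate the arithmetic.

    # Hypothetical 0/1 ratings by two coders across the same checklist items
    # for one study; these are not the actual codes from this review.
    coder_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
    coder_b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

    def exact_percent_agreement(a, b):
        """Share of items rated identically by both coders."""
        if len(a) != len(b):
            raise ValueError("both coders must rate the same items")
        return sum(x == y for x, y in zip(a, b)) / len(a)

    print(f"{exact_percent_agreement(coder_a, coder_b):.0%}")  # -> 80%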
Results

Reporting of Treatment Fidelity Strategies

The main results of our review are presented in Table 2, which summarizes findings in terms of study fidelity for each of the five dimensions of the fidelity framework. Because we defined enactment as including a direct assessment of client outcomes, a key study inclusion criterion, all studies by definition met this requirement.
Table 2
Studies of Clinical Supervision and Client Outcome, Reviewed Using the Fidelity Framework (Borrelli et al., 2005)

  Study                                            Date   Design   Training   Delivery   Receipt   Enactment   Percent
  Dodenhoff                                        1981   1        0          1          0         1           60%
  Couchon & Bernard                                1984   0        0          1          0         1           40%
  Steinhelber, Patterson, Cliffe, & LeGoullon      1984   1        0          0          0         1           40%
  Harkness & Hensley                               1991   0        1          1          0         1           60%
  Triantafillou                                    1997   1        1          1          1         1           100%
  Bambling, King, Raue, Schweitzer, & Lambert      2006   1        1          1          1         1           100%
  Bradshaw, Butterworth, & Mairs                   2007   1        1          1          0         1           80%
  Grey, Salkovskis, Quigley, Clark, & Ehlers       2008   1        0          0          1         1           60%
  Callahan, Almstrom, Swift, Borja, & Heath        2009   1        0          0          0         1           40%
  Reese et al.                                     2009   1        0          0          0         1           40%
  Schoenwald, Sheidow, & Chapman                   2009   1        1          1          1         1           100%
  White & Winstanley                               2010   1        1          1          0         1           80%
  Percent of total studies meeting criteria               83%      50%        67%        33%       100%        67%

Note. 1 = study met fidelity checklist criteria; 0 = study did not meet fidelity checklist criteria.
Design. Apart from enactment, the highest level of adherence to the fidelity framework was in the design category, in which 83% of the studies included in our sample met criteria for adherence. For the purposes of our review, a key criterion for design fidelity was reference to a theoretical model or clinical guidelines (Borrelli et al., 2005, p. 858, Table 1). Borrowing from the Roth et al. (2010, p. 297) recommendations, we required that a supervision manual or guideline be either in the public domain or outlined within the text. We did not consider specification of contact time, length, and duration of supervision alone to be sufficient to operationalize the dependent variable in these studies. This level of adherence to the design aspect of the fidelity framework was roughly comparable with the mean of .80 reported by Borrelli et al. (2005) in their review of 342 articles. In the area of adherence to design, there appeared to be a positive trend over time, with more articles in the past 10 years demonstrating good adherence to design.

Training. Lower levels of adherence were identified in relation to the training of supervisors, with only 50% of studies adequately specifying elements of training. This level of adherence compared favorably with the findings in the Borrelli et al. (2005) study that reported a mean adherence level of .22 in the category of training.

Delivery. In terms of the delivery of supervision, 67% of studies met minimum standards of adherence. This compares quite favorably with the findings in Borrelli et al. (2005), which indicated a mean adherence level of .35 in this category. There was a slight trend toward improvement over time, but it should be noted that three out of seven studies in the past 10 years (43%) failed to meet minimum criteria in our fidelity checklist review approach.

Receipt. The lowest levels of adherence were in the receipt of supervision, with only 33% of studies meeting the criteria of adequate reporting of any assessment of the trainees’ acquisition of skills or improved comprehension based on supervision. In the design element of receipt, our bar was set relatively low and we included any direct assessment of the trainee’s knowledge, skills, or attitudes related to supervision training, only excluding assessments of trainee satisfaction or the supervisory alliance. This compares unfavorably with the findings in Borrelli et al. (2005), who reported a mean adherence level of .49 in the category of
receipt. On this dimension, there was no clear trend toward improvement over time, and four out of seven studies over the past 10 years (57%) failed to meet the standard specified in our fidelity checklist approach.
Summary of Results

In summary, across all 12 studies of clinical supervision and client outcome reviewed, there was a mean overall adherence level of .67 to the treatment fidelity framework. The weakest areas of adherence were in the elements of receipt (.33) and training (.50), followed by delivery (.67) of supervision. This is consistent with findings in the Roth et al. (2010) review of clinical trials, in which they had difficulty identifying methods of training and supervision in the clinical trials literature. It is also broadly consistent with Borrelli et al. (2005), who reported low levels of adherence in training (.22), delivery (.35), and receipt (.49) (Borrelli et al., 2005, p. 857).
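These proportions follow directly from the 0/1 ratings in Table 2. The sketch below is ours (it is not part of the original review); it simply re-derives the per-dimension and overall adherence levels from the ratings transcribed from that table.

    # Binary ratings transcribed from Table 2 (1 = met, 0 = did not meet the
    # dimension's checklist criteria), listed in the table's study order.
    ratings = {
        "design":    [1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1],
        "training":  [0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1],
        "delivery":  [1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1],
        "receipt":   [0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0],
        "enactment": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    }

    # Per-dimension adherence: proportion of the 12 studies meeting each dimension.
    for dimension, scores in ratings.items():
        print(f"{dimension}: {sum(scores) / len(scores):.2f}")
    # -> design 0.83, training 0.50, delivery 0.67, receipt 0.33, enactment 1.00

    # Overall adherence: mean across all 60 study-by-dimension ratings.
    flat = [score for scores in ratings.values() for score in scores]
    print(f"overall: {sum(flat) / len(flat):.2f}")  # -> overall 0.67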
Discussion

We devised a fidelity assessment tool, based on Borrelli et al. (2005), applying it reliably to 12 studies relating clinical supervision to client outcomes, in order to assess treatment fidelity practices in the supervision literature. To our knowledge, this is the first review that applies this treatment fidelity approach specifically to studies of clinical supervision. Our analysis indicated moderately good adherence overall to the treatment fidelity framework (.67). The definition of high treatment fidelity used by Borrelli et al. (2005) was “studies that had a .80 or greater proportion adherence to our checklist across all strategies” (Borrelli et al., 2005, p. 857). The weakest areas of adherence were in the elements of receipt (.33) and training (.50), followed by delivery (.67) of supervision. There was a general improvement over time, but only three out of 12 studies (25%) met 100% of the fidelity framework adherence benchmarks: Triantafillou (1997), Bambling, King, Raue, Schweitzer, and Lambert (2006), and Schoenwald, Sheidow, and Chapman (2009).

Are these findings surprising? In some ways, these results are quite consistent with prior reviews of workshop-based training
(Culloty, Milne, & Sheikh, 2010) and supervision in clinical trials (Roth et al., 2010). Culloty et al. (2010) concluded their review of the CBT training workshop literature by stating that “fidelity is rarely treated comprehensively” (Culloty et al., 2010, p. 135). Similarly, Roth et al. (2010) concluded that “information about training and supervision is presented inconsistently from paper to paper, and sometimes only in outline form” (p. 296).

It is disappointing that there appears to be a continuing lack of adequate specification both in defining the elements of training and in including direct assessments of the changes in the trainees’ knowledge, skills, and attitudes. It is our view that this gap reflects the fact that we are still in the early stages of disseminating and operationalizing the competency-based model in supervision (Falender & Shafranske, 2010), and it may also reflect the emphasis on the acid test of clinical outcomes. In short, practice is still considering new conceptualizations of outcome and catching up with the higher levels of specification (e.g., Roth & Pilling, 2009). The lower levels of fidelity in training and receipt we identified reflect a disturbing deficiency in specifying, implementing, and documenting training practices. In our view, this, in turn, reflects the lack of progress in manualizing and standardizing supervisor training. The fact that only 25% of studies met 100% of the fidelity framework criteria suggests a continuing and significant gap in the design of supervision studies. This raises the question as to whether we have set the bar too high by specifying that high-quality studies meet four of five areas of the fidelity checklist. By insisting that studies pay attention to all five criteria in the fidelity model, we are providing the best platform for future research. As noted in Borrelli et al. (2005), there are significant negative consequences associated with low fidelity studies: “Treatment fidelity prevents the premature rejection of treatments that could be effective as well as the acceptance of treatments that are nonreproducible because of low internal validity” (Borrelli et al., 2005, p. 858).

Although our review has focused on the methodological soundness of 12 studies of client outcomes of supervision by reference to the fidelity framework, we would like to briefly consider what these studies tell us about client outcomes. Despite our attempt to include only the most relevant studies, one subgroup of these studies actually offered no direct information on supervision’s clinical outcomes as we defined them. For example, Couchon and Bernard (1984) concluded that the timing of supervision sessions was not significantly correlated with improved client outcomes, while White and Winstanley (2010) concluded that supervision did not contribute to the quality of clinical care or client satisfaction. A second subgroup of studies did report significant clinical outcomes, including Dodenhoff (1981), Steinhelber, Patterson, Cliffe, and LeGoullon (1984), Harkness and Hensley (1991), and Triantafillou (1997). However, they were methodologically compromised and were thus hard to interpret (e.g., use of nonstandard measures; confounding of the experimental and control conditions). This left us with a third subgroup for which relevant clinical outcomes were measured and that were interpretable studies.
For example, Grey, Salkovskis, Quigley, Clark, and Ehlers (2008) determined that therapists who received ongoing clinical supervision (compared with treatment as usual) achieved significantly better outcomes for panic disorder in terms of panic attacks and improvements in self-rated anxiety and avoidance. Similarly clear,
interpretable outcomes were reported by Bambling et al. (2006), Bradshaw, Butterworth, and Mairs (2007), Callahan, Almstrom, Swift, Borja, and Heath (2009), and Schoenwald et al. (2009). Although these studies do not clarify precise causal mechanisms (i.e., which aspects of supervision actually contributed to client outcomes), supervision that improves adherence to an empirically supported protocol appears likely to improve client outcomes. We can also conclude that supervision is likely to outperform no supervision in terms of client outcomes. Future research will hopefully illuminate which specific aspects of supervision are associated with the most significant client improvements.
Recommendations We next draw on helpful examples from studies that were identified in our search but that were excluded from our final sample. They nonetheless afford outstanding illustrations of how one might address fidelity framework criteria. Following the approach used within Roth et al. (2010) and Ellis and Ladany (1997), we seek to highlight good practice and to suggest options for advancing research. Table 3 again summarizes the fidelity framework, this time set alongside a selection of studies that illustrate both the wide range of outcomes that have been measured and some innovative or best-practice methods for achieving those outcomes. The recommendations that follow from Table 3 include explicit conceptualization, providing a falsifiable model of the predicted relationship between supervision and its outcomes. This brings significant benefits, such as detailed guidance on the precise nature of the intervention, together with some prioritization of the expected effects (Chen, 1990). In the illustrative study within Table 3, Weingardt, Cucciare, Bellotti, and Lai (2009) specified a “blended learning” system, including online group supervision sessions. Their precision and use of the latest available technology are features that will help to increase the fidelity with which supervision is implemented. On the subject of training, the explicit application of the fidelity framework by Culloty et al. (2010) included attention to the “mini-impacts” on the supervisees’ experiential learning, a feature that draws attention to the logical possibility of introducing subcriteria within the framework. For example, if the model guiding Weingardt et al.’s (2009) blended learning system includes intermediate steps, these could advantageously be assessed to better pinpoint the strengths and weaknesses of such methods. This is conveniently illustrated in the study in Table 3 by Karlin et al. (2012), in their consideration of how their package had impacted supervisees’ self-efficacy and attitudes. These assessments were completed at several intervals: pre- and postworkshop training, after 6 months of consultation, and in a follow-up after the end of the training program, providing detailed stepwise knowledge of delivery and receipt of training at key stages of the program. The illustration from Westbrook, Sedgewick-Taylor, Bennett-Levy, Butler, and McManus (2008) goes into exceptional detail in linking the elements of training to a suitably diverse range of outcome measures. This brings into focus how longitudinal measurement of the key learning opportunities within supervision might enhance outcomes. In this sense, the current attention to outcome monitoring could usefully be extended to any of the other intermediate
T3, AQ:19
AQ: 20
APA NLM tapraid5/trn-tep/trn-tep/trn00114/trn0274d14z xppws S⫽1 11/14/13 11:25 Art: 2013-1076
CLINICAL SUPERVISION AND CLIENT OUTCOMES
7
Table 3
Applying the Fidelity Framework to the Evaluation of Outcomes From Clinical Supervision, With Illustrative Methods and Studies

1. Design (addresses the question “What is the right thing to do?”)
Outcomes measured: Knowledge gain; self-efficacy; burnout.
Illustrative supervision study: Weingardt, Cucciare, Bellotti, & Lai (2009): Designed a “blended learning” system, featuring e-learning with a self-paced, online course in CBT plus a series of live, online virtual group supervision sessions, intended to promote adherence.

2. Training (addresses the question “Has the right thing been done?”)
Outcomes measured: Competence of the trainer; satisfaction with training; transfer of training to supervision.
Illustrative supervision study: Culloty, Milne, & Sheikh (2010): Adherence to the supervisor training model was assessed by direct observation, questionnaire, and group interview, capturing the supervisees’ perceptions of the training and its transfer. This measured training adherence, the associated mini-impacts on the supervisees’ experiential learning, and generalization.

3. Delivery (addresses the question “Has it been done right?”)
Outcomes measured: Satisfaction with training; supervisees’ self-efficacy, attitudes, behavioral intentions, and competence; therapeutic alliance; quality of life and patient outcomes; maintenance of training/supervision effects; dropout from training.
Illustrative supervision study: Karlin et al. (2012): In addition to evaluating changes in supervisees’ competence and patient outcomes at several intervals within a training-plus-consultation package, this study considered how the package had affected supervisees’ self-efficacy and attitudes to the therapy (CBT; i.e., “nonspecific supervision effects”).

4. Receipt (addresses the question “Did it result in the right outcome?”)
Outcomes measured: Satisfaction with training; satisfaction with supervision; supervisees’ competence; patient outcomes.
Illustrative supervision study: Westbrook, Sedgwick-Taylor, Bennett-Levy, Butler, & McManus (2008): Supervisees discussed tapes of selected patients’ sessions in supervision meetings. This was linked to training workshops focused on helping trainees acquire clinical skills, as well as an understanding of theory, by practicing skills through role-plays and other experiential exercises.

5. Enactment (addresses the question “Did it result in the right impact?”)
Outcomes measured: Supervisees’ self-rated learning; patient dropouts; clinical outcomes (panic severity ratings; avoidance; agoraphobic cognitions; ratings of general anxiety).
Illustrative supervision study: Grey, Salkovskis, Quigley, Clark, & Ehlers (2008): Supervisees’ clinical outcomes in treating panic disorder were compared before and after training, including ongoing supervision. Significantly better outcomes were achieved after training plus supervision, indicating successful dissemination of an approach used within clinical trials to primary care.

Note. Adapted from Borrelli et al. (2005). CBT = cognitive–behavioral therapy.
In this sense, the current attention to outcome monitoring could usefully be extended to any of the other intermediate outcomes, as defined within the study’s model (e.g., in Westbrook et al., 2008, the effectiveness of experiential exercises in enhancing supervisees’ competence). This kind of nested approach to monitoring and feedback is illustrated in the study by Grey et al. (2008): in monitoring agoraphobic cognitions, they measured at least one of their hypothesized mechanisms of change. This degree of precise conceptualization and monitoring helps to improve outcomes or to indicate that the assumed links are invalid and require revision (Chen, 1990).
Limitations of the Present Review

Although we made every effort to select a representative sample of studies that met our criteria, by drawing on 48 articles cited in several major past reviews, our exclusion criteria may have been overly restrictive and may thus limit the generalizability of our findings. It is possible that treatment fidelity is misrepresented in our study sample, as we chose studies in a relatively exclusionary fashion, eliminating studies that did not focus on mental health services and studies that did not assess client outcomes in terms of symptoms, problems, functioning, and quality of life as the dependent variable. Similarly, we fully recognize that client outcomes are not the only relevant criterion for determining the sample of studies for review. In theory, one might, for example, start with any of the other outcomes listed within Table 3, which might well produce results more consistent with the fidelity framework. Our inclusion criteria also meant that studies that addressed fidelity (or related issues) using qualitative methods were excluded. We are aware that some highly relevant qualitative studies have been published, and these also merit consideration in taking stock of what we know about supervision’s outcomes. For instance, Johnston and Milne (2012) explored the “receipt” dimension of fidelity with grounded theory, highlighting how learning outcomes were enhanced when supervisors encouraged valued processes, such as Socratic information exchange.

In developing our fidelity checklist review system, we operationalized a well-established framework with only minor changes (e.g., supervisor substituted for therapist; trainee substituted for client). Future reviews might usefully extend our present analysis to include other criteria. For example, evaluation of the training and education of mental health professionals has often drawn on Kirkpatrick’s (1967) four consecutive levels of effectiveness: reactions, learning, transfer, and impact. To these might be added the prior need for participation (Belfield, Thomas, Bullock, Eynon, & Wall, 2001). Whatever criteria are used, it is important to check that their application is reliable; we therefore independently coded over 30% of our final study sample. However, only the two authors acted as raters, and without an independent rater who was blind to the study hypotheses, it is possible that our ratings were subject to bias.
Conclusions

We have questioned the wisdom of treating client outcomes as the acid test of clinical supervision, arguing that there are good grounds for giving equal weight to a number of complementary outcome criteria. These are conveniently and systematically captured within the fidelity framework.
Application of this framework to 12 carefully selected studies of clinical supervision highlighted significant infidelity, which hampers progress. To assist future research, we have developed a way of assessing adherence to this framework, and we have recommended ways to enhance supervision research, consistent with the fidelity framework. This framework appears to offer a useful way to address the long-overdue need for proof of supervision’s effectiveness.
References
American Psychological Association. (2006). Evidence-based practice in psychology: APA presidential task force on evidence-based practice. American Psychologist, 61, 475–476.
Bambling, M., King, R., Raue, P., Schweitzer, R., & Lambert, W. (2006). Clinical supervision: Its influence on client-rated working alliance and client symptom reduction in the brief treatment of depression. Psychotherapy Research, 16, 317–331. doi:10.1080/10503300500268524
Belfield, C., Thomas, H., Bullock, A., Eynon, R., & Wall, D. (2001). Measuring effectiveness for best evidence medical education: A discussion. Medical Teacher, 23, 164–170. doi:10.1080/0142150020031084
Bellg, A. J., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D. S., Ory, M., . . . Czajkowski, S. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH behavior change consortium. Health Psychology, 23, 443–451. doi:10.1037/0278-6133.23.5.443
Borrelli, B., Sepinwall, D., Ernst, D., Bellg, A. J., Czajkowski, S., Greger, R., DeFrancesco, C., . . . Orwig, D. (2005). A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. Journal of Consulting and Clinical Psychology, 73, 852–860. doi:10.1037/0022-006X.73.5.852
Bradshaw, T., Butterworth, A., & Mairs, H. (2007). Does structured clinical supervision during psychosocial intervention education enhance outcome for mental health nurses and the service users they work with? Journal of Psychiatric and Mental Health Nursing, 14, 4–12. doi:10.1111/j.1365-2850.2007.01021.x
Callahan, J. L., Almstrom, C. M., Swift, J. K., Borja, S. E., & Heath, C. J. (2009). Exploring the contribution of supervisors to intervention outcomes. Training and Education in Professional Psychology, 3, 72–77. doi:10.1037/a0014294
Chen, H. (1990). Theory-driven evaluation. Newbury Park, CA: Sage.
Couchon, W. D., & Bernard, J. M. (1984). Effects of timing of supervision on supervisor and counselor performance. The Clinical Supervisor, 2, 3–20. doi:10.1300/J001v02n03_02
Culloty, T., Milne, D. L., & Sheikh, A. I. (2010). Evaluating the training of clinical supervisors: A pilot study using the fidelity framework. The Cognitive Behaviour Therapist, 3, 132–144. doi:10.1017/S1754470X10000139
Dodenhoff, J. T. (1981). Interpersonal attraction and direct–indirect supervisor influence as predictors of counselor trainee effectiveness. Journal of Counseling Psychology, 28, 47–52. doi:10.1037/0022-0167.28.1.47
Donabedian, A. (1988). The quality of care: How can it be assessed? Journal of the American Medical Association, 260, 1743–1748. doi:10.1001/jama.1988.03410120089033
Ellis, M., & Ladany, N. (1997). Inferences concerning supervisees and clients in clinical supervision: An integrative review. In C. E. Watkins (Ed.), The handbook of psychotherapy supervision (pp. ●●●–●●●). Chichester, UK: Wiley.
Ellis, M. V., Ladany, N., Krengel, M., & Schult, D. (1996). Clinical supervision research from 1981–1993: A methodological critique. Journal of Counseling Psychology, 43, 35–50. doi:10.1037/0022-0167.43.1.35
Falender, C. A., Cornish, J. A. E., Goodyear, R., Hatcher, R., Kaslow, N. J., Leventhal, G., . . . Grus, C. (2004). Defining competencies in psychology supervision: A consensus statement. Journal of Clinical Psychology, 60, 771–785. doi:10.1002/jclp.20013
Falender, C. A., & Shafranske, E. P. (Eds.). (2008). Casebook for clinical supervision: A competency-based approach. Washington, DC: American Psychological Association. doi:10.1037/11792-000
Falender, C. A., & Shafranske, E. P. (2010). Psychotherapy-based supervision models in an emerging competency-based area: A commentary. Psychotherapy: Theory, Research, Practice, Training, 47, 45–50. doi:10.1037/a0018873
Freitas, G. J. (2002). The impact of psychotherapy supervision on client outcome: A critical examination of two decades of research. Psychotherapy: Theory, Research, Practice, Training, 39, 354–367. doi:10.1037/0033-3204.39.4.354
Grey, N., Salkovskis, P., Quigley, A., Clark, D. M., & Ehlers, A. (2008). Dissemination of cognitive therapy for panic disorder in primary care. Behavioural and Cognitive Psychotherapy, 36, 509–520. doi:10.1017/S1352465808004694
Harkness, D., & Hensley, H. (1991). Changing the focus of social work supervision: Effects on client satisfaction and generalized contentment. Social Work, 36, 506–512.
Holloway, E. L. (1984). Outcome evaluation in supervision research. The Counseling Psychologist, 12, 167–174. doi:10.1177/0011000084124014
Holloway, E. L., & Neufeldt, S. A. (1995). Supervision: Its contribution to treatment efficacy. Journal of Consulting and Clinical Psychology, 63, 207–213. doi:10.1037/0022-006X.63.2.207
Johnston, L. H., & Milne, D. L. (2012). How do supervisees learn during supervision? A grounded theory study of the perceived developmental process. The Cognitive Behaviour Therapist, 5, 1–23. doi:10.1017/S1754470X12000013
Karlin, B. E., Brown, G. K., Trockel, M., Cunning, D., Zeiss, A. M., & Taylor, C. B. (2012). National dissemination of cognitive behavioral therapy for depression in the Department of Veterans Affairs health care system: Therapist and patient-level outcomes. Journal of Consulting and Clinical Psychology, 80, 707–718. doi:10.1037/a0029328
Kirkpatrick, D. L. (1967). Evaluation of training. In R. L. Craig & L. R. Bittel (Eds.), Training and development handbook (pp. 87–112). New York, NY: McGraw-Hill.
Lambert, M. J. (1980). Research and the supervisory process. In A. K. Hess (Ed.), Psychotherapy supervision: Theory, research, and practice (pp. 423–450). New York, NY: Wiley.
Milne, D. L. (2007). An empirical definition of clinical supervision. British Journal of Clinical Psychology, 46, 437–447. doi:10.1348/014466507X197415
Milne, D. L. (2009). Evidence-based clinical supervision: Principles and practice. Chichester, UK: BPS Blackwell.
Milne, D. L. (in press). Beyond the acid test: A conceptual review of outcome evaluation in clinical supervision. American Journal of Psychotherapy.
Milne, D. L., Aylott, H., Fitzpatrick, H., & Ellis, M. V. (2008). How does clinical supervision work? Using a best evidence synthesis approach to construct a basic model of supervision. The Clinical Supervisor, 27, 170–190. doi:10.1080/07325220802487915
Milne, D. L., & James, I. (2000). A systematic review of effective cognitive behavioral supervision. British Journal of Clinical Psychology, 39, 111–127. doi:10.1348/014466500163149
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11, 247–266. doi:10.1016/0272-7358(91)90103-2
Reese, R. J., Usher, E. L., Bowman, D. C., Norsworthy, L. A., Halstead, J. L., Rowlands, S. R., & Chisholm, R. R. (2009). Using client feedback in psychotherapy training: An analysis of its influence on supervision and counselor self-efficacy. Training and Education in Professional Psychology, 3, 157–168. doi:10.1037/a0015673
Rodolfa, E., Greenberg, S., Hunsley, J., Smith-Zoeller, M., Cox, D., Sammons, M., . . . Spivak, H. (2013). A competency model for the practice of psychology. Training and Education in Professional Psychology, 7, 71–83. doi:10.1037/a0032415
Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (2003). Evaluation: A systematic approach (7th ed.). Newbury Park, CA: Sage.
Roth, A. D., Pilling, S., & Turner, J. (2010). Therapist training and supervision in clinical trials: Implications for clinical practice. Behavioural and Cognitive Psychotherapy, 38, 291–302. doi:10.1017/S1352465810000068
Schoenwald, S. K., Garland, A. F., Chapman, J. E., Frazier, S. L., Sheidow, A. J., & Southam-Gerow, M. A. (2011). Toward the effective and efficient measurement of implementation fidelity. Administration and Policy in Mental Health, 38, 32–43. doi:10.1007/s10488-010-0321-0
Schoenwald, S. K., Sheidow, A. J., & Chapman, J. E. (2009). Clinical supervision in treatment transport: Effects on adherence and outcomes. Journal of Consulting and Clinical Psychology, 77, 410–421. doi:10.1037/a0013788
Steinhelber, J., Patterson, V., Cliffe, K., & LeGoullon, M. (1984). An investigation of some relationships between psychotherapy supervision and patient change. Journal of Clinical Psychology, 40, 1346–1353. doi:10.1002/1097-4679(198411)40:6<1346::AID-JCLP2270400612>3.0.CO;2-L
Triantafillou, N. (1997). A solution-focused approach to mental health supervision. Journal of Systemic Therapies, 16, 305–328.
Tsui, M.-S. (1997). Empirical research on social work supervision: The state of the art (1970–1995). Journal of Social Service Research, 23, 39–54. doi:10.1300/J079v23n02_03
Watkins, C. E. (2011). Does psychotherapy supervision contribute to patient outcomes? Considering thirty years of research. The Clinical Supervisor, 30, 235–256. doi:10.1080/07325223.2011.619417
Watkins, C. E., & Milne, D. L. (in press). International handbook of clinical supervision. Chichester, UK: Wiley.
Weingardt, K. R., Cucciare, M. A., Bellotti, C., & Lai, W. P. (2009). A randomized trial comparing two models of web-based training in cognitive–behavioral therapy for substance abuse counselors. Journal of Substance Abuse Treatment, 37, 219–227. doi:10.1016/j.jsat.2009.01.002
Westbrook, D., Sedgwick-Taylor, A., Bennett-Levy, J., Butler, G., & McManus, F. (2008). A pilot evaluation of a brief CBT training course: Impact on trainees’ satisfaction, clinical skills and patient outcomes. Behavioural and Cognitive Psychotherapy, 36, 569–579. doi:10.1017/S1352465808004608
Wheeler, S., & Richards, K. (2007). The impact of clinical supervision on counsellors and therapists, their practice and their clients: A systematic review of the literature. Counselling & Psychotherapy Research, 7, 54–65. doi:10.1080/14733140601185274
White, E., & Winstanley, J. (2009). Clinical supervision for nurses working in mental health settings in Queensland, Australia: A randomised controlled trial in progress and emergent challenges. Journal of Research in Nursing, 14, 263–276. doi:10.1177/1744987108101612
Received July 1, 2013
Revision received October 14, 2013
Accepted October 22, 2013
AUTHOR QUERIES
AUTHOR PLEASE ANSWER ALL QUERIES
AQau—Please confirm the given-names and surnames are identified properly by the colors (Given-Name vs. Surname). The colors are for proofing purposes only; they will not appear online or in print.
AQ1—Author: Per APA style, all numbers in the abstract should be written as numerals; this has been corrected for you.
AQ2—Author: For the “Watkins & Milne (in preparation)” citation (where “in preparation” has been replaced with bullets), if this is now “in press,” as indicated in the references, please change the citation to “in press.” If it is not in press, then the date of the most recent draft of the work should be used. Please change, as appropriate, and make consistent throughout.
AQ3—Author: Per APA style, once a concept or expression has been introduced in quotes, it is not necessary to continue to use the quotes throughout; thus, the quotes around “acid test” were retained only on first mention in the abstract and first mention in the text.
AQ4—Author: The sentence beginning “One reason is that the highest duty” is somewhat difficult to follow. Please review and revise for clarity, perhaps by adding punctuation (such as a comma, dash, or semicolon).
AQ5—Author: Beginning with the sentence “Taken together, the reviews summarized above,” per APA style, please revise so as to avoid using “above” when referring to information presented earlier in the article. Please either reword or provide the name of the section where the information can be found rather than using “above.” There are 4 instances of this, so please change this throughout.
AQ6—Author: Per APA style, when a complete sentence follows a colon, the first word following the colon is capitalized. This has been corrected for you.
AQ7—Author: In the sentence “Fidelity in terms of treatment design,” the ending parenthesis is missing. Please insert a parenthesis, where appropriate.
AQ8—Author: The sentence beginning “We have supplemented these criteria” is somewhat difficult to understand as written. Please review and revise for clarity (e.g., “that there is a published description of the training procedures”).
AQ9—Author: Belg et al., 2004 is not included in your references. Please add to the reference list or delete the citation(s).
AQ10—Author: Per APA style, for the Review Objectives section, please provide a sentence or two introducing the numbered list. Also per APA style, the lowercase letters were changed to numbers.
AQ11—Author: Watson, 2011 is not included in your references. Please add to the reference list or delete the citation(s).
AQ12—Author: Bernard & Goodyear, 2004 is not included in your references. Please add to the reference list or delete the citation(s).
AQ13—Author: For the quotation by Milne (beginning “The formal provision”), please confirm whether the semicolon following “(i.e., sanctioned by relevant organization/s)” is correct – should this be a comma instead?
AQ14—Author: In the sentence beginning “We then conducted our own,” because the sentence was not parallel in structure as worded, the keywords were separated into another sentence. Please confirm the change is OK or make any necessary adjustments.
AQ15—Author: In the sentence beginning “The final selection of studies,” it appears that there may be a word missing (e.g., “with very different that might conflate”). Please review and clarify.
AQ16—Author: Roth & Pilling, 2009 is not included in your references. Please add to the reference list or delete the citation(s).
AQ17—Author: Your citation for Steinhelber (1984) was changed to Steinhelber, Patterson, Cliffe, & LeGoullon (1984), as in your references and tables. Please confirm this is correct or make any necessary changes.
AQ18—Author: Your citation for Triantafillou et al. (2007) was changed to Triantafillou (2007). Please confirm this is correct or make any necessary changes.
AQ19—Author: For Table 3, similar to Table 1, please confirm whether a statement regarding permission to use the table is required. If yes, please provide this statement in the table note.
AQ20—Author: The sentence beginning “The recommendations that follow from Table 3” is somewhat unclear – instead of the comma, should there be a dash or colon, or could “that is” be inserted? It’s not quite clear which of these would be appropriate, so please correct this as per your preference.
AQ21—Author: Please provide a page range for Ellis & Ladany (1997).
AQ22—Author: Lambert, 1980 is not cited in the text. Please cite in the text or delete from the references.
AQ23—Author: Milne, in press is not cited in the text. Please cite in the text or delete from the references.
AQ24—Author: Reese et al., 2009 is not cited in the text. Please cite in the text or delete from the references.
AQ25—Author: This journal requires each author to provide a professional biographical paragraph to include the author’s name, highest degree, department and affiliation. If you have not already done so, please provide this information with your proof corrections.