
Journal of Educational and Psychological Consultation, 21:233–258, 2011 Copyright © Taylor & Francis Group, LLC ISSN: 1047-4412 print/1532-768X online DOI: 10.1080/10474412.2011.598017

Fidelity of Problem Solving in Everyday Practice: Typical Training May Miss the Mark

SUSAN F. RUBY, Eastern Washington University
TRICIA CROSBY-COOPER, Azusa Pacific University
MICHAEL L. VANDERWOOD, University of California, Riverside

With national attention on scaling up the implementation of Response to Intervention, problem solving teams remain one of the central components for development, implementation, and monitoring of school-based interventions. Studies have shown that problem solving teams have a sound theoretical base and demonstrated efficacy; however, limited evidence supports their effectiveness and consistent implementation on a wide-scale basis. Few studies have examined the quality and level of fidelity of school-based teams trained with typically available problem solving training models. We present 2 studies that used typical models to support the development of elementary teams and evaluated the impact of the training for the treatment groups compared with control groups using a Problem Solving Team (PST) Rubric adapted by Upah and Tilly (2002). Although some effects of problem solving training were found, overall adherence to a problem solving model and the quality of intervention plan development were significantly below the level considered adequate to ensure intervention effectiveness. We did not find significant effects of training on student outcome measures. We highlight the need for a change in the ways problem solving teams operate and are supported in schools.

Correspondence should be sent to Susan F. Ruby, Department of Psychology, Eastern Washington University, Martin Hall 151, Cheney, WA 99004. E-mail: [email protected]


Schools have utilized a team approach to design and implement academic and/or behavioral interventions for individual "at-risk" students for more than 30 years. Problem solving teams were first introduced in the late 1970s/early 1980s in response to the large number of students evidencing academic and behavior problems and the need to rule out lack of effective instruction as a primary cause of a student's low academic performance (e.g., Chalfant, Pysh, & Moultrie, 1979; Graden, Casey, & Christenson, 1985). The "prereferral" intervention process was intended to differentiate those students who were performing poorly due to lack of an instructional match from those students who were performing poorly due to an inherent "within-child" disability. Through a structured process, teams were asked to identify specific student problems, then select, implement, and monitor appropriate interventions designed to address the barriers that were preventing the student from accessing the general education curriculum. If the "prereferral" process was not able to eliminate the problem, then there was reason to suggest that the concern was not primarily caused by a lack of appropriate instruction and "specialized" instruction was necessary (Salvia & Ysseldyke, 2007).

Over the past three decades, studies have investigated the implementation and perceived effectiveness of the "prereferral" problem solving team process utilizing various methods, ranging from self-report (Flugum & Reschly, 1994) to reviews of permanent products (Telzrow, McNamara, & Hollinger, 2000) and direct observation of team processes (Burns, Peters, & Noell, 2008; Kovaleski, Gickling, Morrow, & Swank, 1999). From this research, others (e.g., Lentz, Allen, & Ehrhardt, 1996; Macmann et al., 1996) have developed and disseminated guidelines for best practice in strong intervention design and have highlighted the need for careful attention to both process and outcomes in problem solving. Most models include expanded components consistent with behavioral consultation (Bergan, 1977; Bergan & Kratochwill, 1990), including (a) problem identification, (b) problem analysis, (c) plan implementation, and (d) problem/plan evaluation.

RESEARCH TO PRACTICE GAP

A consistent theme in the problem solving team literature involves the "research to practice gap" originally identified by D. Fuchs, Fuchs, Harris, and Roberts (1996). Consistently across the literature, positive findings with problem solving team training have been documented, yet there is little evidence that these effects are maintained over time. Although sufficient philosophical and empirical evidence supports the validity of the problem solving team theoretical construct (see Burns, Vanderwood, & Ruby, 2005) and its efficacy within well controlled university-based studies (Burns & Symington, 2002), implementation inconsistencies have prevented widespread effectiveness (Burns et al., 2005).


Burns and Symington (2002) provided meta-analytic results indicating that teams supported by university trainers yield over twice the effect sizes in student outcomes; however, maintenance of these outcomes does not occur once the university staff stop participating (Burns et al., 2008; D. Fuchs & Fuchs, 1989). This pattern suggests a need to examine variables in the natural setting that are related to variance in implementation levels within the general (or everyday) practice of problem solving team implementation.


GENERAL PRACTICE OF PROBLEM SOLVING TEAM IMPLEMENTATION

Although the presence of quality indicators in problem solving has been linked to improved outcomes for students (e.g., Flugum & Reschly, 1994; Kovaleski et al., 1999; Telzrow et al., 2000), many interventions developed by teams lack adequate implementation of these critical components. As McKenna, Rosenfield, and Gravois (2009) pointed out, "a challenge confronting the use of any problem-solving or consultation model is evidence of treatment integrity or fidelity of the process itself" (p. 496). Direct examination of problem solving team inconsistencies reveals a disheartening picture. Truscott, Cohen, Sams, Sanborn, and Frank (2005) noted that although a large majority of states required team practices, few provided specific directions about how to implement them. Moreover, Truscott and colleagues found that teams often lacked consensus regarding their goals, overlooked ecological variables, and recommended low-intensity interventions. Powers (2001) examined problem solving team functioning in southern California and found that teams met irregularly and reactively, as if responding to a crisis, and regularly admired the problem rather than focusing on solutions. Although the research continues to highlight success stories with individual schools (e.g., Bahr & Kovaleski, 2006), it appears that, as Telzrow et al. (2000) reported, "... reliable implementation of problem-solving approaches in schools remains elusive" (p. 458).

To better understand the implementation levels of the problem solving components, Flugum and Reschly (1994) initially administered questionnaires to members of teams whose referred students did not qualify for special education services. Respondents to their study included referring teachers and related service personnel (school psychologists, special education consultants, social workers, and other professionals) in Iowa schools. Flugum and Reschly asked their respondents to report whether specific components (six quality indices of problem solving, expanded from Bergan and Kratochwill's 1990 model) were present in the prereferral interventions carried out prior to the special education evaluation and whether the student's behavior improved.


Their study revealed two general and significant findings: (a) the presence of specific problem solving components was positively related to perceived outcomes for students, and (b) for a majority of cases, problem solving team implementation rates were low for five of their six indices. Specifically, teams demonstrated low implementation levels for behavioral definition of the academic or behavioral problem, direct measurement of the behavior(s) of concern, systematic planning of an intervention, graphing results, and comparing progress-monitoring data with baseline.


LARGE-SCALE PROBLEM SOLVING TEAM IMPLEMENTATION

In response to concerns regarding greater numbers of special education referrals, misidentification of students with learning disabilities, and overrepresentation of minority students in special education, several states developed initiatives in the 1990s for standard prereferral intervention practices utilizing problem solving teams. Although studies such as Burns and Symington's (2002) meta-analysis provide a more exhaustive summary and synthesis of findings across studies examining problem solving in teams, we provide here an overview of two studies that highlight the main findings related to these large-scale implementation efforts.

Telzrow and colleagues (2000) expanded the problem solving team research by (a) using permanent products rather than ratings of implementation, and (b) examining "best" practice plans within a planned large-scale training initiative. Examining effects of Ohio's "Intervention Based Assessment" initiative with multidisciplinary teams, Telzrow et al. (2000) reviewed team plans submitted by districts as their "best case" of problem solving throughout an academic year and rated the fidelity of eight problem solving components. Participating districts were invited by coordinators from regional resource centers to participate in training based upon specific criteria (leadership, presence of a building level team, and means for alternative delivery systems for students). Coordinators conducted training throughout the 1990s; the content and intensity of training varied across regions in the state, as no standardized training was set by the state. In reviewing permanent products, Telzrow et al. (2000) found that teams were best at defining the problem and setting goals and were most challenged with hypothesizing the reason for the problem and treatment integrity. The researchers also rated student outcomes from problem solving teams on a 5-point system and found small but significant relationships between six of the eight problem solving components and ratings of student outcome. Interestingly, the components "Hypothesized Reason for the Problem" and "Treatment Integrity" were not found to significantly correlate with student outcome.


Overall, Telzrow et al. (2000) found that two components, "Clearly Identified Goal" and "Data Indicating Student Response to Intervention," accounted for 8% of the variance in student outcome, and they noted the lack of standardized training for intervention implementation and limited documentation as possible contributors to inconsistent and ineffective team practices.

The Commonwealth of Pennsylvania implemented a more standardized statewide "Instructional Support Team" (IST) model during the 1990s, requiring that schools adopt a 50-school-day process of assessment and intervention for students as a screening measure prior to special education referrals. Initial investigations showed that Pennsylvania's model evidenced decreases in special education rates (Hartman & Fay, 1996) and increased teacher acceptability (Bean et al., 1994). Kovaleski et al. (1999) identified a need to examine more specific outcomes related to the problem solving team training and conducted a study that examined the relationship between implementation level and student academic outcomes (time on task, task completion, and task comprehension). Level of implementation was determined by state coordinated teams using a 103-item checklist of problem solving components, and academic outcomes were rated through direct observation of randomly selected students at both IST and non-IST schools. Not surprisingly, findings revealed that student outcomes were greater for teams with high degrees of fidelity to their problem solving model. However, this study brought about an alarming finding as well: at-risk students who received interventions from trained problem solving teams with lower fidelity to the model evidenced no greater academic outcomes than did at-risk students who received no problem solving team support.

RESPONSE TO INTERVENTION AND THE ROLE OF PROBLEM SOLVING TEAMS

Response to Intervention (RTI) is the practice of providing research-based, quality interventions to all students, monitoring student progress, and making educational decisions based on student response to the intervention (National Association of State Directors of Special Education, 2005). Within RTI, interventions are provided in a multitiered model, with heavy emphasis placed on implementation of quality core (Tier 1) and supplemental (Tier 2) interventions. Students needing more intensive interventions are systematically identified and provided with a combination of standard protocol treatments and interventions designed through problem solving. Students with the most intensive needs are generally referred to building problem solving teams for individualized intervention development, implementation, and monitoring (generally referred to as Tier 3 interventions; however, buildings vary in how they identify the tiers and the role of problem solving teams). According to findings from a recently released Response to Intervention (RTI) Adoption Survey (Spectrum K12 School Solutions, 2009), the implementation of RTI has continued to progress at a rapid rate.


Results from the national adoption survey indicate that 71% of districts within the sample are either implementing RTI in select school buildings, are in the process of districtwide implementation, or are fully utilizing an RTI model districtwide. Although these data impressively speak to the concept of RTI going to scale, the reported implementation levels for specific components of the model reveal significant gaps in practice. Notably, 72% of the district representatives in the sample reported that a common screening assessment is not implemented, 51% reported that research-based interventions are not implemented, and 65% reported that a clearly defined process for RTI is not in place. Remarkably, the most progressive area of RTI implementation reported on the RTI Adoption Survey appears to be in the area of problem solving: 45% reported partial implementation, and 14% reported fully implementing a problem solving approach.

RATIONALE FOR THE CURRENT STUDIES

Given that the problem solving literature consistently identifies a research-to-practice gap, and that the role of problem solving is heightened in systems adopting RTI practices, we believe it is imperative to further examine the impact of two commonly used approaches to training teams to adopt a problem-solving process. One study uses what we label the "district office" support model: a manual and forms were created based on our current knowledge of problem solving teams' best practices, and a brief presentation was given to staff members at each school. In the second study, teams received what we label a "university support" model that included several days of intensive training followed by observations and feedback at meetings at school sites. In both studies, we compared the impact of the training model with schools in the same district that did not receive training.

Our studies expand the original work of Flugum and Reschly (1994) to examine the everyday practice of teams in schools, this time with direct measurement rather than self-report methodology. Both Study 1 and Study 2 examine adherence (fidelity) to a problem solving model. In Study 1, we review permanent products (intervention plans) from team meetings to rate adherence to the problem solving process, and in Study 2, we directly observe and rate adherence to the problem solving process. Across both studies, we use the same rubric to examine the impact of training on treatment schools compared with control schools, and we also conduct additional analyses in each study to examine qualitative factors that may impact the effectiveness of problem solving teams. Given that we are examining the two most common approaches to supporting problem solving teams, these results should inform future practice as schools move toward multitiered support systems (e.g., RTI).


STUDY 1


Method

Participants. Six schools (4 from one large urban school district and 2 from a nearby large urban district) in California volunteered to participate in the study; we randomly assigned 3 schools to the control condition and 3 to treatment. Participating schools had large percentages of students identified as eligible for free/reduced lunch (range from 63% to 95%) and English Language Learner status (22% to 83%). The percentage of students performing at the proficient level on the California Standards Test ranged from 19% to 53.5% in Language Arts and from 32.3% to 74.7% in Mathematics. See Table 1 for descriptive statistics.

TABLE 1 Study 1 and 2 Participating School Demographic Information

Group | School ID | School size | English learners | Free/reduced lunch | Proficient English/language arts standards | Proficient math standards
Study 1 Training | 2 | 439 | 80 | 90 | 24.3 | 32.3
Study 1 Training | 3 | 908 | 44 | 64 | 43 | 52.5
Study 1 Training | 5 | 431 | 22 | 63 | 43.4 | 52.5
Study 1 Control | 1 | 628 | 79 | 93 | 22 | 38.5
Study 1 Control | 4 | 790 | 83 | 95 | 19.3 | 35.5
Study 1 Control | 6 | 616 | 32.6 | 73 | 53.5 | 74.7
Mean Study 1 (SD) | | 658.8 (196.7) | 56.8 (27.1) | 79.7 (14.7) | 34.3 (14.2) | 47.7 (15.8)
Study 2 Training | A1 | 518 | 27 | 96 | 13 | 36
Study 2 Training | A2 | 589 | 19 | 88 | 22 | 26
Study 2 Training | A3 | 970 | 43 | 96 | 11 | 17
Study 2 Training | A4 | 1,023 | 16 | 81 | 26 | 38
Study 2 Training | A5 | 790 | 36 | 95 | 14 | 22
Study 2 Training | A6 | 794 | 46 | 97 | 11 | 24
Study 2 Control | B2 | 904 | 24 | 94 | 16 | 28
Study 2 Control | B5 | 557 | 36 | 96 | 9 | 18
Study 2 Control | B6 | 810 | 64 | 96 | 8 | 18
Study 2 Control | B7 | 861 | 35 | 65 | 33 | 41
Study 2 Control | B8 | 816 | 12 | 65 | 39 | 47
Study 2 Control | B9 | 749 | 56 | 100 | — | —
Mean Study 2 (SD) | | 781.8 (158.2) | 34.5 (15.9) | 89.1 (12.3) | 18.4 (10.3) | 28.6 (10.3)

Note. Except for school size, numbers represent percentages of students attending each school. Missing data for School B9 were due to irregular reporting on the state standards test.


Measure. We utilized Upah and Tilly's (2002) 12-item adaptation of Hord, Rutherford, Huling-Austin, and Hall's (1987) innovative configuration to measure quality interventions in schools. Innovative configurations provide a clear way for practitioners to understand the skills related to an innovation (such as problem solving) in concrete, observable, and measurable terms. Upah and Tilly's (2002) configuration evolved from Flugum and Reschly's (1994) research proposing a 6-component indicator and Tilly and Flugum's (1995) 9-component model for best practice in the development and documentation of interventions. Additional research by Upah (1998) found that several of the items proposed in the Tilly and Flugum research could be broken down further, resulting in the current 12-component model presented as best practice for "... designing, implementing, and evaluating quality interventions" (Tilly & Flugum, 1995, p. 484). Scores are based on an ordinal scale ranging from 1 to 5, with total scores on the 12-component rubric ranging from 12 to 60. A score of 5 is considered "best practice" on a problem solving team component, whereas a score of 4 on a specific component is considered "acceptable." Variations of the same component rated 3, 2, or 1 are considered "unacceptable" and "... may render the intervention ineffective" (Upah & Tilly, 2002, p. 495). We evaluated inter-rater agreement (IRA) using 25% of team meeting notes and determined IRA to be 89% using the percent agreement formula. See the Appendix for a copy of Upah and Tilly's (2002) Problem Solving Team (PST) Rubric.

Problem solving training. We provided a 1-hr on-site standardized problem solving training at 3 of the 6 participating elementary schools, with content including an overview of the problem solving process and specific focus on the roles of each team member, how to develop behavioral definitions, and how to establish a progress-monitoring schedule. We reviewed definitions and purposes of problem solving; provided a uniform procedure for parents and teachers to request a team meeting; centralized the meeting location; and defined the members of the team and their roles before, during, and after meetings. Training emphasized (a) the importance of collaborating with colleagues regarding the resources, time, and skill to implement interventions; (b) the importance and necessity of collecting baseline data; and (c) monitoring student progress and the effectiveness of interventions.
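To illustrate the scoring logic described above, the following minimal sketch computes a rubric total and the percent agreement formula used for inter-rater agreement. The component labels and ratings are hypothetical and are not data from the study.

```python
# Sketch of PST Rubric scoring (twelve 1-5 ratings, total range 12-60) and
# inter-rater percent agreement. Component names and ratings are hypothetical.

COMPONENTS = [
    "behavioral_definition", "baseline", "problem_validation", "problem_analysis",
    "goal_setting", "intervention_plan", "measurement_strategy", "decision_rule",
    "progress_monitoring", "formative_evaluation", "treatment_integrity", "summative_evaluation",
]

def total_score(ratings: dict) -> int:
    """Sum the 1-5 ratings across the 12 components (possible range 12-60)."""
    return sum(ratings[c] for c in COMPONENTS)

def percent_agreement(rater_a: dict, rater_b: dict) -> float:
    """Percent agreement formula: exact agreements divided by components rated."""
    agreements = sum(rater_a[c] == rater_b[c] for c in COMPONENTS)
    return 100.0 * agreements / len(COMPONENTS)

if __name__ == "__main__":
    rater_a = {c: 1 for c in COMPONENTS}
    rater_a.update(behavioral_definition=3, baseline=2, intervention_plan=2)
    rater_b = dict(rater_a, baseline=1)          # one disagreement between raters
    print(total_score(rater_a))                  # 16 on the 12-60 scale
    print(percent_agreement(rater_a, rater_b))   # 11/12 agreements -> ~91.7%
```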

Data Collection

Following the problem solving training, we obtained informed consent from staff and teachers. School team coordinators presented parents/guardians with a consent form in their native language (English or Spanish) and sought child assent if the student was noted as present at the team meeting. We collected the district problem solving team meeting forms with all identifying information removed and coded them using Upah and Tilly's (2002) PST Rubric. At the end of the academic year, we also obtained qualitative information from both teachers and principals regarding their perceptions of the team processes and perceived effects.


Results

A total of 45 problem solving team meeting notes were utilized to evaluate adherence to a problem solving model. Mean scores on components of Upah and Tilly's (2002) PST Rubric (with possible scores of 1 to 5, 5 being the highest) ranged from 1.0 (Progress Monitoring, Formative Evaluation, Treatment Integrity Check, and Summative Evaluation) to 2.04 (Measurement Plan). All mean scores were below the level of 4 deemed "acceptable" for quality interventions by Upah and Tilly. To evaluate differences between groups on the PST Rubric, we utilized an independent samples t test. Results were significant, t(43) = 3.90, p < .001. Schools with problem solving training received higher total scores on the PST Rubric (M = 17.15, SD = 2.13) than control schools (M = 14.80, SD = 1.89).

We further examined differences on three items deemed "critical components" by Flugum and Reschly (1994): Behavioral Definition, Baseline, and Intervention Plan. The analyses failed to reveal a significant effect for behavioral definition, t(43) = 1.86, p = .07, or baseline, t(43) = 1.94, p = .06, but revealed a significant effect for intervention plan, t(43) = 1.6, p < .05. Control schools (M = 2.0, SD = .98) received higher ratings on the intervention plan component of the PST Rubric than schools with training (M = 1.50, SD = .61). The analysis for progress monitoring was not conducted because the standard deviation for both the control and treatment groups was 0.

In addition to quantitative data, 117 teachers provided qualitative feedback on their experience with the team process. One common theme expressed by teachers involved the level of administrative support; many teachers indicated they did not receive the level of support they felt they needed. At the same time, a majority of the teachers, regardless of treatment group assignment (i.e., training or no training), indicated they had received either no training or training several years prior. Many teachers reported they felt ill prepared regarding the understanding and purpose of problem solving teams.
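The group comparison reported above amounts to an independent samples t test on total rubric scores. A sketch using SciPy with placeholder scores (not the study data) is shown below.

```python
# Sketch of an independent samples t test comparing total PST Rubric scores
# for training vs. control schools; the meeting totals below are placeholders.
import numpy as np
from scipy import stats

training_totals = np.array([17, 19, 16, 18, 15, 20, 17, 16])  # hypothetical totals
control_totals = np.array([14, 15, 13, 16, 15, 14, 16, 15])

t_stat, p_value = stats.ttest_ind(training_totals, control_totals)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"training: M = {training_totals.mean():.2f}, SD = {training_totals.std(ddof=1):.2f}")
print(f"control:  M = {control_totals.mean():.2f}, SD = {control_totals.std(ddof=1):.2f}")
```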

STUDY 2

Method

Design. Six elementary schools within a large, ethnically diverse, primarily low socioeconomic status school district in California volunteered to receive problem solving training and evaluation of their team process in order to receive feedback regarding their adherence to the process. We randomly chose another 6 elementary schools within the same district as controls. Three of the selected control schools chose not to participate; thus, we invited an additional 3 schools from our randomly generated list.


Study 2 represents a quasi-experimental mixed design with time serving as the within-group factor and training as the between-groups factor. Team meetings were nested within schools. We hypothesized that teams receiving problem solving training would demonstrate higher quality of intervention design.

Participants. An overview of demographic variables, including population size, percentage of English Language Learners, percentage of students receiving free and reduced lunch, and percentage of students meeting proficiency on the state standards test, appears in Table 1. Schools in Study 2 had significantly lower percentages of students meeting proficiency on the state standards test. The percentage of English Language Learners ranged from 12% to 64%. Free and reduced lunch status ranged from 65% to 100%, with training schools having slightly higher and less variable rates (M = 92.2, SD = 6.4) than control schools (M = 86, SD = 16.4). Teams from schools in Study 2 were being asked by their district to hold prereferral meetings prior to making a referral for special education. Teams reported holding at least two to three prereferral meetings per week prior to our study.

We conducted observations at 90 meetings over three time periods. Participants were attendees at the meetings, including team members, teachers who referred students to the team, and parents. Two meetings were dropped because one student was moving directly to a restrictive school campus and the other student moved directly after the meeting. One meeting was included in the quality data but, at the parent's request, was not followed after the meeting. The final sample included participants from 47 meetings in schools receiving problem solving training and participants from 40 meetings in control schools. Roles and participation varied across schools. Teams had an average of 4 participants, not including parents. Parents were in attendance at 58% of the observed meetings, with higher parent attendance at control school meetings (N = 28, 70.0%) than at treatment school meetings (N = 21, 45.7%). The average meeting length was 40 min. Students referred to the problem solving team meetings ranged in age from 5 to 12 and were enrolled in Grades K–6.

Training. University trainers with recognized expertise in problem solving, curriculum-based measurement, and design/implementation of academic and behavior interventions conducted the training with school-based teams. The coordinator of school psychologists in the district offered an information meeting about the training opportunities. At this meeting, schools interested in receiving further training and evaluation were asked to volunteer for designation as treatment site schools. Six schools volunteered to serve as training sites, and the coordinator of school psychologists scheduled and arranged three separate half-day training sessions for the problem solving treatment schools. The first half-day training session described the training process and provided an overview of the problem solving process. The second half-day session focused on development/delivery of academic interventions, and the third half-day session focused on development/delivery of behavior interventions.


On average, two to three team members per school attended, including administrators, counselors, and/or special education teachers. From these meetings, a problem solving team manual and district forms were created and placed on the district's intranet for access. At the beginning of the school year, administrators from all schools (including training and control schools) were presented with an overview of the new district forms. These forms were not mandated for district teams at that time but were available to all through the district intranet server. Following Time 1 of the current evaluation process, we invited team members from the treatment schools to attend a feedback and training session. Trainers provided a mock problem solving team meeting as a model. Members from treatment schools assumed roles in the mock problem solving team meeting, and a university consultant facilitated the meeting. One school contacted the university and asked for site-based training. The consultant visited the site three times and served as facilitator for naturally occurring meetings.

Data Collection

At three separate times over the course of the 2003–2004 and 2004–2005 school years, evaluators attended meetings at each of the 12 schools to evaluate the overall quality of, and adherence to, steps associated with a problem solving model. We trained graduate student researchers to observe meetings and code for quality of intervention design and adherence to the problem solving model. Evaluators utilized a checklist of problem solving features as well as Upah and Tilly's (2002) PST Rubric.

Quality measure. Study 2 also utilized Upah and Tilly's (2002) adaptation of Hord et al.'s (1987) configuration (rubric) to measure quality interventions in schools (see Appendix). In addition to the configuration, the evaluation form contained a checklist to determine the pace and focus of the meeting, premeeting activity, presence or absence of essential people, and the collaborative nature of specific meetings observed. We calculated interrater reliability (IRR) for the quality measure by dividing the number of agreements by the number of observations for 18 (20%) team meetings and found average IRR to be 92% (range from 83% to 100%).

Effectiveness measures. We utilized curriculum-based measures (CBM) of reading, math, and/or writing. We selected the measures based on teachers' primary concerns for students. We administered the CBM probes 1 week following the problem solving team meetings (pretreatment CBM) and 6 weeks later (posttreatment CBM).

Rate of improvement (ROI). We calculated differences between the pre- and posttreatment CBM by calculating the ROI, or slope (L. S. Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993). We divided the difference between post- and pretreatment CBM by the number of weeks between the data points.
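A minimal worked example of this slope computation follows; the CBM scores and interval are hypothetical, not drawn from the study.

```python
# Rate of improvement (ROI): the difference between post- and pretreatment CBM
# scores divided by the number of weeks between the two data points.
# The scores and interval below are hypothetical.

def rate_of_improvement(pre_score: float, post_score: float, weeks_between: float) -> float:
    """Return the ROI (slope) in score units gained per week."""
    return (post_score - pre_score) / weeks_between

# Example: a student reads 42 words correct per minute at pretreatment and
# 54 words correct per minute 6 weeks later -> ROI = 2.0 words gained per week.
print(rate_of_improvement(42, 54, 6))  # 2.0
```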


Goal attainment scaling (GAS). We utilized a GAS version developed by Kratochwill and Stoiber (1999) for cases involving behavior interventions and as a supplementary measure in the evaluation of intervention plan effectiveness. We asked referring teachers to select a goal for students at the 1-week visit following problem solving meetings. We consulted with teachers to scale, weight, and operationalize established goals on a range of positive to negative outcomes (i.e., +2, +1, 0, -1, -2). At the end of the interventions (6 weeks following the problem solving meeting), the established GAS provided an assessment of student progress to determine whether the goal chosen by the teacher was met. GAS has been utilized for several decades across multiple medical and mental health fields as an indicator of response to treatment. Reliability is dependent on the specificity of behavioral expectations; however, no specific information regarding the psychometric properties of GAS was located in the educational literature.
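To make the scaling concrete, the sketch below sets up a hypothetical five-level goal attainment scale for an on-task behavior goal and maps an observed outcome onto the +2 to -2 range. The goal, level descriptors, and observed outcome are invented for illustration and are not from the study.

```python
# Hypothetical goal attainment scale (GAS) for a single behavior goal.
# Descriptors and the observed outcome are illustrative, not study data.

gas_scale = {
    2: "On task 90% or more of observed intervals (much better than expected)",
    1: "On task 75-89% of intervals (better than expected)",
    0: "On task 60-74% of intervals (expected outcome / goal met)",
    -1: "On task 40-59% of intervals (less than expected)",
    -2: "On task below 40% of intervals (much less than expected)",
}

def gas_score(observed_pct_on_task: float) -> int:
    """Map an observed level of the target behavior onto the -2..+2 scale."""
    if observed_pct_on_task >= 90:
        return 2
    if observed_pct_on_task >= 75:
        return 1
    if observed_pct_on_task >= 60:
        return 0
    if observed_pct_on_task >= 40:
        return -1
    return -2

score = gas_score(68.0)             # observed level at the 6-week follow-up
print(score, gas_scale[score])      # 0 -> goal met at the expected level
```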

Results

We examined differences in fidelity to the problem solving process, or quality of intervention design, between teams receiving problem solving training and control schools. Additionally, we analyzed differences in student outcome data in terms of ROI and GAS to consider whether students referred to teams receiving problem solving training demonstrated significantly greater academic or behavioral improvement following the team process.

Quality of problem solving teams. To examine differences in quality of intervention design, we analyzed results using a 2 (Training) × 3 (Time) fixed effects factorial analysis of variance (ANOVA). The interaction term was examined first and found to be nonsignificant, F(2, 87) = .165, p = .848. The test for the main effect of problem solving training also revealed a nonsignificant effect, F(1, 87) = .302, p = .584. The test for the main effect of Time was significant, F(2, 87) = 3.84, p < .05, indicating that significant differences existed in total quality scores across the three time points. Effect size was also analyzed and revealed a small effect, R2 = .086. The homogeneity of variance assumption was evaluated using Levene's test for equality of variance. This analysis revealed a nonsignificant effect, F(5, 82) = 1.58, p = .174, indicating that the assumption was met and variances were equal. Tukey's HSD multiple comparison procedure revealed that intervention quality was significantly higher at Time 3 (M = 18) than at Time 1 (M = 15) regardless of training. The ANOVA results, including effect sizes for each variable and the interaction term, are displayed in Table 2. All means and standard deviations fell below the level described by Upah and Tilly (2002) as acceptable. Table 3 provides means and standard deviations for specific components of the PST Rubric by treatment group. Table 4 provides means and standard deviations for specific components by time.
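The following sketch shows how a 2 (Training) × 3 (Time) fixed-effects factorial ANOVA, Levene's test, and a Tukey HSD follow-up of this kind could be run on per-meeting quality totals. It assumes pandas, SciPy, and statsmodels; the data frame values are invented placeholders rather than the study data.

```python
# Sketch of the 2 (Training) x 3 (Time) factorial ANOVA on meeting quality
# scores, with Levene's test and Tukey's HSD follow-up. Data are placeholders.
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per observed meeting: group, observation time, total quality score.
df = pd.DataFrame({
    "training": ["treatment", "treatment", "control", "control"] * 6,
    "time":     [1, 2, 3] * 8,
    "quality":  [29, 31, 36, 28, 30, 35, 27, 32, 34, 30, 29, 37,
                 28, 33, 35, 26, 31, 38, 29, 30, 36, 27, 34, 33],
})

# Factorial ANOVA including the Training x Time interaction term.
model = ols("quality ~ C(training) * C(time)", data=df).fit()
print(anova_lm(model, typ=2))

# Levene's test for homogeneity of variance across the six cells.
cells = [g["quality"].values for _, g in df.groupby(["training", "time"])]
print(stats.levene(*cells))

# Tukey's HSD comparing the three time points (collapsing over training).
print(pairwise_tukeyhsd(df["quality"], df["time"]))
```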


TABLE 2 Study 2 ANOVA Summary Table for Differences in Problem Solving Team Quality

Source | df | SS | MS | F | R2
Training | 1 | 4.73 | 4.73 | .30 | .025
Time | 2 | 120.00 | 60.00 | 3.83* |
Training × Time | 2 | 5.168 | 2.58 | |
Within-groups error | 82 | 1281.2 | 15.63 | |
Total | 87 | 1411.10 | | |

Note. N = 88. *p < .05.

Aside from the quality measure of problem solving, frequencies of meetings evidencing major features of problem solving teams were determined. No meetings had an agenda present or a timekeeper actively visible or designated. Three control school meetings and no training school meetings had a goal stated with a value of 3 or above. By Time 3 observations, three out of six training school meetings and two out of six control school meetings were using the new district problem solving forms.

Student outcomes. The Pearson product-moment correlation between the ROI and GAS outcome measures indicated a moderate, positive relation (r = .54, p < .05) and suggested that direct measurement with ROI provided criterion-related evidence for use of GAS. Thus, results were analyzed using two independent samples t tests for ROI and GAS scores. The analysis for ROI failed to reveal a significant difference between the two groups, t(32) = 1.24, p = .552. The analysis for GAS also failed to reveal a significant difference between the two groups, t(34) = .449, p = .512.

GENERAL DISCUSSION

In an attempt to examine the impact of typical training models designed to help teams develop problem solving skills, we conducted two studies to compare a typical "district" and "university" support model with teams that did not receive training. Both studies examined fidelity of problem solving with teams that had received little to no training in problem solving prior to our support. All teams reported prior awareness that problem solving team activity and prereferral intervention must take place in order to provide students with more intense services (i.e., special education). Both studies attempted to examine the impact of training and support by comparing treatment with control schools, but the form and intensity of training varied by study. Although Study 1 revealed overall differences between problem solving and control group products as measured by the PST Rubric, differences between training and control teams were not found for the "critical" elements of problem solving identified by Flugum and Reschly (1994).

TABLE 3 Study 2 Quality Means and Standard Deviations by Schools Within Groups

School | N | Def | Base | PVal | PAnal | Goal | IPlan | Meas | Rule | PMon | Integ | Qual
A1 | 14 | 2.43 (1.02) | 1.43 (0.76) | 2.00 (0.96) | 2.21 (0.80) | 1.07 (0.27) | 2.07 (0.92) | 1.14 (0.36) | 1.07 (0.27) | 1.07 (0.27) | 1.07 (0.27) | 31.14 (7.56)
A2 | 7 | 2.29 (0.76) | 1.14 (0.38) | 1.57 (0.54) | 1.86 (0.90) | 1.14 (0.38) | 1.57 (0.54) | 1.29 (0.49) | 1.00 (0.00) | 1.29 (0.49) | 1.00 (0.00) | 28.29 (3.35)
A3 | 10 | 2.70 (1.06) | 1.90 (0.99) | 2.90 (0.74) | 2.60 (1.08) | 1.40 (0.52) | 1.90 (1.10) | 1.40 (0.70) | 1.00 (0.00) | 1.20 (0.42) | 1.10 (0.32) | 36.20 (9.36)
A4 | 9 | 2.56 (0.88) | 2.33 (1.32) | 2.67 (1.23) | 2.33 (1.00) | 1.00 (0.00) | 1.22 (0.44) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 32.22 (7.84)
A5 | 2 | 1.50 (0.71) | 1.00 (0.00) | 1.50 (0.71) | 1.00 (0.00) | 1.00 (0.00) | 1.50 (0.71) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 23.00 (1.41)
A6 | 5 | 1.40 (0.89) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 20.80 (1.79)
PST group | 47 | 2.15 (0.89) | 1.47 (0.58) | 1.94 (0.70) | 1.83 (0.63) | 1.10 (0.20) | 1.54 (0.62) | 1.14 (0.26) | 1.01 (0.05) | 1.09 (0.20) | 1.03 (0.10) | 28.61 (5.22)
B2 | 7 | 2.00 (0.58) | 1.29 (0.49) | 1.71 (0.76) | 2.00 (0.58) | 1.00 (0.00) | 1.43 (0.54) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 26.86 (3.02)
B5 | 9 | 2.56 (1.01) | 2.11 (1.05) | 2.22 (1.20) | 1.56 (0.53) | 1.00 (0.00) | 1.33 (0.50) | 1.00 (0.00) | 1.00 (0.00) | 1.22 (0.44) | 1.56 (0.88) | 31.11 (6.25)
B6 | 6 | 2.00 (0.00) | 1.33 (0.52) | 1.50 (0.55) | 2.00 (0.89) | 1.00 (0.00) | 2.17 (0.98) | 1.00 (0.00) | 1.33 (0.82) | 1.00 (0.00) | 1.00 (0.00) | 28.67 (3.01)
B7 | 8 | 2.13 (0.64) | 1.88 (0.99) | 2.38 (1.06) | 2.00 (0.93) | 1.00 (0.00) | 1.25 (0.46) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 1.00 (0.00) | 29.25 (5.75)
B8 | 8 | 3.00 (0.76) | 2.50 (1.41) | 2.63 (0.92) | 3.00 (0.76) | 1.63 (0.74) | 2.38 (0.52) | 1.50 (0.54) | 1.25 (0.46) | 1.13 (0.35) | 1.00 (0.00) | 40.00 (8.75)
B9 | 3 | 3.67 (0.58) | 3.00 (1.00) | 2.67 (0.58) | 2.67 (1.16) | 2.33 (1.16) | 2.67 (0.58) | 1.67 (1.15) | 2.33 (1.53) | 1.67 (0.58) | 1.00 (0.00) | 47.33 (8.08)
Control group | 41 | 2.56 (0.60) | 2.02 (0.91) | 2.19 (0.85) | 2.21 (0.81) | 1.33 (0.32) | 1.87 (0.60) | 1.20 (0.28) | 1.32 (0.47) | 1.17 (0.23) | 1.09 (0.15) | 33.87 (5.81)
Total | 88 | 2.35 (0.74) | 1.74 (0.74) | 2.06 (0.77) | 2.02 (0.72) | 1.21 (0.26) | 1.71 (0.61) | 1.17 (0.27) | 1.17 (0.26) | 1.13 (0.21) | 1.06 (0.12) | 31.24 (5.51)

Note. Def = Problem Definition; Base = Baseline; PVal = Problem Validation; PAnal = Problem Analysis; Goal = Goal Setting; IPlan = Intervention Plan; Meas = Measurement Strategy; Rule = Decision Rule; PMon = Progress Monitoring; Integ = Treatment Integrity; Qual = Percentage of Total Possible Quality PST Sum; PST = Problem Solving Team. See Appendix for specific Quality variable definitions. Scores of 5 are considered best practice of the PST component; 4 is considered acceptable; variations of the same component rated 3, 2, or 1 are considered unacceptable and may render the intervention ineffective (Tilly & Flugum, 1995).

TABLE 4 Study 2 Quality Means and Standard Deviations by Time Within Groups

Time | N | Def | Base | PVal | PAnal | Goal | IPlan | Meas | Rule | PMon | Integ | Qual
1 | 37 | 2.16 (0.83) | 1.41 (0.73) | 1.95 (0.85) | 2.16 (0.90) | 1.08 (0.28) | 1.54 (0.56) | 1.16 (0.37) | 1.00 (0.00) | 1.11 (0.32) | 1.03 (0.16) | 29.19 (7.16)
2 | 27 | 2.26 (0.71) | 1.74 (0.86) | 2.44 (1.05) | 1.89 (0.80) | 1.15 (0.46) | 1.63 (0.74) | 1.11 (0.32) | 1.07 (0.27) | 1.07 (0.27) | 1.04 (0.19) | 30.81 (6.91)
3 | 24 | 2.92 (1.06) | 2.33 (1.31) | 2.13 (1.12) | 2.29 (1.08) | 1.33 (0.64) | 1.90 (1.10) | 1.21 (0.59) | 1.29 (0.75) | 1.17 (0.38) | 1.21 (0.59) | 35.83 (9.90)

Note. Def = Problem Definition; Base = Baseline; PVal = Problem Validation; PAnal = Problem Analysis; Goal = Goal Setting; IPlan = Intervention Plan; Meas = Measurement Strategy; Rule = Decision Rule; PMon = Progress Monitoring; Integ = Treatment Integrity; Qual = Percentage of Total Possible Quality PST Sum; PST = Problem Solving Team. See Appendix for specific Quality variable definitions. Scores of 5 are considered best practice of the PST component; 4 is considered acceptable; variations of the same component rated 3, 2, or 1 are considered unacceptable and may render the intervention ineffective (Tilly & Flugum, 1995).


FIGURE 1 Mean Ratings of Problem Solving Team (PST) Components in Meetings and Forms. Note. Def = Problem Definition; Base = Baseline; PVal = Problem Validation; PAnal = Problem Analysis; Goal = Goal Setting; IPlan = Intervention Plan; Meas = Measurement Strategy; Rule = Decision Rule; PMon = Progress Monitoring; Form = Formative Evaluation; Integ = Treatment Integrity; Sum = Summative Evaluation. Scores of 5 are considered best practice of the PST component; 4 is considered acceptable; variations of the same component rated 3, 2, or 1 are considered unacceptable and may render the intervention ineffective (Tilly & Flugum, 1995).

Study 2 revealed statistically significant improvement in problem solving fidelity over time for both trained and untrained teams. Consistently across both studies, even the highest scores on the PST Rubric did not approach a level considered acceptable for quality intervention design and implementation (see Figure 1). Most important, little to no difference in overall effect was found even with the significantly more intense training provided in Study 2. This suggests that factors other than training play a role in whether or not teams engage in a problem solving process for students who have academic or behavior concerns. To highlight the importance of our results, we propose factors that influence the effectiveness of problem solving teams and suggest the need for investigation of teaming within a multitiered support system.

Possible Explanatory Factors for the Low Fidelity Measures


Although a logical first hypothesis for the low fidelity measures might be that training was not sufficient for teams, previous studies with much more intensive training have shown initially impressive responses to problem solving training (e.g., Burns et al., 2008; D. Fuchs et al., 1996) followed by poor maintenance over time. We believe the answer is much more complex and involves systems issues within schools. Several factors that were not addressed in the designs of our studies may play a significant role in determining the effectiveness of problem solving training.

First, both studies examined training for problem solving teams within the context of a traditional refer-test-place model. Although teams were aware that intervention was required before a special education referral could occur, and although training emphasized the impact of data-based decision making and problem solving on student outcomes, teams were not required to use their data as part of the process to identify a student as eligible for special education. The benefit of problem solving emphasized in the training manuals and during professional development opportunities was that, through problem solving, schools would be able to provide early, systematic intervention in hopes of reducing or eliminating the need for special services. Based on follow-up surveys, it appears that some staff members viewed the problem solving team efforts as simply a step in the special education referral process. To address concerns similar to those we identified in our studies, Burns and colleagues (2005) suggested it is important to determine whether problem solving teams are more effective (i.e., improve student outcomes) when they are made a critical part of the special education process. For example, if teams fully implement multitiered support systems, problem solving becomes part of the school culture and a need for data-based decision making is created (D. Fuchs, Mock, Morgan, & Young, 2003). It seems logical to assume that when the essential components of problem solving, such as progress monitoring, goal setting, and data-based decision making, have a road map for use, teams have an accountability system to assure that treatments are empirically based (as defined by the Individuals with Disabilities Education Improvement Act of 2004 in its response-to-intervention eligibility clause) and delivered with integrity.

A second factor that may have impacted our results is team membership. One hypothesis to explain why 2 control schools in Study 1 demonstrated higher quality problem solving than most of our treatment schools involves who was on the team. As mentioned in the literature review, Burns and Symington (2002) found that having special education professionals on teams actually improved student outcomes. Study 1 had very little participation of special education professionals on the teams in the treatment schools, yet there was some participation of special education teachers in the control group schools. The skill to create a goal and develop a measurement strategy is clearly something that special education professionals can bring to a problem solving team.


Kovaleski (2002) notes that regardless of individual consultation skills, a number of system factors determine whether prereferral teams are accepted in schools and function effectively, including an established team format, principal leadership, a mandated prereferral intervention policy, assignment of staff, and training. These variables represent constructs that must be investigated through both indirect and direct assessment of school climate. Although we indirectly addressed school climate through open-ended questions, we did not thoroughly examine factors such as staff perceptions of principal leadership and leadership roles within the problem solving process.

Another factor that may have influenced our results is the amount of time it takes to create systemic change in culture and staff beliefs. In both studies, comments from teachers clearly identified misunderstandings and frustrations about the purpose of the problem solving team process, even though the purpose of improving student outcomes was emphasized strongly in the professional development materials and the problem solving team manual. Systems-level change must be considered when districts seek to make large-scale changes such as those attempted in these studies. Curtis and Stollar (2002) highlight the need for commitment and involvement of decision makers and stakeholders within systems and suggest that realistic, specific goals must be established early when implementing organizational change efforts.

Many of the limitations mentioned earlier are addressed when problem solving teams are embedded in a schoolwide delivery system. Rosenfield and Gravois's (1996) Instructional Consultation Teams (IC Teams) provide a model in which all members of the building problem solving team are trained and expected to serve as case managers and consultants to classroom teachers. Rosenfield (2008) argues that building level team meetings provide insufficient time for the work involved in problem solving and that large team meetings are not conducive to the development of working relationships with teachers. The IC Team model does not negate the building level team; this team supports, manages, and monitors the progress of work completed in teacher-consultant dyads. McKenna et al. (2009) provide evidence for fidelity of the problem-solving process with IC Teams and offer a Level of Implementation scale that may serve as a model for other consultation approaches.

FUTURE RESEARCH NEEDED

As the movement toward implementing multitiered support systems grows, it appears very important that time be spent developing problem solving skills and that efforts be made to examine the quality and outcomes of the problem solving process. If schools in fact move toward an RTI approach, the purpose and functions of teams may need to be reconsidered. Kovaleski and Glew (2006) point out that the previous focus of problem solving teams has been on the individual child.


With an RTI approach, the authors argue, a "one-student-at-a-time approach" (p. 22) may not produce systematic change; instead, the purpose of teams should align with goals to also assist with the restructuring of general and remedial education programs. To do so, teams will need to consider schoolwide, grade, program, and classroom level data to make decisions and assume responsibilities for data analysis and problem solving at multiple levels. Rather than implementing problem solving in isolated team meetings, it may be important to train schools to adopt the concepts of problem solving and data-based decision making for educational service delivery systems across an entire district, school, classroom, and specific groups (e.g., students at risk for academic problems).

Fletcher, Coulter, Reschly, and Vaughn (2004) identify specific challenges that should be carefully considered by local educational agencies and trainers in considering RTI implementation. These challenges, based on recent research in the field (Denton, Vaughn, & Fletcher, 2003; D. Fuchs et al., 2003), include (a) preparing professionals to adequately implement research-based screenings, (b) preparing and offering ongoing technical assistance and support for educators to progress monitor and implement RTI effectively, (c) monitoring the progress of students who have individualized intervention plans in place, and (d) obtaining materials and resources to successfully implement these research-based practices. Fletcher et al. point out that "although it may appear that resources are inadequate to implement these changes, the real task is to utilize more effectively the resources that presently exist with a focus on improved student outcomes through better educational practices" (2004, p. 321).

It is clear that problem solving skills are necessary to improve student outcomes. In the large-scale initiatives that have been somewhat successful (e.g., Telzrow et al., 2000), significant time was spent on helping teams develop these skills, yet other factors such as school climate and accountability for results and outcomes were also emphasized. It is clear from our two studies that training, whether it is the typical district model of brief professional development with supporting forms and manuals or more intensive support provided by university faculty, is not sufficient in settings that have not created a culture of problem solving. It is possible that the movement toward multitiered support systems will help create this culture, but clear and direct research about problem solving within these settings is critical and should inform future training and systems reform efforts.

REFERENCES Bahr, M. W., & Kovaleski, J. F. (2006). The need for problem-solving teams: Introduction to the special issue. Remedial and Special Education, 27, 2–5. Bean, R. M., Trovato, C. A., Armitage, C., Dugan, J., Gromet, J., & Hoffman, D. (1994). Coordination of Chapter I reading programs and the instructional support team process: Summary report. Harrisburg, PA: Pennsylvania Department of Education. Bergan, J. R. (1977). Behavioral consultation. Columbus, OH: Merrill.

Downloaded by [Susan Ruby] at 14:24 19 August 2011

252

S. F. Ruby, T. Crosby-Cooper, and M. L. Vanderwood

Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York, NY: Plenum. Burns, M. K., Peters, R., & Noell, G. H. (2008). Using performance feedback to enhance implementation fidelity of the problem-solving team process. Journal of School Psychology, 46, 537–550. Burns, M. K., & Symington, T. (2002). A meta-analysis of pre-referral intervention teams: Student and systemic outcomes. Journal of School Psychology, 40, 437–447. Burns, M. K., Vanderwood, M. L., & Ruby, S. (2005). Evaluating the readiness of pre-referral intervention teams for use in a problem solving model. School Psychology Quarterly, 20, 89–105. Chalfant, J. C., Pysh, M. V., & Moultrie, R. (1979). Teacher assistance teams: A model for within-building problem solving. Learning Disabilities Quarterly, 2, 85–95. Curtis, M. J., & Stollar, S. A. (2002). Best practices in systems-level change. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 223– 234). Bethesda, MD: National Association of School Psychologists. Denton, C. A., Vaughn, S., & Fletcher, J. M. (2003). Bringing research-based practice in reading intervention to scale. Learning Disabilities Research & Practice, 18, 201–211. Fletcher, J., Coulter, W. A., Reschly, D. J., & Vaughn, S. (2004). Alternative approaches to the definition and identification of learning disabilities: Some questions and answers. Annals of Dyslexia, 54(2), 304–331. Flugum, K. R., & Reschly, D. J. (1994). Prereferral interventions: Quality indices and outcomes. Journal of School Psychology, 32, 1–14. Fuchs, D., & Fuchs, L. S. (1989). Exploring effective and efficient prereferral interventions: A component analysis of behavioral consultation. School Psychology Review, 18, 260–283. Fuchs, D., Fuchs, L. S., Harris, A. H., & Roberts, P. H. (1996). Bridging the researchto-practice gap with mainstream assistance teams: A cautionary tale. School Psychology Quarterly, 11, 244–266. Fuchs, D., Mock, D., Morgan, P. L., & Young, C. (2003). Responsiveness to intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157–171. Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27–48. Graden, J. L., Casey, A., & Christenson, S. (1985). Implementing a prereferral intervention system: I. The model. Exceptional Children, 51, 377–384. Hartman, W. T., & Fay, T. A. (1996). Cost-effectiveness of instructional support teams in Pennsylvania. Journal of Education Finance, 21, 555–580. Hord, S. M., Rutherford, W. L., Huling-Austin, L., & Hall, G. E. (1987). Taking charge of change. Austin, TX: Southwest Education Development Laboratory. Individuals with Disabilities Education Improvement Act of 2004, 20 U.S.C.  1400 et seq. Kovaleski, J. F. (2002). Best practices in operating pre-referral intervention teams. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 645–655). Washington, DC: National Association of School Psychologists. Kovaleski, J. F., Gickling, E. E., Morrow, H., & Swank, P. R. (1999). High versus low implementation of instructional support teams: A case for maintaining program fidelity. Remedial and Special Education, 20, 170–183.

Kovaleski, J. F., & Glew, M. C. (2006). Bringing instructional support teams to scale: Implications of the Pennsylvania experience. Remedial and Special Education, 27, 16–25.
Kratochwill, T. R., & Stoiber, K. C. (1999). Goal attainment scaling: Designing and using goal attainment scaling to monitor progress and document intervention outcomes (Instructor manual). Des Moines, IA: Iowa Department of Education.
Lentz, F. E., Jr., Allen, S. J., & Ehrhardt, K. E. (1996). The conceptual elements of strong interventions in school settings. School Psychology Quarterly, 11, 118–136.
Macmann, G. M., Barnett, D. W., Allen, S. J., Bramlett, R. K., Hall, J. D., & Ehrhardt, K. E. (1996). Problem solving and intervention design: Guidelines for the evaluation of technical adequacy. School Psychology Quarterly, 11, 137–148.
McKenna, S. A., Rosenfield, S., & Gravois, T. A. (2009). Measuring the behavioral indicators of instructional consultation: A preliminary validity study. School Psychology Review, 38, 496–509.
National Association of State Directors of Special Education. (2005). Response to Intervention: Policy considerations and implementation. Alexandria, VA: National Association of State Directors of Special Education.
Powers, K. (2001). Problem solving student support teams. The California School Psychologist, 6, 19–30.
Rosenfield, S. (2008). Best practices in instructional consultation and instructional consultation teams. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (5th ed., pp. 1645–1659). Bethesda, MD: National Association of School Psychologists.
Rosenfield, S., & Gravois, T. A. (1996). Instructional consultation teams: Collaborating for change. New York, NY: Guilford Press.
Salvia, J., & Ysseldyke, J. (2007). Assessment in special and inclusive education. New York, NY: Wadsworth.
Spectrum K12 School Solutions. (2009). Response to Intervention (RTI) adoption survey. Retrieved from http://www.spectrumk12.com
Telzrow, C., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29, 443–461.
Tilly, W. D., III, & Flugum, K. R. (1995). Best practices in ensuring quality interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (3rd ed., pp. 485–500). Bethesda, MD: National Association of School Psychologists.
Truscott, S. D., Cohen, C. E., Sams, D. P., Sanborn, K. J., & Frank, A. J. (2005). The current state(s) of prereferral intervention teams: A report from two national surveys. Remedial and Special Education, 26, 130–140.
Upah, K. R. F. (1998). School-based problem-solving interventions: The impact of training and documentation on the quality and outcomes (Unpublished doctoral dissertation). Department of Psychology, Iowa State University, Ames, IA.
Upah, K. R. F., & Tilly, W. D., III. (2002). Best practices in designing, implementing, and evaluating quality interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 483–499). Bethesda, MD: National Association of School Psychologists.

Susan F. Ruby, PhD, NCSP, is an Assistant Professor and Director of the School Psychology Specialist Level Program at Eastern Washington University. She is involved with training and research efforts related to systems change in schools adopting Response to Intervention models.

Tricia Crosby-Cooper, PhD, NCSP, is an Assistant Professor at Azusa Pacific University and a practicing school psychologist. She teaches classes in Response to Intervention (RtI) and positive behavioral supports and recently co-developed a Student Success Team (SST) for her current school district.

Michael L. Vanderwood, PhD, NCSP, is an Associate Professor and Director of the School Psychology Program at the University of California, Riverside. He has conducted extensive training and research related to assessment and intervention with English Language Learners and Response to Intervention.

Note: The authors report that to the best of their knowledge neither they nor their affiliated institutions have financial or personal relationships or affiliations that could influence or bias the opinions, decisions, or work presented in this manuscript.

APPENDIX
PST Rubric From Upah & Tilly (2002)

Problem identification stage rubric

Behavioral definition
5 = Definition is (a) objective—refers to observable and measurable characteristics of behavior; (b) clear—so unambiguous that it could be read, repeated, and paraphrased by observers; and (c) complete—delineates both examples and nonexamples of the behavior.
4 = Definition meets only two of the three criteria (i.e., objective, clear, complete).
3 = Definition meets only one of the three criteria (i.e., objective, clear, complete).
2 = Problem behavior is stated in general terms (e.g., reading comprehension, aggressive behavior, etc.).
1 = Behavioral definition is not written.

Baseline data
5 = Data collected on the behavior prior to implementing the intervention, consisting of repeated measures of the target behavior over several (at least three) sessions, days, or even weeks until a stable range of behavior has been identified.
4 = Data collected on the behavior prior to implementing the intervention; however, only two data points are reported.
3 = Data collected on the behavior prior to implementing the intervention; however, only one data point is reported.
2 = Data collected on the behavior prior to implementing the intervention; however, the dimension(s) addressed are not the most appropriate for the selected target behavior.
1 = Baseline data not gathered prior to implementing the intervention.

Problem validation
5 = The magnitude of the discrepancy is quantified, based on a comparison between the student's performance and the local educational setting demands.
4 = The magnitude of the discrepancy is quantified, based on a comparison between the student's performance and standards outside the local educational setting.
3 = The magnitude of the discrepancy is quantified but is based on an opinion.
2 = The magnitude of the discrepancy is described qualitatively.
1 = Problem is not validated; magnitude of the discrepancy is not described.

Problem analysis stage rubric

Problem analysis
5 = Examined relevant and alterable factors from curriculum, instruction, environment, and student domains using a variety of procedures (RIOT: review, interview, observe, test) to collect data from a variety of relevant sources and settings. Used this information to develop a specific intervention to change the behavior.
4 = Examined relevant and alterable factors from two to three domains only, using two to three procedures to gather information. Used this information to develop a specific intervention to change the behavior.
3 = Examined relevant and alterable factors from only the student domain, using a variety of procedures (RIOT) to collect data from a variety of relevant sources and settings. Used this information to develop a specific intervention to change the behavior.
2 = Examined relevant and alterable factors from the domains only, using a variety of procedures to gather information from a variety of sources. However, there is no indication this information was used to develop a specific intervention to change the behavior.
1 = Problem analysis is not conducted.

Plan implementation stage rubric

Goal setting
5 = Goal stated narratively and represented graphically on a performance chart specifying time frame, condition, behavior, and criterion, which is based on a comparison between the student's baseline data and the expectations.
4 = Goal represented graphically on a performance chart specifying time frame, behavior, criterion, and condition—not stated narratively.
3 = Goal stated narratively specifying time frame, behavior, criterion, and condition—not represented graphically.
2 = Goal stated narratively and/or represented graphically on a performance chart but does not specify all four components (time frame, condition, behavior, criterion).
1 = Goal is not measurable or is not set.

Intervention plan
5 = Plan stated (a) procedures/strategies, (b) materials, (c) when, (d) where, and (e) persons responsible.
4 = Plan stated procedures/strategies, but one of the following components is missing: materials, when, where, or persons responsible.
3 = Plan stated procedures/strategies, but two of the following components are missing: materials, when, where, or persons responsible.
2 = Generic description of intervention strategy (e.g., behavior management) is stated. Materials, when, where, and persons responsible may be present.
1 = Intervention plan not written, or generic descriptions of intervention (e.g., behavior management) only.

Measurement strategy
5 = A measurement strategy is developed answering how? what? where? who? and when?
4 = A measurement strategy is developed but only answers four of the five questions: how? what? where? who? and when?
3 = A measurement strategy is developed but only answers three of the five questions: how? what? where? who? and when?
2 = A measurement strategy is developed but only answers two of the five questions: how? what? where? who? and when?
1 = Measurement strategy is not developed, or the measurement strategy only answers one of the five questions.

Decision-making plan
5 = The decision-making plan indicates (a) how frequently data will be collected, (b) the strategies to be used to summarize the data for evaluation, (c) how many data points or how much time will occur before the data will be analyzed, and (d) what actions will be taken based on the intervention data.
4 = The decision-making plan indicates three of the four components.
3 = The decision-making plan indicates two of the four components.
2 = The decision-making plan indicates only one of the four components.
1 = The decision-making plan is not documented.

Plan evaluation stage rubric

Progress monitoring
5 = Data are collected and charted/graphed two to three times per week. Appropriate graphing/charting conventions were used (e.g., descriptive title, meaningful scale captions, appropriate scale units, intervention phases labeled).
4 = Data are collected and charted/graphed once a week. Appropriate graphing/charting conventions were used.
3 = Data are collected and charted/graphed irregularly and infrequently (less than once a week but more than pre and post). Appropriate graphing/charting conventions may or may not be used.
2 = Data are collected but not charted or graphed, or only pre- and post-information was collected and/or charted/graphed.
1 = Progress-monitoring data not collected.

Formative evaluation
5 = There is evidence the decision rule was followed and visual analysis was conducted. These data were used to modify or change the intervention as necessary.
4 = There is evidence the decision rule was followed and visual analysis was conducted, but the data were not used to modify or change the intervention as necessary.
3 = Modifications or changes were made to the intervention based on subjective data.
2 = Modifications or changes were made to the intervention, but there is no indication as to what data were used to make these changes.
1 = No formative evaluation was conducted.

Treatment integrity
5 = Degree of treatment integrity measured and monitored. Plan is implemented as designed, including decision-making rules. Intervention changed/modified as necessary on the basis of objective data.
4 = Degree of treatment integrity addressed. Plan was implemented as designed and modified as necessary on the basis of subjective opinions.
3 = Degree of treatment integrity addressed. Plan was implemented with variations from the original design, with no basis for change stated.
2 = Treatment integrity addressed, but intervention was not implemented as planned.
1 = Treatment integrity not considered.

Summative evaluation
5 = Outcome decisions are based on the progress-monitoring data.
4 = Outcome decisions are based on minimal data (i.e., pre- and posttests).
3 = Outcome decisions are based on subjective data.
2 = Outcome decision stated, but no indication of what data were used to make the conclusion.
1 = No summative evaluation took place.

Note. Copyright 2002 by the National Association of School Psychologists, Bethesda, MD. Reprinted with permission of the publisher.
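
For readers who wish to tabulate ratings made with the rubric above, the brief Python sketch below shows one way the twelve components could be recorded and averaged by stage. It is illustrative only and not part of the original study materials: the data structure, the example ratings, and the choice to treat an undocumented component as the lowest rating (1) are all assumptions of this sketch rather than procedures reported by the authors or by Upah and Tilly (2002).

# Minimal sketch (hypothetical, not from the original study): tallying PST Rubric ratings.
# Component and stage names follow the rubric reproduced above; the example ratings are invented.

RUBRIC_STAGES = {
    "Problem identification": ["Behavioral definition", "Baseline data", "Problem validation"],
    "Problem analysis": ["Problem analysis"],
    "Plan implementation": ["Goal setting", "Intervention plan",
                            "Measurement strategy", "Decision-making plan"],
    "Plan evaluation": ["Progress monitoring", "Formative evaluation",
                        "Treatment integrity", "Summative evaluation"],
}

def stage_means(ratings: dict[str, int]) -> dict[str, float]:
    """Average the 1-5 ratings within each stage.

    An undocumented component defaults to 1 here; that default is an
    illustrative choice, not a rule taken from the rubric or the study.
    """
    means = {}
    for stage, components in RUBRIC_STAGES.items():
        scores = [ratings.get(component, 1) for component in components]
        means[stage] = sum(scores) / len(scores)
    return means

# Hypothetical ratings for one team's intervention documentation.
example_ratings = {
    "Behavioral definition": 4, "Baseline data": 3, "Problem validation": 2,
    "Problem analysis": 3, "Goal setting": 2, "Intervention plan": 3,
    "Measurement strategy": 2, "Decision-making plan": 1,
    "Progress monitoring": 2, "Formative evaluation": 1,
    "Treatment integrity": 1, "Summative evaluation": 2,
}

if __name__ == "__main__":
    for stage, mean in stage_means(example_ratings).items():
        print(f"{stage}: {mean:.2f}")
    overall = sum(example_ratings.values()) / len(example_ratings)
    print(f"Overall mean rating: {overall:.2f}")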