How interdisciplinary pediatric practitioners choose assessments

Jessica Kramer, Patricia Bowyer, Jane O'Brien, Gary Kielhofner, Vanessa Maziero-Barbosa

Key words: Clinical reasoning, Pediatric assessment, Participation assessment
Mots clés : Raisonnement clinique, Évaluation pédiatrique, Évaluation de la participation
Abstract

Background. The assessment process affects the direction and quality of the services children and youth with disabilities receive. However, little is known about how practitioners choose tools and strategies to assess clients. Purpose. To identify processes practitioners use to gather information and choose methods of assessment in pediatric practice. Methods. Three focus groups were held with teams of interdisciplinary pediatric practitioners. Key themes were identified. Findings. Two primary themes emerged: “Things practitioners want to know” and “Choosing what and how to assess.” Practitioners began the assessment process wanting to gather information about children and their environment. Practitioners then used the initial information to decide what and how to further assess, as described by three subthemes: “fitting” the child, balancing formal and informal information, and professional context. Implications. Practitioners generally made individualized assessment choices for each child based on the initial information they gathered and then used a balance of formal and informal assessments. However, they were more likely to formally assess children at the level of body structures and function rather than participation, and continued to rely upon such standardized assessments to meet reimbursement and policy requirements.
The assessment process affects the direction and quality of the occupational therapy services that children and youth with disabilities receive. Best practice requires that information gathered during the assessment process should determine the intervention goals and approaches (American Occupational Therapy Association [AOTA], 2002; Canadian Association of Occupational Therapists [CAOT], 2002; World Federation of Occupational Therapists [WFOT], n.d.). However, little is known about how practitioners gather information and choose their tools and strategies for assessing young clients.

Canadian Journal of Occupational Therapy, February 2009, Volume 76, Number 1, © CAOT Publications ACE
The profession’s call for occupation- and participation-focused practice (Australian Association of Occupational Therapists, 2001; CAOT, 2002) reflects the current occupational paradigm of our profession (Kielhofner, 2004). In addition, the adoption of the International Classification of Functioning, Disability, and Health has led all health professionals to become more attentive to the assessment of participation (World Health Organization [WHO], 2001). Historically, assessments developed under the previous, mechanistic paradigm sought to evaluate underlying body structures and functions, such as motor proficiency
(Bruininks, 1978), development (Bayley, 1969), sensory processing (Ayres, 1989), and visual perception (Colarusso & Hammill, 1972). The current occupational paradigm has led to the development of occupation- and participation-focused assessments appropriate for pediatric practice, including the School Functional Assessment (Coster, Deeney, Haltiwanger, & Haley, 1998), the Short Child Occupational Profile (SCOPE) (Bowyer et al., 2008), and the Canadian Occupational Performance Measure (COPM) (Law et al., 2005). Research suggests that practitioners still predominantly use impairment-oriented assessments (Brown, Rodger, Brown, & Roever, 2005; National Board for Certification in Occupational Therapy [NBCOT], 2004). Moreover, in contrast to calls for evidence-based practice, many practitioners use “homegrown” assessments that do not have psychometric evidence (Lee, Taylor, Kielhofner, & Fisher, 2008). These findings suggest that there is a gap between the field’s values and practitioners’ clinical reasoning during the assessment process. A better understanding of how practitioners gather information and choose methods to assess pediatric clients could inform the development of occupation-focused assessments that are likely to have good clinical utility, support best practices, and contain the resources needed to support practitioners’ decision-making processes for selecting assessment methods.

Strategies for choosing occupational therapy assessment procedures have been proposed in the literature. Hocking (2001) argued that when implementing an occupation-based assessment, therapists should determine if the assessment measures an aspect of occupation and if the assessment itself requires occupational performance. Law et al. (1987) proposed an instrument-evaluation framework that considers the extent to which an assessment is standardized, its intended purpose, and the level of client involvement in the assessment process.
Kielhofner and Forsyth (2008) proposed a therapeutic reasoning process that begins when practitioners ask theory-based questions about a client and then proceeds to answer those questions using both formal and informal assessment procedures. It is not known to what extent practitioners use these types of strategies and processes to choose methods of assessment.

The literature does identify factors that influence practitioners’ use of specific assessments. Availability of assessments and related resources, time for administration, training in assessment administration, and managerial support have all been identified as factors that influence assessment use (Barbara & Whiteford, 2005; Blenkiron, 2005; Chard, 2000; Cooke, McKenna, & Fleming, 2005; Strong et al., 2004). Practitioners have also indicated that flexibility of assessment procedures and the extent to which an assessment can be individualized are important (Barbara & Whiteford; Blenkiron; Strong et al.). The literature also suggests that the theoretical foundation of an assessment influences its use (Chard; Gustafsson, Stibrant Sunnerhagen, & Dahlin-Ivanoff, 2004; Warren, 2002). That is, practitioners using specific theories or conceptual practice models to guide their clinical reasoning are more likely to use tools based on those theories or that assess specific concepts addressed by that model.

The purpose of this study was to identify, from practitioners’ perspectives, the process used to gather information and choose assessment tools when working with young clients. The research question was, “What processes do practitioners use when gathering information and choosing methods of occupational therapy assessment in pediatric practice?” Subquestions included, “What factors influence practitioners’ assessment choices?” and “To what extent do practitioners gather participation-based versus impairment-based information?”
Methods

Focus groups were an appropriate method for this study because they allowed practitioners to discuss how they make decisions about gathering information and choosing assessments (Krueger & Casey, 2000; Patton, 2002). Focus groups “provide the opportunity for data to emerge as a result of the dynamic interactions between group members” (Lysack, Luborsky, & Dillaway, 2006, p. 349), and allow individuals to compare their ideas and build upon ideas presented by others. This group interaction allows practitioners to identify what they consider to be the most salient issues when choosing methods of assessment. Since decisions about pediatric assessment processes are often made by teams of practitioners, not just occupational therapists, focus groups also enabled the researchers to recruit existing teams of practitioners to participate in this study.

A semi-structured interview guide was developed for the focus groups. Questions were based on existing literature that described strategies for choosing assessments and the factors that influence assessment use. The main questions asked in each focus group are listed in Table 1. Since data analysis was concurrent, several additional questions were added to the second and third focus groups based on the responses from the previous focus group.
TABLE 1
Focus Group Questions

Main questions
• Imagine that you have a new client/student. What is one question that you first ask yourself about that client/student?
• How do you determine what assessment would best answer those questions?

Example follow-up/probing questions
• What specific assessments do you use? When do you assess?
• Do you use formal or informal assessments? How do you make decisions to use formal or informal?
• Do you use the same reasoning process for each child?
Procedure

Institutional Review Board approval was received before proceeding with this study. This study was part of a larger study exploring the clinical utility of a newly developed occupational therapy assessment, the Short Child Occupational Profile (SCOPE). A variety of practice sites in the United States were using the SCOPE in practice and participated in a concurrent study developing the assessment’s psychometric properties. In each setting, either a core team or all practitioners were using the SCOPE assessment as part of their standard practice at the site. Often, in addition to occupational therapists, this team of practitioners included interdisciplinary professionals such as physical therapists or special education teachers. Participation in the development of this new assessment led to a prime opportunity to better understand not only the clinical utility of the SCOPE but also the general processes practitioners used when gathering information and choosing assessments in their pediatric practice sites. The sites invited to participate in this study were a convenience sample; they were located in the same regions as the researchers.

Before each focus group began, informed consent was obtained and participants filled out a demographic form. Ground rules were set by the facilitator. All focus groups were audio-recorded. In addition, the facilitator took notes on a flip chart and a co-facilitator took additional handwritten field notes during the discussion. The focus groups lasted about 90 minutes.
Participants

A total of 21 practitioners representing a range of disciplines participated in the three focus groups (Table 2). These practitioners worked at a public school for children with disabilities, an interdisciplinary pediatric community clinic, and an occupational therapy community clinic affiliated with a university occupational therapy program. Focus groups were held in the United States, in Illinois and Maine.
Analysis

The audio-taped discussion, the handwritten field notes, and the flip chart notes were typed into one transcript/field note. Analysis was concurrent with ongoing data collection and followed a five-step process (Marshall & Rossman, 1999). First, as each focus group was completed, focus group transcripts/field notes were reviewed several times. Second, each transcript/field note was coded separately by the first and second authors. An inductive analysis based on analyst-constructed typologies (Patton, 2002) was used to code the data. That is, codes created by each analyst were not explicitly based on the focus group participants’ words, but rather on their meaning. Next, both authors worked together to reach consensus on a common list of codes derived from each focus group (Hill, Thompson, & Williams, 1997). These codes were
TABLE 2
Focus Group Demographics

Practitioner demographics by focus group

                                            Group 1 (n=6)   Group 2 (n=7)   Group 3 (n=8)
Discipline
  Occupational therapy                            6               5               3
  Physical therapy                                0               1               1
  Special education                               0               0               2
  Social work                                     0               0               2
  Speech/Language                                 0               1               0
Gender
  Female                                          6               7               7
  Male                                            0               0               1
Highest academic degree
  Associate’s/Bachelor’s degree                   3               5               3
  Master’s degree                                 3               2               5
Years of experience
  Working in discipline                     2.5–25 years      6–30 years      2–20 years
  Working with children with disabilities    4–25 years       6–29 years      2–20 years

Note: Group 1 was conducted at a community occupational therapy clinic, group 2 at an interdisciplinary community rehabilitation clinic, and group 3 at a public school for children with disabilities (early childhood–high school).
then tested for convergence across all focus groups and, finally, a refined list of codes was created and used to identify themes.
Findings

Two main themes emerged to illustrate the process practitioners use when gathering information and choosing methods of assessment: “Things practitioners want to know” and “Choosing what and how to assess.”
Things practitioners want to know

Practitioners began the process of assessment knowing that they wanted to gather certain information about every child. Practitioners sought to gather this information as soon as possible, often before they even met the child, by reviewing the child’s records or by talking with a parent or other professional. They also relied on their initial observations of a child. This initial, informal information was used later to orient the practitioner to possible formal assessment choices.
About the child

Practitioners in each focus group generated long lists of demographic information that they wanted to have about each child. The first questions asked often pertained to the child’s age, developmental level, diagnosis, and intervention and medical history. Practitioners then sought information about each child’s current abilities and needs via initial observations of the child, review of the child’s records, and by talking with
parents and other professionals. They sought to understand the child’s level of functioning across environments (“How do they participate in daily activities?”), skills and abilities (“What are their movement patterns like?” “Can they attend to tasks?” “Can they process sensory information?”), and behaviour (“Will they follow adult directives?” “Are there self-injurious behaviours?”). The answers to these questions enabled practitioners to begin to understand a child’s challenges and needs. Less frequently, they mentioned that they wanted to gather information about the child’s play, involvement in the community, and mental health status. Practitioners also wanted to know what motivated the child, such as favourite toys or special activities, in order to “tap into something that they find motivational” (Focus Group 1) and build therapeutic rapport. Knowing more about a child’s motivation for exploring his or her environment and trying new things was also considered important. To answer this question, practitioners would observe children’s actions during the initial meeting: “How are they motivated to move around ... [and] explore their environment?” (Focus Group 2). However, across focus groups, only a few practitioners reported that they asked the child to share his or her own concerns or goals.
About the child’s environmental context

Practitioners also wanted to know about a child’s physical and social environment in order to determine how different environments affected function. They asked questions about the child’s family and school routines in order to determine if those environments were supporting or restricting the child’s participation in activities. A few of the practitioners in this study also mentioned exploring the availability of resources, such as equipment and transportation, and how those resources, or the lack thereof, affected a child’s participation. Finally, practitioners wanted to know what concerns, expectations, and goals parents and other professionals had regarding the child’s needs. One practitioner illustrated the impact parental concerns had on the assessment process when she shared, “What parents say that the child has difficulty with influences the assessments I choose” (Focus Group 1).

Choosing what and how to assess

Practitioners used the initial information about each child to decide what and how to further assess. The process they used can be described by three subthemes: “fitting” the child, balancing formal and informal information, and professional context.

“Fitting” the child

Based on the initial information, practitioners sought to use an assessment that would “fit,” or match, a child’s impairments and developmental status, ability to successfully perform under the testing conditions, and needs as hypothesized by the practitioner based on the initial information. When an assessment “fit” a child, practitioners felt it resulted in the most successful evaluation of the child’s abilities and needs. A quotation from one practitioner illustrates how a child’s age and needs influenced her choice of assessment: “If their referral issue is problems with handwriting and they’re five, then it might turn me to do the Bruininks [Bruininks-Oseretsky Test of Motor Proficiency] or the VMI [The Beery-Buktenica Developmental Test of Visual-Motor Integration]” (Focus Group 2). Based on the initial information gathered about each child, practitioners would also consider if a child “could handle standardized assessment” (Focus Group 1). That is, practitioners determined if the child had the ability to attend to and follow the directions required for standardized assessment.

The process of fitting the child was guided by the practitioner’s clinical judgment. Rather than using the same assessments for each child, practitioners pulled from a range of assessments, recognizing that not every assessment worked for every child. Previous experience working with children with similar diagnoses and challenges, and knowledge of the assessments that were most appropriate for those children, facilitated their clinical judgment. One practitioner explained: “From clinical experience ... you sort of put kids in a cluster, so that from that you can clinically reason to figure out why you would choose to do what assessment” (Focus Group 2). Further, practitioners realized that they needed to gather different types of information depending on the needs of each child, ranging from information about body structure and function to activity and participation. One practitioner described how she thought about the different assessments she used in practice:

The Peabody, VMI, Bruininks, it’s more like testing for specific skills. Whereas the SCOPE [Short Child Occupational Profile], even the Sensory Profile, the SFA [School Functional Assessment], that’s more like assessing the child’s skills in functional areas. In the Peabody you are doing fine motor specific lacing, but when you go and ask the SCOPE fine motor [questions]—well, the child can’t tie his shoes, it’s more like a functional type of setting ... (Focus Group 2).

This quotation illustrates how practitioners used assessments such as the VMI and Bruininks to assess body structures and functions, and assessments such as the SCOPE, SFA, and Sensory Profile to assess how the child actually performed in everyday environments. This suggests that practitioners were cognizant of the need to assess children’s participation.

Practitioners felt that some children didn’t “fit” any available, formalized assessment. Often, this was because of a child’s unique combination of impairments, needs, or personal and family context. For example, one practitioner stated: “There are some kids, a standardized test would not assess the problems they came here with” (Focus Group 2). At the same time, another practitioner noted:
[The screening tool] is asking kids things they are not exposed to, especially in our community ... we’ll have kids fail the screening ... but it’s because you ask them to do all these things that ... they are not exposed to because of the communities we work with ... even though they are standardized and normed, they’re not necessarily normed for our communities (Focus Group 3).

When this occurred, practitioners felt frustrated and looked for alternative methods of assessment: “Some kids—I wouldn’t have an assessment that fits them, so I wouldn’t do ... a standardized assessment ... instead [just] asking the questions and [using] observations” (Focus Group 2). Frustration also led practitioners to actively seek out new assessments, often based on their perceptions of best practice, such as the play-based assessment that the team members in Focus Group 3 described:

TEAM MEMBER 1: Well, what we were doing is that the kids that came out of preschool screenings, we were doing a standardized ... the Peabody [Peabody Developmental Motor Scales] ... or the CELF [Clinical Evaluation of Language Fundamentals] ... and the kids weren’t able to perform on them.

TEAM MEMBER 2: ... So by having the play-based assessment [Linder Transdisciplinary Play-Based Assessment], they get to play ... we’re able to get a much better picture of the child’s capabilities and skills.

TEAM MEMBER 1: The way that [standardized] test is completed is not quite kid friendly—they are taken away from their parents and walked down a hallway that ... looks big and scary—and even though we’re friendly and holding the kid’s hand, kids are told “don’t talk to strangers,” and then we bring them in this room ... some of the kids take really well to a preschool screening ... and others are like, “I don’t know this strange adult who’s asking me these weird questions I don’t know the answers to so I’m just gonna scream my head off” ...
the play process is for sure more kid friendly (Focus Group 3). This conversation illustrates the variety of factors practitioners considered when identifying new assessments, including the extent to which processes were “kid-friendly” and enabled children to receive support from their parents and other adults.
Balancing formal and informal information

Practitioners recognized that formal and informal assessment yielded different information and could be used for different purposes. The most commonly mentioned type of formal assessment was standardized assessment that resulted in a scaled, numeric score. Standardized assessments and the resulting scores were seen as most useful for documenting a child’s progress in therapy. One practitioner stated:

I want a number. I don’t love numbers, but ... let’s say the total is 30. So if we have a kid that comes ... and they get
a 10 and when they leave they get a 20, then that shows progress. And that’s how I look at it. Sometimes you need a number to show; you might have to analyze and explain it ... but sometimes, visually, you’ll see “10 to 20, wow look at that!” (Focus Group 3).

Another practitioner shared reasons that it was sometimes important to obtain scores:

Sometimes when you’re with the kids so much, you don’t see those little changes that are happening ... then all of the sudden you retest and go “Whoa ... look how much they grew in this time period.” Maybe it was so subtle that you couldn’t quite see it (Focus Group 1).

Standardized assessment was also valued for its ability to indicate that a child needed intervention and to concretely communicate findings to parents and other professionals. The importance of using standardized assessments in documentation to justify services and the decisions that practitioners make was captured by this statement:

When they’re asking about “What do you think about equipment” ... “What’s this child’s needs?” it’s very hard going based just on your clinical judgment ... they will put that [the assessment results] in front of you and say, “This is what you have here and this is what you’ve done—explain this” (Focus Group 3).

All the practitioners in this study valued informal assessment and deemed the information generated through informal assessment essential for successful intervention planning. Informal assessment included conversations with parents, clinical observations, and site-specific methods of organizing information (for example, “intake forms” that the occupational therapists in Focus Group 1 used). The following quotations illustrate the importance of informal assessment:

The whole narration piece with the family, where’s it gonna go—the family story. That takes you places and you make decisions based on that (Focus Group 1).

All my kids get evaluated informally based on those [initial] questions (Focus Group 2).
Informal assessment allowed practitioners to make “connections” between all the information gathered in the assessment process. Even if they relied on the results of formalized, standardized assessments to document the need for intervention, the information they gathered through informal methods provided the subtext that guided the specific needs and goals addressed during intervention. Finally, practitioners shared that a balance between formal and informal assessment was needed to ensure that the practitioner was aware not only of the child’s abilities but also of other adults’ concerns and perceptions of the child’s needs. The need for this balance was illustrated by this practitioner’s story: Because when you do the informal on the phone you’re hearing from the parent “oh this child is just a mess, they can’t do anything, it’s awful,” and then you do the formal,
and you’re realizing that ... maybe it’s not that bad, and maybe you need to do more work with the parents and not necessarily the child. So it’s just kind of balancing (Focus Group 1).

This subtheme implies that practitioners believe that the comprehensive view of a child gained through a balance of formal and informal assessment results in more meaningful intervention planning and, therefore, leads to more effective intervention.
Professional context

Practitioners’ process of choosing assessment methods was also influenced by professional considerations, restrictions, and conventions. The largest professional influences were assessment processes required by laws, regulations, and reimbursement agencies. Although practitioners identified a range of questions that were important to ask about a child during assessment, the assessments that they could use to answer those questions were restricted by practice regulations or by reimbursement expectations:

It’s based not necessarily on all those clinical observations, but based upon what we have to do for [the] insurance company (Focus Group 2).

The state agencies we work for want a standardized score (Focus Group 1).

These quotations illuminate that practitioners’ clinical reasoning and the process of “fitting” the child and the assessment were often superseded by these external requirements. While practitioners in Focus Group 3 were not restricted by reimbursement requirements, they did report an obligation to meet legislative mandates for re-evaluation as well as other legal processes related to the provision of educational services. For example:

A couple of us have gone through due process ... I think from having been through those ... I kind of gear toward something that does have some numbers ... Something that I can give to kind of indicate where the child is functioning, so it’s not just a subjective point of view of the child (Focus Group 3).

In each of these instances, practitioners’ decisions to choose assessments were driven by the desire to avoid negative professional consequences rather than by meeting the needs of the child or perceived standards for best practice.

Assessment choices were also influenced by assessment conventions, that is, what was typically done in each practice setting and the related professional identity of that setting. For instance, one practitioner shared that “the sensory profile is thrown in for everything ... most of our referrals are sensory in nature ... that’s just kind of the reputation of the clinic, although we do a lot more than that, we’re known [for sensory]” (Focus Group 1).

Finally, assessment choices were influenced by the availability of assessments and the feasibility of administering the assessment, including the related time and costs. If assessments took too much time to administer, or if they required a submission of ratings to a central office to generate scores, which further extended the time required to complete the assessment, practitioners shared that they were less likely to use the assessment even if they felt it was “wonderful.” Costs associated with purchasing assessments and attending required training also decreased the likelihood that a practitioner would choose an assessment.

Discussion
Practitioners in this study began the assessment process by asking questions to get information they wanted for each child and, to some extent, the child’s surrounding physical and social contexts. Some of those questions were based on theory; for example, questions about a child’s volition stemmed from the use of the model of human occupation, and questions about a child’s sensory processing stemmed from the use of sensory integration theory. Then, practitioners “fit,” or matched, the child to the appropriate assessment based on the initial information, used a balance of formal and informal assessment to gather additional information about the child’s circumstances, and chose assessments that would be in line with each practitioner’s professional context and related expectations.

This process mirrors the steps of therapeutic reasoning proposed by Kielhofner and Forsyth (2008) in that practitioners began by asking questions and used the initial answers to guide assessment choices; it also indicates that they valued and used a balance of formal and informal assessment methods. Additionally, practitioners used some of the strategies put forth by Law et al. (1987) when “fitting” each child to an assessment. For example, decisions to use certain assessments were based on whether an assessment was standardized and its underlying purpose.

However, the current study reveals the extent to which practitioners’ assessment choices are influenced by practice regulations, legislative mandates, and reimbursement, and the resulting effect this has on the type of information gathered. While practitioners in the current study recognized the need to gather information across the various domains of the International Classification of Functioning, Disability, and Health ([ICF] WHO, 2001), most formal, standardized assessments mentioned by practitioners in this study assessed children’s body structures and functions.
The focus group responses revealed that practitioners working with children began the assessment process by asking questions about the child’s impairments and developmental level and then chose assessments based on this information. This finding aligns with recent studies showing that impairment-based and developmentally based assessments are used more frequently in practice than occupation- and participation-focused assessments (Brown et al., 2005; NBCOT, 2004). In addition, practitioners’ responses suggest that outcomes are measured via change scores on standardized assessments rather than by documenting changes in children’s level of participation. Attending more frequently to issues of impairment than to participation in the assessment process reflects a bottom-up approach and suggests that practitioners find value in gathering information about a child’s underlying body structures and functions. Certainly, it is necessary to assess mental, sensory, and musculoskeletal functions to ensure that children with disabilities receive the most appropriate services and supports (Weinstock-Zlotnick & Hinojosa, 2004). However, all rehabilitation professions working with children with disabilities have been called upon to assess and address participation and involvement in activities in addition to body structure and function (Msall, 2005; WHO, 2001). While practitioners in this study attempted to shift their assessment processes to respond to these best practice expectations and assess children’s participation, they were still more likely to formally assess children’s body structures and functions. The decision to rely upon standardized assessments of body structure and function appears to be driven in part by practitioners’ perception that reimbursement requirements and legislative regulations are best documented through the use of standardized, scaled assessments. It is possible that this concern is unique to the U.S. practice context; practitioners working in other countries with different regulations or nationalized public health services may not be influenced by these factors. However, the literature suggests that practitioners in other countries have similar concerns. For example, in Ontario, the introduction of managed competition in home health care led to reduced clinical autonomy for occupational therapists (Randall, 2005).
In Australia, occupational therapists’ decisions to use specific intervention approaches were influenced by the nature of service delivery, such as the typical duration of therapy in a specific service setting (Copley, Nelson, Turpin, Underwood, & Flanigan, 2008). This suggests that, across practice contexts, practitioners’ decisions about the assessment and intervention process may be influenced by service system requirements and related policies. Practitioners’ responses also indicate that motivation, environmental context, and participation are often assessed using informal methods, such as observation and conversation with parents and other professionals. Named exceptions to this pattern included the Short Child Occupational Profile (SCOPE) (Bowyer et al., 2008) and the School Function Assessment (SFA) (Coster et al., 1998). Practitioners’ descriptions of these assessments also imply that they considered the information gathered by them valuable and unique. This finding suggests that practitioners are interested in and willing to formally assess the participation and activity domains of the ICF. However, they
never indicated that they used these assessments to justify intervention services or to apply for reimbursement. This suggests a discrepancy between best practice within the field of rehabilitation and the expectations of the various entities regulating or funding rehabilitation services. It also highlights a shared perception among practitioners and rehabilitation entities regarding assessment: assessments requiring therapeutic reasoning and clinical judgment, such as the SFA and the SCOPE, are considered less reliable and less accurate depictions of children’s needs and abilities than assessments that evaluate children’s ability to execute concrete tasks or actions, such as the VMI or the Bruininks-Oseretsky Test of Motor Proficiency. Practitioners did care about the relevance of the assessments being used with young clients and wanted to ensure the success of children and youth during assessment. All practitioners wanted to “fit” the child to an assessment in order to present the best picture of the child’s circumstances. Their clinical reasoning and previous experience led them to recognize when children’s abilities, strengths, and needs were not being captured by assessments. Practitioners considered a range of factors, such as the extent to which children could receive support from adults and the extent to which assessment tasks were familiar to children, in order to make the assessment process as successful an experience as possible for each child. Practitioners’ assessment choices were also influenced by the goals and concerns of parents. The concern for individualizing the assessment as much as possible reflects a client-centered approach, a value of the occupational therapy profession (AOTA, 2002; WFOT, 2004). However, this individualized approach was often superseded by reimbursement requirements, legislative mandates, and other criteria that must be met in order to justify the provision of services.
While policy is an important influence on the process of providing services to children with disabilities, care should be taken to ensure that practitioners are able to choose assessments based on the individualized needs of the child. Finally, these findings confirm that factors identified in the literature as affecting assessment use are taken into consideration by practitioners when choosing methods of assessment. Factors such as time, ease of use, and availability are part of the decision-making processes practitioners use when choosing assessments (Barbara & Whiteford, 2005; Blenkiron, 2005; Chard, 2000; Cooke, McKenna, & Fleming, 2005; Strong et al., 2004). This study also reveals the impact that professional identity and related expectations have on the assessments practitioners choose. For example, every occupational therapist mentioned the importance of asking questions about and assessing a child’s sensory processing, in keeping with the profession’s frequent use of sensory integration theory in pediatric practice (Brown et al., 2005; NBCOT, 2004). Practitioners also stated that the SCOPE was
useful because it reminded them to gather information about a child’s volition, a concept that is part of the Model of Human Occupation (Kielhofner, 2008). This finding is in keeping with previous studies that highlighted the impact an assessment’s theoretical background can have on the choice of assessment (Gustafsson, Stibrant Sunnerhagen, & Dahlin-Ivanoff, 2004; Warren, 2002).
Limitations
While the focus groups provided insight into practitioners’ thoughts and decisions as a group, the responses of individual practitioners may have been influenced by the presence of others. As a result, individuals whose views differed from those of the majority may have been hesitant to share them. In addition, practitioners’ verbal reports may differ from what they actually do in practice. Finally, the practitioners who participated in this study were a convenience sample, and these findings may not represent the processes used to determine methods of assessment at other pediatric sites.

Implications for future research and practice
Developers of occupational therapy assessments should consider providing practitioners with training and resources that encourage questions about participation over impairment, so that occupation-based assessments are more likely to be used in the occupational therapy assessment process. In addition, training on rehabilitation frameworks and theories that are aligned with the ICF may encourage practitioners to assess participation. This training should be targeted to practitioners who may not have been exposed to such models of practice during their initial training. Models of such training exist in the occupational therapy literature and have been shown to change occupational therapists’ approach to practice (Forsyth, Mann, & Kielhofner, 2005; Forsyth, Melton, & Mann, 2005). Furthermore, there is a need to develop assessments that allow practitioners to systematically collect and document information usually gathered through informal methods. Assessment developers can also expedite practitioners’ process of fitting the child by clearly outlining the intended population and purpose of an assessment in assessment manuals and other easy-to-find venues, such as Web sites, training programs, and advertising. Developers should also carefully weigh the balance between an assessment’s sophistication and the time and costs needed to complete it, as this study suggests that even if practitioners like an assessment, they will not use it if it requires inordinate costs and time. Finally, occupational therapists, as individuals and as a profession, may wish to advocate for a better understanding of occupational therapy services by lawmakers, insurance companies, and local authorities. This can help to ensure that practitioners’ reasoning for choosing methods of assessment reflects the needs of the child and is not unduly influenced by policy and reimbursement requirements. Future research should explore whether other groups of practitioners use this process to choose methods of assessment.

Conclusion
This study was an initial exploration of how interdisciplinary pediatric practitioners gather information and choose methods of assessment. Practitioners said that they began gathering information by asking initial questions and then made individualized assessment choices for each child based on the initial information. Practitioners used a balance of formal and informal assessments in order to obtain the best evaluation of each child’s situation. However, they continued to focus on children’s body structure and function and less frequently considered participation. Finally, practitioners continued to rely upon standardized assessments in order to meet reimbursement and policy requirements.

Key messages
• Practitioners begin the assessment process by gathering initial information about a child’s impairment and developmental level and, to a lesser extent, focus on function, participation, and the environment.
• Practitioners “fit,” or match, the child to an assessment based on his or her impairments and developmental status, ability to perform successfully under the testing conditions, and needs.
• Practitioners consider informal assessment most useful for intervention planning, while standardized assessment is considered most useful for documenting the need for intervention and outcomes.

Acknowledgements
The authors would like to thank and acknowledge the practitioners at the Community Occupational Therapy Clinic in Biddeford, Maine; City Kids Inc. in Chicago, Illinois; and Proviso Area for Exceptional Children in Maywood, Illinois, for their time and contribution to this study. All practitioners consented to site recognition as part of the Scholarship of Practice. During the time of the study, Jessica Kramer and Patricia Bowyer were affiliated with the Department of Occupational Therapy at the University of Illinois at Chicago.
References
American Occupational Therapy Association. (2002). Occupational therapy practice framework: Domain and process. American Journal of Occupational Therapy, 56, 609-639.
Australian Association of Occupational Therapists. (2001). Code of ethics. Retrieved November 1, 2007, from OT Australia NSW: http://www.otnsw.com.au/download/NationalCodeEthics090801.pdf
Ayres, A. J. (1989). Sensory integration and praxis test manual. Los Angeles:
I NUMBER I VOLUME ANADIAN Downloaded from cjo.sagepub.com at TEXAS WOMANS UNIV on AprilOURNAL 22, 2016
76
1
C
J
OF
OCCUPATIONAL THERAPY
I
FEBRUARY 2009
63
Western Psychological Services.
Barbara, A., & Whiteford, G. (2005). Clinical utility of the Handicap Assessment and Resource Tool: An investigation of its use with aged people in hospital. Australian Occupational Therapy Journal, 52, 17-25.
Bayley, N. (1969). Bayley Scales of Infant Motor Development-II. New York: The Psychological Corporation.
Blenkiron, E. L. (2005). Uptake of standardized hand assessments in rheumatology: Why is it so low? British Journal of Occupational Therapy, 68, 148-157.
Bowyer, P., Kramer, J., Ploszaj, A., Ross, M., Schwartz, O., Kielhofner, G., & Kramer, K. (2008). The Short Child Occupational Profile (SCOPE) (Version 2.2). Chicago, IL: Model of Human Occupation Clearinghouse, Department of Occupational Therapy, College of Applied Health Sciences, University of Illinois at Chicago.
Brown, G. T., Rodger, S., Brown, A., & Roever, C. (2005). A comparison of Canadian and Australian paediatric occupational therapists. Occupational Therapy International, 12, 137-161.
Bruininks, R. H. (1978). Bruininks-Oseretsky Test of Motor Proficiency: Examiner’s manual. Circle Pines, MN: American Guidance Service.
Canadian Association of Occupational Therapists. (2002). Enabling occupation: An occupational therapy perspective. Ottawa, ON: CAOT.
Chard, G. (2000). An investigation into the use of the Assessment of Motor and Process Skills (AMPS) in clinical practice. British Journal of Occupational Therapy, 63, 481-488.
Colarusso, R. P., & Hammill, D. D. (1972). Motor-Free Visual Perception Test. Novato, CA: Academic Therapy Publications.
Cooke, D. M., McKenna, K., & Fleming, J. (2005). Development of a standardized occupational therapy screening tool for visual perception in adults. Scandinavian Journal of Occupational Therapy, 12, 59-71.
Copley, J., Nelson, A., Turpin, M., Underwood, K., & Flanigan, K. (2008). Factors influencing therapists’ interventions for children with learning difficulties. Canadian Journal of Occupational Therapy, 75, 105-113.
Coster, W. J., Deeney, T., Haltiwanger, J., & Haley, S. (1998). School Function Assessment. San Antonio, TX: The Psychological Corporation/Therapy Skill Builders.
Forsyth, K., Mann, L. S., & Kielhofner, G. (2005). Scholarship of practice: Making occupation-focused, theory-driven, evidence-based practice a reality. British Journal of Occupational Therapy, 68, 260-268.
Forsyth, K., Melton, J., & Mann, L. S. (2005). Achieving evidence-based practice: A process of continuing education through practitioner-academic partnership. Occupational Therapy in Health Care, 19, 211-227.
Gustafsson, S., Stibrant Sunnerhagen, K., & Dahlin-Ivanoff, S. (2004). Occupational therapists’ and patients’ perceptions of ABILHAND, a new assessment tool for measuring manual ability. Scandinavian Journal of Occupational Therapy, 11, 107-117.
Hill, C. E., Thompson, B. J., & Williams, E. N. (1997). A guide to conducting consensual qualitative research. The Counseling Psychologist, 25, 517-572.
Hocking, C. (2001). The issue is: Implementing occupation-based assessment. American Journal of Occupational Therapy, 55, 463-469.
Kielhofner, G. (2004). Conceptual foundations of occupational therapy (3rd ed.). Philadelphia, PA: F. A. Davis.
Kielhofner, G. (2008). The model of human occupation: Theory and application (4th ed.). Baltimore: Lippincott, Williams, & Wilkins.
Kielhofner, G., & Forsyth, K. (2008). Therapeutic reasoning: Planning, implementing, and evaluating the outcomes of therapy. In G. Kielhofner (Ed.), Model of human occupation: Theory and application (4th ed., pp. 143-154). Baltimore: Lippincott, Williams, & Wilkins.
Krueger, R. A., & Casey, M. A. (2000). Focus groups: A practical guide for applied research (3rd ed.). Thousand Oaks, CA: Sage.
Law, M. (1987). Measurement in occupational therapy: Scientific criteria for evaluation. Canadian Journal of Occupational Therapy, 54, 133-138.
Law, M., Baptiste, S., Carswell, A., McColl, M. A., Polatajko, H., & Pollock, N. (2005). The Canadian Occupational Performance Measure (4th
ed.). Toronto, ON: CAOT Publications.
Lee, S., Taylor, R., Kielhofner, G., & Fisher, G. (2008). Theory use in practice: A national survey of therapists who use the Model of Human Occupation. American Journal of Occupational Therapy, 62, 106-117.
Lysack, C., Luborsky, M. R., & Dillaway, H. (2006). Gathering qualitative data. In G. Kielhofner (Ed.), Research in occupational therapy: Methods of inquiry for enhancing practice (pp. 341-357). Philadelphia, PA: F. A. Davis.
Marshall, C., & Rossman, G. B. (1999). Designing qualitative research (3rd ed.). Thousand Oaks, CA: Sage.
Msall, M. E. (2005). Measuring functional skills in preschool children at risk for neurodevelopmental disabilities. Mental Retardation and Developmental Disabilities Research Reviews, 11, 263-273.
National Board for Certification in Occupational Therapy. (2004). A practice analysis of entry-level occupational therapist registered and certified occupational therapy assistant practice. OTJR: Occupation, Participation, and Health, 24(Suppl. 1), S1-S31.
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.
Randall, G. E. (2005). The impact of managed competition on home care in Ontario: The case of rehabilitation services and professionals. Unpublished doctoral dissertation, University of Toronto, Canada.
Strong, S., Baptiste, S., Cole, D., Clarke, J., Costa, M., Shannon, H., et al. (2004). Functional assessment of injured workers: A profile of assessor practices. Canadian Journal of Occupational Therapy, 71, 13-23.
Warren, A. (2002). An evaluation of the Canadian Model of Occupational Performance and the Canadian Occupational Performance Measure in mental health practice. British Journal of Occupational Therapy, 65, 515-521.
Weinstock-Zlotnick, G., & Hinojosa, J. (2004). The issue is: Bottom-up or top-down evaluation: Is one better than the other? American Journal of Occupational Therapy, 58, 594-599.
World Federation of Occupational Therapists.
(n.d.). About occupational therapy. Retrieved October 2, 2007, from http://www.wfot.org/office_files/ABOUT%20OCCUPATIONAL%20THERAPY%282%29.pdf
World Federation of Occupational Therapists. (2004). World Federation of Occupational Therapists definition of occupational therapy. Retrieved October 2, 2007, from http://www.wfot.org/office_files/Definition%20of%20OT%20CM2004%20Final.pdf
World Health Organization. (2001). International Classification of Functioning, Disability and Health (ICF). Geneva, Switzerland: Author.
Authors
Jessica Kramer, PhD, OTR/L, is a Postdoctoral Fellow, Health and Disability Research Institute, Boston University, 635 Commonwealth Ave. (SAR 534), Boston, Massachusetts 02155. Telephone: 1-617-353-7492. E-mail: [email protected] (Corresponding author)
Patricia Bowyer, EdD, OTR/L, BCN, is Associate Professor and Associate Director, School of Occupational Therapy, Texas Woman’s University, 6700 Fannin Street, Houston, Texas 77030-2343.
Jane O’Brien, PhD, OTR/L, is Associate Professor, Department of Occupational Therapy, University of New England, 716 Stevens Avenue, Room 309, Portland, Maine 04103.
Gary Kielhofner, DrPH, OTR/L, FAOTA, is Professor and Wade/Meyer Chair, Department of Occupational Therapy, University of Illinois at Chicago (MC 811), 1919 West Taylor Street, Chicago, Illinois 60612.
Vanessa Maziero-Barbosa, PhD, OTR/L, is Pediatric Team Leader, Occupational Therapy, University of Illinois Medical Center at Chicago, Chicago, Illinois.
Mary Egan, Associate Editor, managed the review process for this article because of the Editor’s conflict of interest.
Canadian Journal of Occupational Therapy / Revue canadienne d’ergothérapie, February 2009, Volume 76, Number 1. © CAOT Publications ACE