Practice Innovations 2016, Vol. 1, No. 3, 197–217

© 2016 American Psychological Association 2377-889X/16/$12.00 http://dx.doi.org/10.1037/pri0000028

Virtual Reality for Psychological Assessment in Clinical Practice Thomas D. Parsons and Amanda S. Phillips


University of North Texas

Clinical psychologists are often asked to make recommendations regarding a client's ability to function in everyday activities (e.g., work, classroom, or shopping). Common approaches include a combination of paper-and-pencil (including some computer automated) assessments, behavioral observations of the client's performance during testing, and self-report measures about their perceived deficits. From this combination of assessments, observations, and self-reports, the psychologist is expected to make predictions about the client's ability to return to the classroom, return to work, and successfully complete other activities of daily living. While there are advantages to this approach, there are also some shortcomings, perhaps the most notable of which is the lack of ecological validity. Recent advances in virtual reality technologies allow for enhanced computational capacities for administration efficiency, stimulus presentation, automated logging of responses, and data analytic processing. These virtual environments allow for controlled presentations of emotionally engaging background narratives to enhance affective experience and social interactions. Within this context, virtual reality can allow psychologists to offer safe, repeatable, and diversifiable assessments of real-world functioning. Although there are a number of purported advantages of virtual reality technologies, there is still a need for establishing the psychometric properties of virtual reality assessments and interventions. This review investigates the advantages and challenges inherent in the application of virtual reality technologies to psychological assessments and interventions.

Keywords: virtual reality, neuropsychology, psychological assessment, ecological validity

Clinical psychologists are increasingly being asked to make prescriptive statements about everyday functioning. Unfortunately, results from many psychological tests are not easily generalizable to real-world functioning. Common approaches include a combination of history taking, self-reports, paper-and-pencil cognitive assessments, and the psychologist's observations of the client's behavior. From this combination, the psychologist is expected to make predictions about the client's ability to return to the classroom, return to work, and

This article was published Online First August 18, 2016. Thomas D. Parsons and Amanda S. Phillips, Department of Psychology, Computational Neuropsychology and Simulation Lab, University of North Texas. Correspondence concerning this article should be addressed to Thomas D. Parsons, Computational Neuropsychology and Simulation Lab, National Academy of Neuropsychology, Department of Psychology, University of North Texas, 1155 Union Circle #311280, Denton, TX 76203. E-mail: [email protected]

successfully complete other activities of daily living. The limitations inherent in this process have led to increasing calls for assessment methods that provide more generalizable data about client functioning (Burgess et al., 2006; Jurado & Rosselli, 2007). Virtual reality (VR) assessments have been developed to provide an enhanced understanding of client functioning in activities of daily living (Campbell et al., 2009; Matheis et al., 2007). While VR assessments are often presented as tools for neurocognitive assessment in research settings, this article aims to provide an introduction to VR assessment for clinical practice. First, current assessment methods are compared with assessment in virtual environments. Next, examples of VR assessments with clinical case examples are provided. Finally, considerations for the adoption of VR technologies in clinical practice are explored. We propose that the addition of VR to current psychological assessment batteries can improve the generalizability of test results and increase the utility and relevance of psychologists' recommendations to clients.


Self-Reports and Behavioral Observations

Although traditional assessment methods provide valuable information, there are a number of limitations in using this approach (see Table 1). Determining a client's functional capabilities requires precise control over the environment and the ability to adjust the potency or frequency of stimuli (White et al., 2014). This control is difficult to ensure in the traditional assessment environment. Given that psychological tests are typically administered one-on-one in a controlled environment, the psychologist may not receive a clear picture of the client's cognitive functions in everyday activities. Although psychologists often use information from parent and teacher reports to get an idea of the patient's everyday functioning, studies indicate that the agreement between parents and teachers is modest. For example, the concordance between parents and teachers on diagnosing attention-deficit/hyperactivity disorder (ADHD) varies from .30 to .50 depending on the behavioral dimensions being rated (Biederman, Faraone, Milberger, & Doyle, 1993; Biederman, Keenan, & Faraone, 1990; Mitsis, McKay, Schulz, Newcorn, & Halperin, 2000; Newcorn et al., 1994; de Nijs et al., 2004). Additionally, there is often not a strong overlap between these rating scales and standard tests of cognitive functioning, suggesting that these assessments may be reflective of different aspects of behavior (see Table 1 for a list of potential confounds).

Normative Comparison of Cognitive Performance

For many psychologists, the way to get beyond these limitations is to emphasize a normative comparison approach to psychological assessment. For example, a psychologist can give a continuous performance test (CPT) to assess a cognitive construct (e.g., attentional processing). Results from the CPT reveal the client's performance in areas of inattentiveness, impulsivity, sustained attention, and vigilance. The psychologist can compare the client's CPT results with reference groups to determine whether a client is performing as would be

Table 1
Comparison of the Advantages and Disadvantages of Traditional Assessment

Self-reports
Advantages: Allows identification of areas of client functioning that have been impacted or preserved. Provides information about the client's perceptions of their performance. Third-party reports provide data about client functioning in various environments.
Disadvantages: Controlled environment limits the generalizability of results. Third-party reports may have poor inter-rater reliability. Third-party reports often fail to strongly overlap with cognitive tests. Potential influence of self-report format on responses. Unable to measure automatic processing. Post hoc appraisal of past behavior.

Behavioral observation
Advantages: Provides insights into how the client may react when faced with difficult tasks.
Disadvantages: Controlled environment limits ecological validity of observations.

Normative comparison
Advantages: Allows for measurement of expected performance. Quantifies discrepancies in expected performance.
Disadvantages: Tests often measure abstract constructs rather than functioning. Many cognitive tests were developed for use with normal populations, not for functional assessment. Tests may not be representative of real-world situations. Tests lack generalizability to real-world situations. Test sensitivity and potency cannot be altered. Time-consuming and expensive administration.


expected given their achievements and educational attainment. An unfortunate limitation of this approach is that the tests used are typically based upon construct-driven assessments that do little to replicate the client's everyday activities. Furthermore, many of the tests commonly used today were never meant for clinical assessments. Burgess and colleagues (2006) point out that cognitive construct measures like the Tower of London and the Wisconsin Card Sorting Test (WCST) were not originally designed to be used as clinical measures. Instead, these measures were found to be useful tools for cognitive assessment among normal populations and then later found their way into the clinical realm to aid in assessing constructs that are important to carrying out real-world activities. This approach forces the psychologist to rely on measures that were designed for purposes other than predictions of real-world functioning. Burgess and colleagues (2006) argued that we need assessments capable of broadening our understanding about the ways in which the brain enables persons to interact with their environment and organize everyday activities. Although many cognitive tests do give us some insight into the client's everyday performance, they do not provide direct knowledge about shortcomings in the functional capabilities of the client, which limits the accuracy and utility of the psychologist's recommendations (Chaytor & Schmitter-Edgecombe, 2003; Manchester, Priestley, & Jackson, 2004).

Virtual Environments

One potential answer to the issues raised above is the addition of virtual environments to


the psychologist's cognitive assessment battery; these environments allow clients to become immersed within a computer-generated simulation (Campbell et al., 2009; Matheis et al., 2007). As technology advances, the potential of virtual environments for the assessment and rehabilitation of human cognitive processes is becoming more widely recognized, and these environments have a number of advantages (see Table 2). While a complete listing of all the virtual environment-based psychological assessments is outside the scope of this article, we provide examples of virtual environments along with clinical case examples.

Virtual Classroom

Various virtual classrooms are being validated for assessment of supervisory attentional processing. Virtual Classrooms are simulation environments that were designed to assess potential attention deficits by embedding cognitive tasks (e.g., CPT; Stroop) into a virtual classroom environment (Díaz-Orueta et al., 2013; Iriarte et al., 2016; Parsons et al., 2007; Rizzo et al., 2006; see Table 3). In these virtual classrooms the participant is seated at one of the desks and is surrounded by desks, children, a teacher, and a whiteboard, much as they would be in a real-world classroom. Various cognitive tasks can be presented on the whiteboard at the front of the room, and the participant performs a task (e.g., Stroop or CPT) with auditory (e.g., an airplane passing overhead, a voice from the intercom, the bell ringing) and visual (e.g., children passing notes, a child raising his hand, the teacher answering the classroom door, the principal entering the room) distractors in the background. The number and frequency of these distractors can be adjusted based on age, grade level, or other

Table 2
Comparison of the Advantages and Disadvantages of Assessment in a Virtual Environment

Advantages: Precise and consistent presentation of stimuli. Clinician control of dynamic stimuli. Greater ecological validity. Availability of function-led assessments. Programs administer, record, score, and analyze data. Environmental stimuli and test sensitivity can be adjusted. Environment can be adjusted to reflect clients' everyday surroundings.

Disadvantages: Construct-driven assessments will still lack some ecological validity. Lack of guidelines for development, administration, and interpretation. Pediatric and geriatric clients may need additional guidance. Geriatric clients may have trouble adjusting to the visual environment. Clients with autism spectrum disorder may become overstimulated. Clients with psychiatric disorders or high suggestibility may respond unfavorably.

Table 3
Recent Construct-Driven Studies (Within Past 10 Years) Using a Virtual Classroom Environment

Adams et al. (2009)
Sample: N = 35 boys ages 8-14 years; 19 participants with ADHD were compared with 16 age-matched controls.
Research design: Comparison of participant performance on the Continuous Performance Test with and without the Virtual Classroom.
Traditional tests: Behavior Assessment System for Children (BASC); VIGIL Continuous Performance Test.
Results: Greater specificity was found for the Virtual CPT. Although differences between the two groups were not significant, a strong trend was observed for correct target identification and commission errors.

Bioulac et al. (2012)
Sample: N = 36 boys ages 7-10 years; 20 participants with ADHD were compared with 16 age-matched controls.
Research design: ADHD and control children were first tested with the traditional computerized CPT. After 10 min they were tested with the virtual CPT.
Traditional tests: Continuous Performance Test (CPT II); Conners' parent rating scale; Child Behavior Checklist; State-Trait Anxiety Inventory.
Results: ADHD participants showed a significant performance decrement, a decrease of correct hits, and increased reaction time. ADHD children performed worse than controls on both the Virtual CPT and the traditional computerized CPT.

Díaz-Orueta et al. (2014)
Sample: N = 57 participants with ADHD between the ages of 6 and 16 years (26.3% female).
Research design: Convergent validity study between a Virtual CPT and a traditional computerized CPT; compared children undergoing medical treatment with a nonmedicated group.
Traditional tests: Conners' Continuous Performance Test; WISC-III (selected subtests).
Results: The Virtual CPT and the traditional computerized CPT showed significant correlations. The Virtual CPT (but not Conners' CPT) was able to differentiate between ADHD children with and without pharmacological treatment (inattention, impulsivity, processing speed, motor activity, and attention focus).

Gilboa et al. (2011)
Sample: N = 54: 29 with Neurofibromatosis type 1 (NF1; 69% female; mean age = 12.2) and 25 controls (72% female; mean age = 12.2).
Research design: Comparison of the Virtual CPT and the traditional tests. Cross-sectional design.
Traditional tests: Conners' Parent Rating Scales-Revised.
Results: Significant differences between the NF1 group and controls on omission errors and commission errors in the Virtual CPT. Poorer performance by NF1 children. Significant correlations between number of targets correctly identified, number of commission errors, and reaction time.

Gilboa et al. (2015)
Sample: N = 76: 41 children ages 8-16 with acquired brain injury and 35 age- and gender-matched controls.
Research design: Cross-sectional design.
Traditional tests: Test of Everyday Attention for Children; Wechsler Abbreviated Scale of Intelligence (Matrix Reasoning and Vocabulary); Conners' Parent Rating Scales-Revised.
Results: Significant between-group differences for number of targets correctly identified in the Virtual CPT. 45% of the children with ABI suffered marked deficits in sustained attention on the Virtual CPT. Attentional performance was found to be related to age, age at injury/diagnosis, and treatment.

Iriarte et al. (2016)
Sample: N = 1,282 children ages 6 to 16.
Research design: A normative study. Single application with descriptive design.
Traditional tests: None.
Results: Results were clustered into different categories for posterior analysis. Differences by age and gender were analyzed, resulting in 14 groups, 7 per sex group. Differences between visual and auditory attention were also obtained.

Lalonde et al. (2013)
Sample: N = 38 adolescents ages 13-17 years.
Research design: Descriptive/correlational study of a Virtual Classroom Stroop task. Convergent validity study.
Traditional tests: Delis-Kaplan Executive Function System (Trail Making, Tower, Twenty Questions, Verbal Fluency, Color-Word Interference); Behavior Rating Inventory of Executive Function; Child Behavior Checklist.
Results: The VR-Stroop task correlated with the D-KEFS and BRIEF. Performance on the VR-Stroop task was correlated with the paper-and-pencil Stroop task. The VR-Stroop more accurately reflected everyday behavioral functioning.

Nolin et al. (2009)
Sample: 8 children with acquired brain injury, ages 8 to 12 years.
Research design: Repeated measures comparisons.
Traditional tests: VIGIL Continuous Performance Test.
Results: No difference between the Virtual CPT and the traditional computerized CPT on total omissions. Significantly more commissions and longer reaction times in the Virtual CPT.

Nolin et al. (2012)
Sample: N = 50: 25 sports-concussed and 25 matched control adolescents.
Research design: Comparison of the traditional CPT and Virtual CPT, counterbalanced across participants.
Traditional tests: VIGIL Continuous Performance Test.
Results: The Virtual CPT showed greater sensitivity to the subtle effects of sports concussion. The sports concussion group reported more symptoms of cybersickness than the control group.

Parsons et al. (2007)
Sample: N = 20: 10 boys diagnosed with ADHD and 10 matched controls.
Research design: Intergroup comparison of participants with ADHD and normal controls.
Traditional tests: SWAN Behavior Checklist; Conners' CPT II; Stroop; Trail Making tests; NEPSY (visual attention, design fluency, verbal fluency); WISC-III (Digit Span, Coding, Arithmetic, Vocabulary); Judgment of Line Orientation.
Results: The ADHD group exhibited more omission errors, commission errors, and overall body movement in the Virtual CPT. The ADHD group was more impacted by distraction in the Virtual CPT. The Virtual CPT was correlated with traditional ADHD assessment tools, behavior checklist, and traditional computerized CPT.

Parsons and Carlew (2016)
Sample: Two studies reported. Study 1: 50 undergraduate students (mean age = 20.37; 78% female). Study 2: 8 students with high-functioning autism (mean age = 22.88) and 10 matched controls.
Research design: Study 1: Normative study comparing the Virtual Stroop to traditional tasks. Study 2: Cross-sectional design.
Traditional tests: Wechsler Test of Adult Reading; Delis-Kaplan Executive Functioning System: Color-Word Interference Test; Stroop task from the Automated Neuropsychological Assessment Metrics; Wechsler Abbreviated Scale of Intelligence-Second Edition.
Results: The Virtual Stroop task was correlated with traditional tasks and elicited an interference effect similar to those found in classic Stroop tasks. During the distraction condition of the Virtual Stroop, ASD group performance declined.

Pollak et al. (2009)
Sample: N = 37 boys ages 9-17 years, with (n = 20) and without ADHD (n = 17).
Research design: Crossover design comparing the Virtual Classroom CPT with presentation on a regular computer screen.
Traditional tests: Test of Variables of Attention (TOVA); Short Feedback Questionnaire.
Results: The ADHD group performed less well on all CPT tasks. The Virtual CPT showed effect sizes similar to the TOVA. Participants self-reported a preference for the Virtual CPT.

Pollak et al. (2010)
Sample: N = 27: 16 boys and 11 girls with a clinical diagnosis of ADHD.
Research design: Double-blind, placebo-controlled, crossover design.
Traditional tests: Virtual Classroom on a regular computer screen; Test of Variables of Attention (TOVA; Greenberg & Waldman, 1993).
Results: Methylphenidate (MPH) reduced omission errors to a greater extent on the VR-CPT compared with the no-VR CPT and the TOVA, and decreased other CPT measures on all types of CPT to a similar degree. Children rated the VR-CPT as more enjoyable compared with the other types of CPT.

Note. ADHD = attention-deficit/hyperactivity disorder; ASD = autism spectrum disorder; BASC = Behavior Assessment System for Children; CPT = Continuous Performance Test; D-KEFS = Delis-Kaplan Executive Function System; BRIEF = Behavior Rating Inventory of Executive Function; NEPSY = abbreviation of the term "neuropsychology"; NF1 = Neurofibromatosis type 1; TOVA = Test of Variables of Attention; VR = virtual reality; WISC-III = Wechsler Intelligence Scale for Children.


testing needs. In addition, there are a number of auditory distractors that can be adjusted, such as the sounds of vehicles passing by and ambient classroom noise. Other aspects of the virtual environment can also be adjusted, including the seating position of the client, the number of virtual students, and the sex of the teacher (Rizzo et al., 2006).

The case of G. T. A psychologist receives a referral for assessment of attentional problems. The client is G. T., a 10-year-old biracial male who is experiencing difficulty sustaining attention during class and listening to instructions. The information from the parent and teacher versions of the behavior rating scales is somewhat inconsistent. His teacher endorses a number of attention problems for G. T.: he frequently forgets instructions before completing schoolwork, fidgets often, and inappropriately leaves his seat during class. According to his teacher, G. T. also appears to be easily distracted during teacher presentations of class material. However, the parent rating reveals only minor difficulties with attention. As part of a larger psychological battery, G. T. assented to complete the Virtual Classroom CPT and Stroop tasks. The virtual classroom was modified to closely reflect his typical classroom environment by including the same number of students, his assigned desk location, and a teacher of the same sex. While immersed in the Virtual Classroom, G. T. sat at a desk in the middle of the virtual classroom that corresponded with where he sits in his real classroom at school. During the Stroop and CPT tasks, head movement tracking sensors logged G. T.'s frequent head movements as he looked away from the blackboard to observe vehicles passing by outside and virtual students moving around in the room. Moreover, the movement sensors logged his body movement and revealed that he was very physically active during the tasks. He showed poor performance (compared with normative reference groups) on the cognitive tasks. The results of testing indicated G. T. was showing several behavioral symptoms of ADHD. Review of the data showed that symptoms of inattention, such as difficulty maintaining attention, were likely due to his being easily derailed by classroom and outdoor distractors. This also affected his ability to follow through with instructions and listen to the virtual teacher. Of particular note are the symptoms of hyperactivity that were directly recorded. He frequently moved his legs and arms and squirmed around in his seat. He had difficulty sitting still and remaining seated. Several recommendations were made based on these observations. G. T. performed better when placed in the front of the virtual classroom, so it was recommended that this adjustment be made in his actual classroom. It was recommended that classroom windows remain closed if possible to reduce noise from outside sources. Additionally, it was recommended that people entering and exiting the room be reduced as much as possible. Finally, the specific distractors were provided to the parent to share with G. T.'s teacher, so that when these distractors occur the teacher will know it is necessary to redirect G. T.'s attention.

In summary, the Virtual Classroom has several advantages compared with traditional paper-and-pencil testing. It represents the typical classroom in which students with ADHD and autism often struggle to maintain attention and engagement, which increases ecological validity compared with traditional testing methods. In addition, functional impairments can be directly observed, which improves the quality of recommendations. The Virtual Classroom provides a record of which distractions caused the client to look away from the board and how distracting they were (how often the client looked away). These data can inform recommendations to parents and teachers to remove specific distractors from the learning environment. The impact of placing the client in different locations in the virtual classroom can also be assessed. The results of such testing allow for recommendations, based on direct observation, about where to seat the client in the classroom so they can perform at their best. The virtual classroom records the head and body movements of the child in real time. Thus, the level of activity is accurately recorded without the potential bias that may influence parent and teacher reports (Pas & Bradshaw, 2014; Sayal & Taylor, 2005). Collecting and recording these body movement data in an ecologically valid environment rather than relying on parent and teacher reports may be a better way of assessing the "H" in ADHD.
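To make the metrics described above concrete, the following minimal sketch illustrates how a logged session of this kind might be summarized into conventional CPT scores (omission errors, commission errors, reaction time variability) together with a simple count of look-aways derived from head tracking. The event format, field names, and the look-away threshold are hypothetical illustrations, not the actual Virtual Classroom software or its scoring rules.

from statistics import mean, pstdev

# Hypothetical per-trial records from a virtual classroom CPT session.
# Each trial notes whether the stimulus was a target, whether the client
# responded, the reaction time in seconds (None if no response), and the
# head yaw in degrees away from the whiteboard at stimulus onset.
trials = [
    {"target": True,  "responded": True,  "rt": 0.52, "head_yaw": 4.0},
    {"target": True,  "responded": False, "rt": None, "head_yaw": 38.0},
    {"target": False, "responded": True,  "rt": 0.41, "head_yaw": 6.0},
    {"target": False, "responded": False, "rt": None, "head_yaw": 2.0},
]

LOOK_AWAY_DEG = 30.0  # illustrative threshold for "looking away from the board"

def summarize(trials):
    hits = [t for t in trials if t["target"] and t["responded"]]
    rts = [t["rt"] for t in hits if t["rt"] is not None]
    return {
        "omission_errors": sum(1 for t in trials if t["target"] and not t["responded"]),
        "commission_errors": sum(1 for t in trials if not t["target"] and t["responded"]),
        "mean_rt": mean(rts) if rts else None,
        "rt_variability": pstdev(rts) if len(rts) > 1 else None,
        "look_aways": sum(1 for t in trials if t["head_yaw"] > LOOK_AWAY_DEG),
    }

print(summarize(trials))

Summaries of this kind are what allow the clinician to report, for example, how often a child looked away from the board and which distractors coincided with those look-aways.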


Virtual Shopping Tasks

Another virtual environment approach that may be of interest to clinicians is the use of a simulated shopping environment. A number of these virtual shopping environments have been developed to assess execution of everyday behaviors in a virtual shopping center (see Table 4). These virtual shopping environments have been developed in a manner that allows the psychologist to systematically vary the information load (which affects goal maintenance). For example, the Virtual Environment Grocery Store (VEGS) uses multiple adaptive trials in the assessment procedure by creating a pool of "multiple task assignments," empirically determining their baseline difficulty, and then adding conditions in the environment that affect baseline task difficulty via the manipulation of the density of items on shelves, the similarity of packaging, and the intensity and types of realistic irrelevant distractions (e.g., loudness and type of music in the background, loudspeaker announcements). The VEGS platform offers a range of difficulties that may be used to make the tasks sufficiently complex to avoid floor and ceiling effects. The use of a simulated environment allows older adults who may be physically or behaviorally impaired to be safely assessed. This would not be possible with the traditional Multiple Errands Test (MET), which requires these tasks to be completed in a real-world shopping environment (Parsons, McPherson, & Interrante, 2013).

The case of L. S. A psychologist receives a referral for assessment of memory problems. The client is L. S., a 58-year-old married Hispanic female who sought assessment after memory difficulties appeared to have resulted in her frequently misplacing items around the house and forgetting to attend appointments. She also continually asks family members to help her with shopping. Furthermore, she reported that at times she would get up and go to her kitchen only to feel confused because she could not remember what she intended to do in the kitchen. She had a diagnosis of osteoarthritis, which decreased movement in her right knee, and denied any previous history of head injury, substance abuse, or psychiatric illness. Data collected during the clinical interview suggested early-onset Alzheimer's disease, but results from the neuropsychological assessment were inconsistent.
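As an aside to the case, the difficulty manipulations described above (list length, shelf density, packaging similarity, and distraction intensity) can be thought of as a small set of clinician-adjustable parameters. The following sketch is purely illustrative; the parameter names and scales are invented and do not reflect the actual VEGS interface.

from dataclasses import dataclass

# Hypothetical parameters a clinician might adjust to titrate shopping-task
# difficulty, mirroring the kinds of manipulations described in the text.
@dataclass
class ShoppingTaskConfig:
    list_length: int             # number of items the client must remember
    shelf_density: float         # 0.0 (sparse) to 1.0 (crowded)
    packaging_similarity: float  # 0.0 (distinct) to 1.0 (highly similar)
    announcements_per_min: float # loudspeaker interruptions
    background_music_db: int

easy = ShoppingTaskConfig(3, 0.3, 0.2, 0.0, 45)
hard = ShoppingTaskConfig(8, 0.9, 0.8, 1.5, 65)
print(easy)
print(hard)

Stepping through a graded series of such configurations is one way to keep a task off the floor and ceiling for a particular client.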


To assess her cognitive functioning, L. S. was asked to complete the VEGS as part of her larger battery of tests. While immersed in the VEGS, L. S. often had difficulty with sustained concentration, which was demonstrated by the indirect route she took to the pharmacy and when shopping for items on the shopping list. She also had difficulty remembering how much money was budgeted for purchasing items and continually asked her psychologist what she was supposed to do next. She also performed poorly on various aspects of the prospective memory tasks. For example, she had to be reminded to go to the coupon machine after 5 min of shopping. She also had difficulty recalling the shopping items and frequently clicked on the shopping list for a reminder. These difficulties were notably amplified when distractors were present (e.g., announcements over the loudspeaker; people shopping in the same aisle as her). These results suggest that L. S. is demonstrating behavioral symptoms suggestive of early-onset Alzheimer's disease, including difficulty with concentration, problem solving, and memory for recently learned information. With permission from L. S., her caregiver was shown a virtual replay of her activities in the VEGS. It revealed areas of difficulty and supported a recommendation that L. S. be provided assistance with tasks that require sustained attention, such as shopping or household chores. The VEGS report also revealed that she needed assistance with tasks that require problem-solving skills, such as budgeting. It was recommended that family members provide L. S. with frequent reminders of newly learned information, such as dates and changes in routines, on an easily accessible calendar and in person.

Virtual Reality Apartment

Virtual Apartments are simulation environments that allow clinicians to see how the client behaves in their home environment. The purpose of the Virtual Apartment is to assess potential cognitive deficits using a simulated apartment. While a number of virtual apartment environments have emerged, there is a consistent emphasis upon (a) ecologically valid representations of the client's everyday activities in a living situation; (b) presentation of either traditional cognitive tasks (e.g., CPT; Stroop) or

Table 4
Recent Function-Led Studies (Within Past 10 Years) Using a Virtual Store

Atkins et al. (2015)
Sample: N = 44 healthy young adults and N = 39 healthy older adults.
Research design: The two groups were compared on virtual and traditional measures.
Traditional tests: MATRICS Consensus Cognitive Battery; UCSD Performance-Based Skills Assessment-Brief; Hopkins Verbal Learning Test; Brief Visual Memory Test.
Results: Each VR outcome measure displayed significant age-related differences. Traditional measures of cognitive functioning were significantly associated with VR performance across age groups.

Canty et al. (2014)
Sample: N = 30 severe TBI and N = 24 healthy controls.
Research design: Severe TBI (n = 30) vs. healthy control (n = 24) comparison.
Traditional tests: Lexical Decision Prospective Memory Task; Hopkins Verbal Learning Test; Trail Making Task; Controlled Oral Word Association; Hayling Sentence Completion; Letter-Number Sequencing; Sydney Psychosocial Reintegration Scale.
Results: Virtual shopping performance differentiated between TBI patients and the control group. Measures of prospective memory, neurocognitive functioning, and psychosocial functioning were significantly associated with virtual shopping performance among TBI patients.

Carelli, Morganti, Weiss, Kizony, & Riva (2008)
Sample: N = 24 healthy adults.
Research design: Descriptive study.
Traditional tests: n/a.
Results: The virtual supermarket may be a useful tool in executive assessment, particularly due to its temporal and accuracy measures.

Erez, Weiss, Kizony, & Rand (2013)
Sample: N = 20 children with TBI and N = 20 healthy controls.
Research design: Comparison of TBI (n = 20) to healthy controls (n = 20) in a virtual mall.
Traditional tests: n/a.
Results: Outcome measures of the VMall successfully differentiated between children with TBI and healthy controls.

Josman et al. (2006)
Sample: N = 20 stroke patients.
Research design: Comparison of the Virtual Supermarket to traditional tests.
Traditional tests: Behavioral Assessment of Dysexecutive Syndrome (BADS).
Results: Virtual shopping outcome measures (number of items purchased, number of correct actions, duration of pauses/stops) were significantly associated with the Key Search subtest of the BADS.

Josman et al. (2014)
Sample: N = 24 stroke patients and 24 matched controls.
Research design: Comparison of stroke (n = 24) and control (n = 24) participants.
Traditional tests: Behavioral Assessment of Dysexecutive Syndrome.
Results: Results revealed significant differences in executive functioning between post-stroke patients and the control group on virtual shopping outcome measures. Virtual shopping outcome measures were also significantly associated with the BADS.

Josman, Schenirderman, Klinger, and Shevil (2009)
Sample: N = 30 schizophrenia patients and N = 30 healthy controls.
Research design: Comparison of schizophrenia (n = 30) to healthy control (n = 30) performance.
Traditional tests: Behavioral Assessment of Dysexecutive Syndrome.
Results: Virtual shopping outcome measures were sensitive to differences in executive functioning between schizophrenia patients and controls and differentiated between patients with differing levels of executive function. Virtual shopping outcome measures were negatively related to symptoms of schizophrenia and the BADS.

Kang et al. (2008)
Sample: N = 20 stroke patients and N = 20 matched controls.
Research design: Comparison of stroke (n = 20) and control (n = 20) performance.
Traditional tests: n/a.
Results: Virtual shopping outcome measures consistently differentiated between stroke patients (unilateral brain lesion) and the control group.

Klinger, Chemin, Lebreton, and Marié (2006)
Sample: N = 5 Parkinson's patients and N = 5 age-matched healthy controls.
Research design: Comparison of Parkinson's (n = 5) to age-matched healthy control (n = 5) performance.
Traditional tests: n/a.
Results: Virtual shopping outcome measures revealed that patients walked a significantly longer distance. This suggests that the Virtual Supermarket may be a useful tool for assessing cognitive planning in patients with Parkinson's disease.

Okahashi et al. (2013)
Sample: N = 10 patients with brain damage and N = 10 age-matched controls; N = 10 healthy older adults compared with N = 10 healthy young adults.
Research design: Comparison of brain damage (n = 10) performance to age-matched controls (n = 10); healthy-old (n = 10) compared with healthy-young (n = 10).
Traditional tests: Behavioral Assessment of Dysexecutive Syndrome.
Results: Performance on the virtual shopping task was significantly associated with conventional cognitive assessments. Older participants and patients with brain damage scored significantly worse on the virtual shopping tasks.

Rand, Katz, and Weiss (2007)
Sample: N = 9 post-stroke patients, N = 20 healthy young adults, and N = 20 healthy older adults.
Research design: Comparison of post-stroke (n = 9) vs. healthy-young (n = 20) vs. healthy-old (n = 20).
Traditional tests: Behavioral Assessment of Dysexecutive Syndrome (Zoo Map subtest).
Results: Virtual shopping tasks successfully differentiated between all three groups. Virtual shopping outcome measures were significantly associated with the Zoo Map subtest.

Rand, Basha-Abu Rukan, Weiss, and Katz (2009)
Sample: N = 14 stroke patients and healthy controls (23 children, 44 young adults, and 26 older adults).
Research design: Comparison of stroke (n = 14) vs. healthy controls (children, n = 23; young adults, n = 44; older adults, n = 26).
Traditional tests: n/a.
Results: Virtual shopping time successfully differentiated between post-stroke patients and the control groups, with post-stroke patients taking significantly longer to complete the shopping task.

Raspelli et al. (2012)
Sample: N = 9 stroke patients, N = 10 healthy young adults, and N = 10 healthy older adults.
Research design: Comparison of stroke (n = 9) vs. healthy-young (n = 10) vs. healthy-old (n = 10).
Traditional tests: Test of Everyday Attention; Iowa Gambling Task; Stroop Test.
Results: Significant differences were found between all three groups on two outcome measures of the virtual shopping task (i.e., time, errors). The TEA (but not the IGT or Stroop) was significantly associated with virtual shopping performance in post-stroke participants.

Werner, Rabinowitz, Klinger, Korczyn, and Josman (2009)
Sample: N = 10 with mild cognitive impairment and N = 30 matched controls.
Research design: Comparison of MCI (n = 30) vs. control (n = 40).
Traditional tests: Behavioral Assessment of Dysexecutive Syndrome.
Results: Four of the eight virtual shopping outcome measures were associated with performance on the BADS. Virtual shopping performance successfully differentiated between MCI patients and the control group.

Zygouris et al. (2015)
Sample: N = 34 with mild cognitive impairment and N = 21 matched controls.
Research design: Comparison of MCI (n = 34) vs. control (n = 21).
Traditional tests: Mini-Mental State Examination; Rey-Osterrieth Complex Figure Test; Rey Auditory Verbal Learning Test; Rivermead Behavioural Memory Test; Test of Everyday Attention; Trail Making Test; Functional Rating Scale for Symptoms of Dementia; Functional Cognitive Assessment Scale.
Results: Virtual shopping performance was moderately correlated with traditional neuropsychological tests. Virtual shopping performance was able to differentiate between MCI patients and the control group; however, it was unable to differentiate MCI subtypes.

Note. BADS = Behavioral Assessment of Dysexecutive Syndrome; MCI = mild cognitive impairment; TEA = Test of Everyday Attention; IGT = Iowa Gambling Task; TBI = traumatic brain injury; UCSD = University of California, San Diego; VR = virtual reality; VMall = Virtual Mall.


everyday activities (without embedded cognitive stimuli) in the environment; (c) low and high distraction conditions; and (d) logging of behavioral metrics (e.g., ambient body movement, head/eye gaze). The ClinicaVR team has extended the virtual classroom paradigm to a virtual apartment that superimposes construct-driven stimuli (e.g., Stroop and CPT) onto a large TV. The Virtual Apartment Bimodal Stroop (VABS) is a 9.6-min task. Participants are seated in the living room, in front of a flat-screen TV. A kitchen is located to the left of the TV and a window is located to the right of the TV. The task builds on the unimodal (visually mediated) Stroop and measures cognitive interference using reaction time (RT), commission errors and omission errors, and RT variability. The VABS extends the traditional Stroop paradigm via the inclusion of bimodal (auditory and visually mediated) stimuli. During the task, distractors appear in different field-of-view locations in the environment. Some distractors are audiovisual: a school bus passing on the street and an SUV viewed through the window on the right; an iPhone ringing and vibrating on the table (in front of the participant); a toy robot moving and making noise on the floor (center). Auditory distractors include crumpling paper (left), a dropped pencil (left), a doorbell (left), a clock (left), a vacuum cleaner (right), a jackhammer (right), a sneeze (left), and jet noise (center). Visual distractors include a paper airplane (flying from left to right in front of the participant) and a woman walking in the kitchen (center). This condition was designed to assess RTs (simple and complex), selective attention (matching the auditory and visual stimuli), and external interference control (environmental distractors). In a preliminary study with 71 healthy adult participants, Henry and colleagues (2012) found that the VR-Apartment Stroop is capable of eliciting the Stroop effect with bimodal stimuli. Initial validation data also suggested that measures of the VR-Stroop significantly correlate with measures of the Elevator counting with distractors task (ranging from .38 to .62), the Continuous Performance Task (CPT-II; ranging from .32 to .42), and the Stop-It task (ranging from .37 to .39). Results from regression indicated that commission errors and variability of RTs on the VR-Apartment Stroop were significantly predicted by scores on the Elevator task and the CPT-II. These preliminary results suggest that the VR-Apartment Stroop is an interesting measure of cognitive and motor inhibition for adults.

Virtual reality apartment medication management assessment. A different Virtual Apartment has been developed by Kurtz, Baker, Pearlson, and Astur (2007) to assess medication management skills among those with schizophrenia. The Virtual Reality Apartment Medication Management Assessment (VRAMMA) lasts a maximum of 23 min. The environment consists of a four-room apartment containing a living room, bedroom, kitchen, and bathroom. During the practice phase, clients are asked to do a number of tasks that help build familiarity with the virtual environment: using a joystick to navigate to the living room and turn on the TV, checking the time on the interactive clock on top of the TV, turning on a light in the bedroom, checking the medication reminder post-it note in the kitchen (with a different prescription than the actual trial) after turning on the light and turning off the stove, and opening the medicine cabinet in the bathroom and taking out pills. During the testing phase, clients start in the living room and a message is displayed that gives them the medications to take and the time they must be taken. They must then use all the items from the practice phase to take the correct medication at the correct time (within 15 min). Several auditory distractors that simulate real-life distractions were included in the virtual apartment: a phone ringing, a doorbell ringing, a dog barking, and a police siren. These auditory distractors are introduced every 3 min. Significant events, such as turning on lights or checking the clock, and the location and movement of the client are recorded. The variables recorded during testing include quantitative errors, qualitative errors, time discrepancy, total distance traveled, clock checks, and reminder note checks. The VRAMMA is able to distinguish between healthy controls and those with schizophrenia, who perform worse. Kurtz et al. (2007) conducted a validation study with 25 schizophrenia patients and 18 matched healthy controls. They found patients with schizophrenia made more quantitative errors concerning the number of pills taken (p = .001), were less likely to take pills at the correct time (p = .01), and checked the clock less often (p = .001).
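As an illustration of how recorded variables of the kind listed above might be derived from a session log, the following sketch scores a hypothetical medication-management session. The event names, coordinate system, and prescription values are invented for illustration and are not the VRAMMA implementation.

import math

# Hypothetical event log from a simulated medication-management session:
# timestamp in seconds, event type, position in the apartment, pill count.
events = [
    {"t": 30,  "type": "check_clock",    "pos": (1.0, 2.0)},
    {"t": 95,  "type": "check_reminder", "pos": (4.0, 1.0)},
    {"t": 900, "type": "take_pills",     "pos": (6.5, 3.0), "count": 1},
]

prescribed_count, prescribed_time = 2, 900  # two pills due 15 min (900 s) into the task

def score(events):
    taken = [e for e in events if e["type"] == "take_pills"]
    n_taken = sum(e["count"] for e in taken)
    time_discrepancy = min((abs(e["t"] - prescribed_time) for e in taken), default=None)
    positions = [e["pos"] for e in events]
    distance = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    return {
        "quantitative_errors": abs(n_taken - prescribed_count),
        "time_discrepancy_s": time_discrepancy,
        "clock_checks": sum(1 for e in events if e["type"] == "check_clock"),
        "reminder_checks": sum(1 for e in events if e["type"] == "check_reminder"),
        "distance_traveled": round(distance, 1),
    }

print(score(events))

The appeal of such logs is that the clinician can see not only that performance broke down but where in the sequence of everyday actions it did so.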


The VRAMMA has several advantages compared with traditional testing. It shows good convergent validity with the Medication Management Ability Assessment (Patterson et al., 2002), a role-play task involving medication management that typically would be administered in an office setting. Clinicians are able to directly observe client behavior in a more ecologically valid environment rather than relying on self-report measures, which may be biased by a lack of insight from clients with schizophrenia (see Mintz, Dobson, & Romney, 2003, for a meta-analysis) or by cognitive and emotional disruptions (Anticevic, Schleifer, & Youngsun, 2015). The VRAMMA approximates an environment in which many clients would likely need to manage medication and includes common tools needed to carry out this activity, such as a reminder note and a clock. It also allows clinicians to assess when the process of performing this task is disrupted and to make specific recommendations from these observations.

The case of J. A. A psychologist receives a referral for assessment of difficulty with concentration and a resurgence of visual and auditory hallucinations. The client is J. A., a 23-year-old, single White male who was diagnosed with schizophrenia in 2013 after hospitalization for a single episode of psychosis. He was released from the psychiatric hospital after 1 month and given a prescription for antipsychotic medication. J. A. continued meeting regularly with a psychiatrist. Several months after being released from the psychiatric hospital, J. A. began experiencing difficulty with concentration and a resurgence of visual and auditory hallucinations. To ascertain whether medication adherence was at issue, J. A. was asked to complete the VRAMMA. He took fewer than the required number of pills and took the pills 3 min after the 15-min time lapse had expired. While in the virtual apartment, J. A. traveled around the apartment longer than necessary to complete the task and was distracted by the sound of the barking dog. Based on these results, it appears that J. A. is having difficulty taking the correct number of pills and using cues in the environment to take pills at the correct time. Positive symptoms are related to distance traveled in the VRAMMA (Kurtz et al., 2007), and J. A. traveled a considerable distance in the virtual apartment. Taken together with the pill-count and timing errors, these results suggest that his medication adherence is likely poor. It was recommended that J. A. use an automated pill dispenser that holds the correct number of pills for each day of the week, reminds J. A. to take medication if it is not taken at a scheduled time, and sends a notification to his psychiatrist if he stops taking medication. J. A. was also advised to reduce auditory distractions at home by turning on a fan to mask noise from outside sources.

Considerations in the Adoption of Virtual Reality Technologies

As can be seen, there are specific cases in which virtual reality-based psychological assessments may offer the psychologist ecologically valid assessments of day-to-day activities. That said, there are a number of considerations that go into the decision to add new technologies to one's battery of tests (see the Standards for Educational and Psychological Testing; American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014). While some of these issues are concerns related to the current generation of virtual reality-based assessments, others reflect outdated concerns from an earlier generation of platforms. For example, it used to be the case that the equipment needed to conduct such assessments was bulky and expensive. Recent advances in virtual reality technology have made the use of simulations in assessment more feasible and affordable. Smaller, easier-to-use equipment and reduced cost make virtual environment assessment a practical tool psychologists can use to gather precise functional data and to provide customized recommendations to clients (Bohil et al., 2011). Furthermore, validated virtual environments with automated stimulus presentation, data capture, and scoring are emerging that include sample characteristics for norm-referenced assessment. However, there are other concerns that continue to this day. The dearth of established guidelines for the development, administration, and interpretation of these assessments could lead to important psychometric pitfalls. At minimum, all virtual reality-based psychological assessments must have (and many now do) standardized instructions for administration and methods for scoring and interpreting test results provided in a test manual. While some virtual environments are being designed for limit testing, more work needs to be done in this area.
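For example, norm-referenced interpretation of an automatically scored virtual environment measure reduces to a simple computation once appropriate normative data are available. The sketch below is illustrative only; the normative mean and standard deviation are invented values rather than published norms.

from statistics import NormalDist

# A minimal sketch of norm-referenced interpretation: converting a client's raw
# score to a z score and percentile against the mean and standard deviation of
# the appropriate normative reference group.
def norm_referenced(raw_score, norm_mean, norm_sd, higher_is_better=True):
    z = (raw_score - norm_mean) / norm_sd
    if not higher_is_better:  # e.g., error counts or reaction times
        z = -z
    percentile = NormalDist().cdf(z) * 100
    return z, percentile

# Example: 14 commission errors against an invented norm of 8 (SD = 3).
z, pct = norm_referenced(raw_score=14, norm_mean=8.0, norm_sd=3.0, higher_is_better=False)
print(f"z = {z:.2f}, percentile = {pct:.1f}")

The harder problem, of course, is not the arithmetic but assembling normative samples that are appropriate for the client and documenting them in a test manual.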


Need for VE-Based Neuropsychological Assessments to Be Sufficiently Standardized

While the use of virtual environments for assessment is an emerging area of application, adoption will require substantial research and development to establish acceptable psychometric properties and clinical utility. Although a review of VR therapies has revealed statistically large effects on a number of affective domains (Parsons & Rizzo, 2008), findings must be interpreted with caution given that some VR studies do not include control groups, and many are not randomized clinical trials, limiting the confidence that the enhancements were caused by the VR intervention. An important resolution to the clinical heterogeneity of outcome measures in virtual environment research is the standardization of outcomes and the measures used to assess these outcomes. The outcome measures selected for standardization need to be relevant to the client's treatment and health status as well as psychometrically sound. Another pressing need among psychologists is the identification of VE-based assessments that reflect relevant underlying cognitive and behavioral capacities for assessments of varying degrees of psychological deficit. VE-based assessments must demonstrate relevance beyond that which is available through simpler means of assessment. As such, there is a specific need for VE-based assessments to be sufficiently standardized within the range and nature of responses available to participants within the virtual environment to allow for reliable measurement. VE-based assessment studies have often sought to establish construct validity by demonstrating significant associations between virtual environments and paper-and-pencil assessments (e.g., virtual classroom assessments). In the area of function-led assessment, multiple cognitive domains may be involved in the simulation of real-world tasks, and associations with traditional construct-driven tests may be necessarily lower than is typically desired to establish construct validity. In this context, the degree to which a VE-based model using a function-led approach accurately predicts relevant real-world behavior may be more important than large-magnitude associations with traditional construct-driven paper-and-pencil tests (e.g., virtual shopping tasks).
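The distinction drawn here between construct validity and function-led prediction can be pictured with a simple, entirely hypothetical numerical sketch: a virtual environment score may show only a moderate correlation with a traditional paper-and-pencil test yet still classify an independently rated real-world outcome well. The data below are invented for illustration.

# Illustrative contrast between convergent validity (correlation with a
# traditional test) and a function-led criterion check (agreement with an
# informant-rated real-world outcome). All values are invented.
ve_score       = [12, 15, 9, 20, 7, 14, 18, 11]    # virtual shopping errors (hypothetical)
paper_test     = [30, 28, 35, 22, 38, 27, 24, 31]  # traditional test score (hypothetical)
impaired_daily = [0, 1, 0, 1, 0, 0, 1, 0]          # informant-rated everyday impairment

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print("convergent r:", round(pearson(ve_score, paper_test), 2))

# Criterion check: does a cutoff on the VE score agree with rated impairment?
cutoff = 14
predicted = [1 if s > cutoff else 0 for s in ve_score]
accuracy = sum(p == o for p, o in zip(predicted, impaired_daily)) / len(ve_score)
print("criterion accuracy at cutoff", cutoff, ":", accuracy)

In a function-led framework, the second number (how well the measure predicts everyday functioning) may matter more than the first.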

In addition to psychometric and technical issues, clinicians, researchers, and policymakers will need to scrutinize emerging VE-based assessments to ensure adherence to legal, ethical, and human safety guidelines. The matching of specific technologies to the needs and capacities of the client will also require careful consideration by psychologists. How will virtual environments be experienced by certain clinical populations? In pediatrics and geriatrics, human guidance is critical for safeguarding the client’s full comprehension of assessment use and instruction. Geriatric clients in particular may find adjusting to virtual platforms, on the whole, difficult (Miller et al., 2014). Although virtual environments have been successfully applied to the study of age differences in spatial navigation among both healthy and demented elderly, virtual environmentbased tasks may be complicated by visual, auditory, or motor impairment. In comparison with younger controls, aging patients may perform more poorly on virtual environment-based tasks simply because of the normative aging process or because of lack of experience with computers. Maximum effort should be exerted to ensure equitability in sensorimotor capacities between younger and older adult subjects. A systematic review by Miller et al. (2014) introduced concern regarding the feasibility of home-use VE and gaming systems for physical rehabilitation of older adults. Such systems could be therapeutic to existing physical impairment or could be preventative. A main limitation is the low quality of studies investigating the effectiveness of these systems in older adult populations. Furthermore, some studies cited heightened fall risk, overexertion, and musculoskeletal irritation. There is need for more rigorous research methods including more consistent and strenuous reporting of exercise dosages and adherence. Moffatt (2009) suggests a number of helpful methodological practices in assessing older adults in research studies of navigation skills, including: (a) allowing aging patients to practice and ensure maximum familiarization with the computer platform, (b) including measures of computer experience, visual ability, and motor function, and (c) including assessments requiring the same sen-

This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

212

PARSONS AND PHILLIPS

sorimotor capacities, but not physical navigation. A potential barrier to adoption of virtual reality technology among clinicians is concerns about the ability of older adults to use this technology. Wandke, Sengpiel, and Sönksen (2012) have outlined several pervasive myths about older adults and human-computer interaction. They state that the myth that older adults are not interested in computers is not true. Dyck and Smither (1994) conducted a survey of adults over age 55 and found these older adults were less computer anxious and had more positive attitudes about computers than adults under 30. Wandke et al. (2012) also address the myth that “you can’t teach an old dog new tricks.” While there are decreases in brain plasticity in older age, this does not mean that learning ceases. It may be the case that some older adults have had frustrating experiences that lead to giving up on learning how to use new technologies. However, this effect should not be overgeneralized, as older adults are often interested in using newer technologies (Sayago, Sloan, & Blat, 2011). When assessing older adults for memory performance, it is important to avoid invoking stereotype threat. Chasteen, Bhattacharyya, Horhota, Tam, and Hasher (2005) found that invoking stereotype threat about memory abilities in older adults harms performance on memory tasks, particularly when these adults are aware their memory is being assessed. Subtle and unambiguous age-related stereotypes have also been found to influence older adults’ performance on a number of cognitive and physical tasks (see Lamont, Swift, & Abrams, 2015 for a meta-analysis), such as and map learning (Meneghetti, Muffato, Suitner, De Beni, & Borella, 2015) driving a car (Lambert et al., 2016), and hand grip strength (Swift, Lamont, & Abrams, 2012). It is possible that assessment in a virtual environment such as the VEGS could reduce stereotype threat by obscuring the true purpose of the task. Simulation technology may also be problematic for individuals with autism spectrum disorder. Given pronounced sensory issues commonly found in this population, the head mounted display or even the graphical interface may be experienced as intolerable. Moreover, there is concerned that too intense a stimulus presentation may aggravate sensory processing difficulties. This is an important concern though there is no

Both of these studies, however, involved screen-based virtual environments, even though the negative effects participants reported were low. As the field adopts newer and more immersive technologies such as head-mounted displays (HMDs), it is important to consider potential negative effects (e.g., dizziness, sickness, and displacement) to ensure that wearable technologies provide an acceptable space for children, especially children with disabilities. That said, there is some evidence that viewing through an HMD is experienced no more negatively than viewing a desktop display (Peli, 1998). Although more work is needed in this area, these findings support the potential of VR technology to provide increasingly close approximations of real-world cognitive demands (Bohil et al., 2011).

Furthermore, individuals with severe psychiatric conditions that involve limited self-awareness, high suggestibility, or an altered sense of reality (e.g., hallucinations, delusions) may respond undesirably to immersion in a virtual environment. There is also the potential for unintended negative effects of exposure: if stimulus intensity is taken too far, it may exacerbate rather than ameliorate a deficit. High-fidelity virtual environments may be confusing for these individuals and may increase negative behaviors after exposure. Flat-screen presentation of virtual environments has proven to be an acceptable alternative to full immersion and may be more appropriate for certain clinical groups (Attree et al., 1996).

Summary and Conclusions

There are many different types of tests available to psychologists for determining a client's level of functioning. The challenge for psychologists is to choose tests that provide accurate information for making prescriptive statements to clients, parents, and teachers based on the best available evidence. Self-reports can be helpful for collecting data on specific areas of functioning, but they suffer from lack of agreement among informants (Biederman et al., 1990, 1993; de Nijs et al., 2004; Mitsis et al., 2000; Newcorn et al., 1994), potential bias from clients (Schwarz, 1999), and reliance on post hoc appraisal of behavior.


Normative comparison of performance on cognitive assessments allows the psychologist to determine whether a client's performance is similar to or divergent from that of peers, but it provides limited information about daily functioning because such measures are construct-driven rather than function-led. The addition of virtual reality to a psychological battery provides an opportunity for psychologists to obtain more ecologically valid data about client functioning in simulations of real-world environments. Virtual environments allow the psychologist greater control over dynamic perceptual stimuli and the sensitivity of the test, while also capturing data about client performance in activities of daily living (Bohil et al., 2011). The computerized nature of these tests allows for the accurate capture of neurobehavioral data, as well as precise recording and scoring of neuropsychological test results (Campbell et al., 2009).
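To make this kind of automated recording and scoring concrete, the minimal sketch below computes conventional continuous performance test (CPT) indices, such as omission errors, commission errors, and mean hit reaction time, from a simulated event log. The event format, field names, and scoring rules are hypothetical illustrations and are not the data format of any published virtual classroom system.

```python
# Minimal, hypothetical sketch of automated scoring for a virtual-classroom
# continuous performance test (CPT); the event format and scoring rules are
# illustrative, not those of any published system.
from dataclasses import dataclass
from statistics import mean
from typing import List, Optional

@dataclass
class StimulusEvent:
    time_s: float            # when the stimulus appeared
    is_target: bool          # did the stimulus call for a response?
    responded: bool          # did the client respond?
    rt_s: Optional[float]    # reaction time in seconds, if a response occurred

def score_cpt(events: List[StimulusEvent]) -> dict:
    hits = [e for e in events if e.is_target and e.responded]
    omissions = [e for e in events if e.is_target and not e.responded]
    commissions = [e for e in events if not e.is_target and e.responded]
    return {
        "hits": len(hits),
        "omission_errors": len(omissions),
        "commission_errors": len(commissions),
        "mean_hit_rt_s": mean(e.rt_s for e in hits) if hits else None,
    }

# A tiny simulated event log such as a VR platform might write out.
log = [
    StimulusEvent(1.0, True, True, 0.42),
    StimulusEvent(2.5, False, True, 0.38),   # commission error
    StimulusEvent(4.0, True, False, None),   # omission error
]
print(score_cpt(log))
```

Because every stimulus and response is time-stamped automatically, such logs can also support analyses that paper-and-pencil administration cannot, such as changes in performance over the course of a session.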


Several virtual environments have already been developed for psychologists to use in neuropsychological assessment, such as the virtual classroom, the virtual grocery store, and the virtual apartment. Although more validation studies need to be conducted with virtual reality assessments, the benefits of using this technology for understanding daily functioning are clear. In addition, smaller and more affordable equipment makes virtual reality a viable option for use in psychological assessment. Preferably, virtual assessments will be added to flexible assessment batteries tailored to each individual within the context of the presenting question. Thus, traditional construct-driven measures should not be abandoned. In some circumstances, a construct-driven assessment may be more appropriate for measuring a specific construct that generalizes across environments; working memory, for instance, may be more easily assessed with a simple span task. The appeal of virtual assessment lies primarily in enriching stimulus presentation, logging additional variables, and building databases, rather than in automating the entire psychological battery or minimizing human interaction.

Virtual environments may add to an existing psychological battery when the psychologist is attempting to make accurate predictions about a person's behavior in the real world. In a virtual environment, the psychologist can measure the functional output of constructs within the complexity of a real-world setting. For example, in a virtual classroom, selective attention can be measured with tests such as the CPT embedded in a naturalistic classroom setting. In a virtual grocery store, prospective memory may be assessed with a real-world task such as remembering to pick up a prescription at the pharmacy (a minimal scoring sketch appears after the concluding paragraph below). Cognitive interference can be assessed in a virtual apartment that includes the kinds of distractors found in everyday environments.

Technological innovations such as virtual reality allow psychologists to expand their methods for designing and implementing assessments that capture an accurate picture of client limitations. These advances improve the prescriptive statements psychologists offer by providing the opportunity to observe client functioning in simulated real-world environments, a practice that might otherwise be infeasible because of clients' behavioral and physiological impairments. By adopting virtual reality as a method for assessing clients, psychologists increase the potential positive impact of neuropsychological assessment on clients' daily functioning through accurate understanding of neuropsychological deficits and directly relevant recommendations.
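As promised above, the hypothetical fragment below illustrates function-led scoring for a single virtual grocery store trial: whether the prospective-memory errand (picking up a prescription) was completed, how accurately the shopping list was filled, and how long the trial took. All task items, parameters, and field names are invented for illustration and do not describe any of the published tasks cited in this review.

```python
# Hypothetical sketch of function-led scoring for one virtual grocery store
# trial: prospective-memory errand, shopping-list accuracy, and total time.
from typing import List, Set

def score_shopping_trial(
    shopping_list: Set[str],
    items_purchased: List[str],
    visited_pharmacy: bool,
    total_time_s: float,
) -> dict:
    purchased = set(items_purchased)
    # Items bought that were neither on the list nor the errand target.
    intrusions = purchased - shopping_list - {"prescription"}
    return {
        "prospective_memory_met": visited_pharmacy,
        "list_items_correct": len(shopping_list & purchased),
        "list_items_missed": len(shopping_list - purchased),
        "intrusion_errors": len(intrusions),
        "total_time_s": total_time_s,
    }

print(score_shopping_trial(
    shopping_list={"milk", "bread", "eggs"},
    items_purchased=["milk", "bread", "cookies", "prescription"],
    visited_pharmacy=True,
    total_time_s=412.0,
))
```

Trial-level summaries of this kind could then be aggregated across trials or compared with normative data in the same way as traditional test scores.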

References

Adams, R., Finn, P., Moes, E., Flannery, K., & Rizzo, A. S. (2009). Distractibility in Attention-Deficit/Hyperactivity Disorder (ADHD): The virtual reality classroom. Child Neuropsychology, 15, 120–135. http://dx.doi.org/10.1080/09297040802169077
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Anticevic, A., Schleifer, C., & Youngsun, T. C. (2015). Emotional and cognitive dysregulation in schizophrenia and depression: Understanding common and distinct behavioral and neural mechanisms. Dialogues in Clinical Neuroscience, 17, 421–434.
Atkins, A. S., Stroescu, I., Spagnola, N. B., Davis, V. G., Patterson, T. D., Narasimhan, M., . . . Keefe,


R. S. E. (2015). Assessment of age-related differences in functional capacity using the Virtual Reality Functional Capacity Assessment Tool (VRFCAT). The Journal of Prevention of Alzheimer’s Disease, 2, 121. Attree, E. A., Brooks, B. M., Rose, F. D., Andrews, T. K., Leadbetter, A. G., & Clifford, B. R. (1996, July). Memory processes and virtual environments: I can’t remember what was there, but I can remember how I got there. Implications for people with disabilities. In ECDVRAT: 1st European Conference on Disability, Virtual Reality and Associated Technologies, Reading, UK (Vol. 118). Biederman, J., Faraone, S. V., Milberger, S., & Doyle, A. (1993). Diagnoses of attention-deficit hyperactivity disorder from parent reports predict diagnoses based on teacher reports. Journal of the American Academy of Child & Adolescent Psychiatry, 32, 315–317. http://dx.doi.org/10.1097/ 00004583-199303000-00011 Biederman, J., Keenan, K., & Faraone, S. V. (1990). Parent-based diagnosis of attention deficit disorder predicts a diagnosis based on teacher report. Journal of the American Academy of Child & Adolescent Psychiatry, 29, 698 –701. http://dx.doi.org/10 .1097/00004583-199009000-00004 Bioulac, S., Lallemand, S., Rizzo, A., Philip, P., Fabrigoule, C., & Bouvard, M. P. (2012). Impact of time on task on ADHD patient’s performances in a virtual classroom. European Journal of Paediatric Neurology, 16, 514 –521. Bohil, C. J., Alicea, B., & Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nature Reviews Neuroscience, 12, 752–762. Burgess, P. W., Alderman, N., Forbes, C., Costello, A., Coates, L. M. A., Dawson, D. R., . . . Channon, S. (2006). The case for the development and use of “ecologically valid” measures of executive function in experimental and clinical neuropsychology. Journal of the International Neuropsychological Society, 12, 194 –209. http://dx.doi.org/10.1017/ S1355617706060310 Campbell, Z., Zakzanis, K. K., Jovanovski, D., Joordens, S., Mraz, R., & Graham, S. J. (2009). Utilizing virtual reality to improve the ecological validity of clinical neuropsychology: An FMRI case study elucidating the neural basis of planning by comparing the Tower of London with a threedimensional navigation task. Applied Neuropsychology, 16, 295–306. http://dx.doi.org/10.1080/ 09084280903297891 Canty, A. L., Fleming, J., Patterson, F., Green, H. J., Man, D., & Shum, D. H. K. (2014). Evaluation of a virtual reality prospective memory task for use with individuals with severe traumatic brain injury. Neuropsychological Rehabilitation, 24, 238 – 265. http://dx.doi.org/10.1080/09602011.2014 .881746

Carelli, L., Morganti, F., Weiss, P. L., Kizony, R., & Riva, G. (2008). A virtual reality paradigm for the assessment and rehabilitation of executive function deficits post stroke: Feasibility study. 2008 Virtual Rehabilitation. http://dx.doi.org/10.1109/icvr.2008 .4625144 Chasteen, A. L., Bhattacharyya, S., Horhota, M., Tam, R., & Hasher, L. (2005). How feelings of stereotype threat influence older adults’ memory performance. Experimental Aging Research, 31, 235–260. http://dx.doi.org/10.1080/036107 30590948177 Chaytor, N., & Schmitter-Edgecombe, M. (2003). The ecological validity of neuropsychological tests: A review of the literature on everyday cognitive skills. Neuropsychology Review, 13, 181– 197. http://dx.doi.org/10.1023/B:NERV.00000 09483.91468.fb de Nijs, P. F., Ferdinand, R. F., de Bruin, E. I., Dekker, M. C. J., van Duijn, C. M., & Verhulst, D. C. (2004). Attention-deficit/hyperactivity disorder (ADHD): Parents’ judgment about school, teachers’ judgment about home. European Child & Adolescent Psychiatry, 13, 315–320. http://dx .doi.org/10.1007/s00787-004-0405-z Díaz-Orueta, U., Garcia-López, C., Crespo-Eguílaz, N., Sánchez-Carpintero, R., Climent, G., & Narbona, J. (2014). AULA virtual reality test as an attention measure: Convergent validity with Conners’ Continuous Performance Test. Child Neuropsychology: A Journal on Normal and Abnormal Development in Childhood & Adolescence, 20, 328 –342. Dyck, J. L., & Smither, J. A. (1994). Age differences in computer anxiety: The role of computer experience, gender and education. Journal of Educational Computing Research, 10, 239 –248. http:// dx.doi.org/10.2190/E79U-VCRC-EL4E-HRYV Erez, N., Weiss, P. L., Kizony, R., & Rand, D. (2013). Comparing performance within a virtual supermarket of children with traumatic brain injury to typically developing children: A pilot study. OTJR: occupation, participation and health, 33, 218 –227. Gilboa, Y., Kerrouche, B., Longaud-Vales, A., Kieffer, V., Tiberghien, A., Aligon, D., . . . Paule Chevignard, M. (2015). Describing the attention profile of children and adolescents with acquired brain injury using the Virtual Classroom. [Advance online publication]. Brain Injury, 29, 1691– 1700. http://dx.doi.org/10.3109/02699052.2015 .1075148 Gilboa, Y., Rosenblum, S., Fattal-Valevski, A., Toledano-Alhadef, H., Rizzo, A. S., & Josman, N. (2011). Using a Virtual Classroom environment to describe the attention deficits profile of children with Neurofibromatosis type 1. [Advance online publication]. Research in Developmental Disabil-


ities, 32, 2608 –2613. http://dx.doi.org/10.1016/j .ridd.2011.06.014 Henry, M., Joyal, C. C., & Nolin, P. (2012). Development and initial assessment of a new paradigm for assessing cognitive and motor inhibition: The bimodal virtual-reality Stroop. Journal of Neuroscience Methods, 210, 125–131. http://dx.doi.org/ 10.1016/j.jneumeth.2012.07.025 Iriarte, Y., Diaz-Orueta, U., Cueto, E., Irazustabarrena, P., Banterla, F., & Climent, G. (2016). AULA-Advanced Virtual Reality Tool for the Assessment of Attention: Normative Study in Spain. Journal of Attention Disorders, 20, 542–568. http://dx.doi.org/10.1177/1087054712465335 Josman, N., Hof, E., Klinger, E., Marie, R. M., Goldenberg, K., Weiss, P. L., & Kizony, R. (2006). Performance within a virtual supermarket and its relationship to executive functions in post-stroke patients. 2006 International Workshop on Virtual Rehabilitation. http://dx.doi.org/10.1109/iwvr .2006.1707536 Josman, N., Kizony, R., Hof, E., Goldenberg, K., Weiss, P. L., & Klinger, E. (2014). Using the virtual action planning-supermarket for evaluating executive functions in people with stroke. Journal of Stroke and Cerebrovascular Diseases, 23, 879 – 887. http:// dx.doi.org/10.1016/j.jstrokecerebrovasdis.2013.07 .013 Josman, N., Schenirderman, A. E., Klinger, E., & Shevil, E. (2009). Using virtual reality to evaluate executive functioning among persons with schizophrenia: A validity study. Schizophrenia Research, 115, 270 –277. http://dx.doi.org/10.1016/j.schres .2009.09.015 Jurado, M. B., & Rosselli, M. (2007). The elusive nature of executive functions: A review of our current understanding. Neuropsychology Review, 17, 213–233. http://dx.doi.org/10.1007/s11065007-9040-z Kang, Y. J., Ku, J., Han, K., Kim, S. I., Yu, T. W., Lee, J. H., & Park, C. I. (2008). Development and clinical trial of virtual reality-based cognitive assessment in people with stroke: Preliminary study. CyberPsychology & Behavior, 11, 329 –339. http://dx.doi.org/10.1089/cpb.2007.0116 Klinger, E., Chemin, I., Lebreton, S., & Marié, R. M. (2006). Virtual action planning in Parkinson’s disease: A control study. CyberPsychology & Behavior, 9, 342–347. http://dx.doi.org/10.1089/cpb .2006.9.342 Kurtz, M. M., Baker, E., Pearlson, G. D., & Astur, R. S. (2007). A virtual reality apartment as a measure of medication management skills in patients with schizophrenia: A pilot study. Schizophrenia Bulletin, 33, 1162–1170. http://dx.doi.org/10 .1093/schbul/sbl039 Lalonde, G., Henry, M., Drouin-Germain, A., Nolin, P., & Beauchamp, M. H. (2013). Assessment of

215

executive function in adolescence: A comparison of traditional and virtual reality tools. Journal of Neuroscience Methods, 219, 76 – 82. http://dx.doi .org/10.1016/j.jneumeth.2013.07.005 Lambert, A. E., Watson, J. M., Stefanucci, J. K., Ward, N., Bakdash, J. Z., & Strayer, D. L. (2016). Stereotype threat impairs older adult driving. Applied Cognitive Psychology, 30, 22–28. http://dx .doi.org/10.1002/acp.3162 Lamont, R. A., Swift, H. J., & Abrams, D. (2015). A review and meta-analysis of age-based stereotype threat: Negative stereotypes, not facts, do the damage. Psychology and Aging, 30, 180 –193. http:// dx.doi.org/10.1037/a0038586 Manchester, D., Priestley, N., & Jackson, H. (2004). The assessment of executive functions: Coming out of the office. Brain Injury, 18, 1067–1081. http://dx.doi.org/10.1080/02699050410001672387 Matheis, R. J., Schultheis, M. T., Tiersky, L. A., DeLuca, J., Millis, S. R., & Rizzo, A. (2007). Is learning and memory different in a virtual environment? The Clinical Neuropsychologist, 21, 146 –161. http://dx.doi.org/10.1080/1385404 0601100668 Meneghetti, C., Muffato, V., Suitner, C., De Beni, R., & Borella, E. (2015). Map learning in young and older adults: The influence of perceived stereotype threat. Learning and Individual Differences, 42, 77– 82. http://dx.doi.org/10.1016/j.lindif.2015.08 .015 Miller, K. J., Adair, B. S., Pearce, A. J., Said, C. M., Ozanne, E., & Morris, M. M. (2014). Effectiveness and feasibility of virtual reality and gaming system use at home by older adults for enabling physical activity to improve health-related domains: A systematic review. Age and Ageing, 43, 188 –195. Mintz, A. R., Dobson, K. S., & Romney, D. M. (2003). Insight in schizophrenia: A meta-analysis. Schizophrenia Research, 61, 75– 88. http://dx.doi .org/10.1016/S0920-9964(02)00316-X Mitsis, E. M., McKay, K. E., Schulz, K. P., Newcorn, J. H., & Halperin, J. M. (2000). Parent-teacher concordance for DSM–IV attention-deficit/hyperactivity disorder in a clinic-referred sample. Journal of the American Academy of Child & Adolescent Psychiatry, 39, 308 –313. http://dx.doi.org/10 .1097/00004583-200003000-00012 Newcorn, J. H., Halperin, J. M., Schwartz, S., Pascualvaca, D., Wolf, L., Schmeidler, J., & Sharma, V. (1994). Parent and teacher ratings of attentiondeficit hyperactivity disorder symptoms: Implications for case identification. Journal of Developmental and Behavioral Pediatrics, 15, 86 –91. http://dx.doi.org/10.1097/00004703-19940400000004 Nolin, P., Martin, C., & Bouchard, S. (2009). Assessment of inhibition deficits with the virtual classroom in children with traumatic brain injury: A


pilot-study. Studies in Health Technology and Informatics, 144, 240 –242. Nolin, P., Stipanicic, A., Henry, M., Joyal, C. C., & Allain, P. (2012). Virtual reality as a screening tool for sports concussion in adolescents. Brain Injury, 26, 1564 –1573. http://dx.doi.org/10.3109/ 02699052.2012.698359 Okahashi, S., Seki, K., Nagano, A., Luo, Z., Kojima, M., & Futaki, T. (2013). A virtual shopping test for realistic assessment of cognitive function. Journal of Neuroengineering and Rehabilitation, 10, 59. http://dx.doi.org/10.1186/1743-0003-10-59 Parsons, T. D., Bowerly, T., Buckwalter, J. G., & Rizzo, A. A. (2007). A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods. Child Neuropsychology, 13, 363–381. http://dx.doi.org/10 .1080/13825580600943473 Parsons, T. D., & Carlew, A. R. (2016). Bimodal virtual reality Stroop for assessing distractor inhibition in autism spectrum disorders. Journal of Autism and Developmental Disorders, 46, 1255– 1267. http://dx.doi.org/10.1007/s10803-0152663-7 Parsons, T. D., McPherson, S., & Interrante, V. (2013). Enhancing neurocognitive assessment using immersive virtual reality. Paper presented at the 17th IEEE Virtual Reality Conference, Greenville, SC. Parsons, T. D., & Rizzo, A. A. (2008). Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: A meta-analysis. Journal of Behavior Therapy and Experimental Psychiatry, 39, 250 –261. http://dx.doi.org/10 .1016/j.jbtep.2007.07.007 Pas, E. T., & Bradshaw, C. P. (2014). What affects teacher ratings of student behaviors? The potential influence of teachers’ perceptions of the school environment and experiences. Prevention Science, 15, 940 –950. http://dx.doi.org/10.1007/s11121013-0432-4 Patterson, T. L., Lacro, J., McKibbin, C. L., Moscona, S., Hughs, T., & Jeste, D. V. (2002). Medication management ability assessment: Results from a performance-based measure in older outpatients with schizophrenia. Journal of Clinical Psychopharmacology, 22, 11–19. http://dx.doi .org/10.1097/00004714-200202000-00003 Peli, E. (1998). The visual effects of head-mounted display (HMD) are not distinguishable from those of desk-top computer display. Vision Research, 38, 2053–2066. http://dx.doi.org/10.1016/s00426989(97)00397-0 Pollak, Y., Shomaly, H. B., Weiss, P. L., Rizzo, A. A., & Gross-Tsur, V. (2010). Methylphenidate effect in children with ADHD can be measured by an ecologically valid continuous performance test

embedded in virtual reality. CNS Spectrums, 15, 125–130. Pollak, Y., Weiss, P. L., Rizzo, A. A., Weizer, M., Shriki, L., Shalev, R. S., & Gross-Tsur, V. (2009). The utility of a continuous performance test embedded in virtual reality in measuring ADHDrelated deficits. Journal of Developmental and Behavioral Pediatrics, 30, 2– 6. http://dx.doi.org/10 .1097/DBP.0b013e3181969b22 Rand, D., Basha-Abu Rukan, S., Weiss, P. L., & Katz, N. (2009). Validation of the Virtual MET as an assessment tool for executive functions. Neuropsychological Rehabilitation, 19, 583– 602. http:// dx.doi.org/10.1080/09602010802469074 Rand, D., Katz, N., & Weiss, P. L. (2007). Evaluation of virtual shopping in the VMall: Comparison of post-stroke participants to healthy control groups. Disability and Rehabilitation, 29, 1710 –1719. http://dx.doi.org/10.1080/09638280601107450 Raspelli, S., Pallavicini, F., Carelli, L., Morganti, F., Pedroli, E., Cipresso, P., . . . Riva, G. (2012). Validating the neuro VR-based virtual version of the Multiple Errands Test: Preliminary results. Presence, 21, 31– 42. http://dx.doi.org/10.1162/ PRES_a_00077 Rizzo, A. A., Bowerly, T., Buckwalter, J. G., Klimchuk, D., Mitura, R., & Parsons, T. D. (2006). A virtual reality scenario for all seasons: The virtual classroom. CNS Spectrums, 11, 35– 44. http://dx .doi.org/10.1017/S1092852900024196 Sayago, S., Sloan, D., & Blat, J. (2011). Everyday use of computer-mediated communication tools and its evolution over time: An ethnographical study with older people. Interacting with Computers, 23, 543–554. http://dx.doi.org/10.1016/j .intcom.2011.06.001 Sayal, K., & Taylor, E. (2005). Parent ratings of school behaviour in children at risk of attention deficit/hyperactivity disorder. Acta Psychiatrica Scandinavica, 111, 460 – 465. http://dx.doi.org/10 .1111/j.1600-0447.2004.00487.x Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93– 105. http://dx.doi.org/10.1037/0003-066X.54.2.93 Swift, H. J., Lamont, R. A., & Abrams, D. (2012). Are they half as strong as they used to be? An experiment testing whether age-related social comparisons impair older people’s hand grip strength and persistence. British Medical Journal Open, 2, e001064. http://dx.doi.org/10.1136/bmjopen2012-001064 Wallace, S., Parsons, S., Westbury, A., White, K., White, K., & Bailey, A. (2010). Sense of presence and atypical social judgments in immersive virtual environments: Responses of adolescents with Autism Spectrum Disorders. Autism, 14, 199 –213. http://dx.doi.org/10.1177/1362361310363283


Wandke, H., Sengpiel, M., & Sönksen, M. (2012). Myths about older people's use of information and communication technology. Gerontology, 58, 564–570. http://dx.doi.org/10.1159/000339104
Werner, P., Rabinowitz, S., Klinger, E., Korczyn, A. D., & Josman, N. (2009). Use of the virtual action planning supermarket for the diagnosis of mild cognitive impairment: A preliminary study. Dementia and Geriatric Cognitive Disorders, 27, 301–309. http://dx.doi.org/10.1159/000204915
White, S. W., Richey, J. A., Gracanin, D., Bell, M. A., LaConte, S., Coffman, M., & Kim, I. (2014). The promise of neurotechnology in clinical translational science. Clinical Psychological Science. Advance online publication. http://dx.doi.org/10.1177/2167702614549801
Zygouris, S., Giakoumis, D., Votis, K., Doumpoulakis, S., Ntovas, K., Segkouli, S., . . . Tsolaki, M. (2015). Can a virtual reality cognitive training application fulfill a dual role? Using the virtual supermarket cognitive training application as a screening tool for mild cognitive impairment. Journal of Alzheimer's Disease, 44, 1333–1347.

Received April 25, 2016
Revision received July 23, 2016
Accepted July 25, 2016
