Effect of visual dependence and task loads on the TUG sub-components in old and young adults

Rania Almajid, Emily Keshner, William Geoffrey Wright, Carole Tucker
Department of Physical Therapy, Temple University, Philadelphia, PA
[email protected],
[email protected],
[email protected],
[email protected]
Erin Vasudevan
Department of Physical Therapy, Stony Brook University, Stony Brook, NY
[email protected]

Abstract— The Timed Up & Go (TUG) test is one of the recommended tools in rehabilitation settings for assessing the probability of falling in the elderly. However, the test does not incorporate any of the environmental or multitasking elements identified as increasing fall risk. Older adults are more visually dependent than younger adults; thus, they are more likely to fall when visual or other distractions disrupt their multisensory integration. We used virtual reality in a dual-tasking paradigm that coupled a dynamic visual environment with a common motor task during the TUG test to gain insight into sensorimotor integration in older adults. Our results suggest that negotiating a visual scene in a dual-tasking paradigm can reveal kinematic differences in motor behavior in some of the activities of the TUG test better than the standard TUG test score, which is time-based only.

I. INTRODUCTION

Falls are one of the leading causes of disability and mortality in older adults worldwide [1]. A key element in overcoming the societal problem of falls is to accurately detect kinematic differences between fallers and non-fallers at an early stage. Existing studies have used the Timed Up and Go (TUG) test to evaluate fall risk. The TUG is one of the recommended tools in rehabilitation settings to assess the probability of falling in the elderly [2]. More recent studies have found that analyzing the subcomponents of the TUG provides a more accurate prediction of fall risk [3]. Fallers exhibit different movement strategies during these subcomponent activities, such as sit-to-stand and turning, than non-fallers [4]. Fall risk has been linked with difficulty performing two tasks at the same time [6], e.g., walking through a busy market while holding a beverage. To our knowledge, only one study has examined the effect of adding a manual task to the TUG subcomponents in healthy elders with no history of falls [5].
None of the existing studies on the TUG test and fall risk have explored the impact of adding a common motor task to the TUG subcomponents. By exploring the effects of adding a common manual task to the different phases of the TUG test, we expect to clarify which functional activity is most useful for differentiating between those at high and low risk of falling. Furthermore, no studies have yet considered the role of optic flow in evaluating fall risk using the TUG test. Older adults are more visually dependent than younger adults: they rely on visual inputs to maintain postural balance more than on other sensory systems [7]. Thus, in a busy visual environment, these individuals are more likely to fall when visual distraction interferes with multisensory integration. There is a need to clarify how older adults process visual inputs when executing normal daily activities and to determine whether dysfunction in visual input processing is correlated with increased fall risk. Therefore, the overall goal of this study is to understand how attention-demanding factors that occur during normal functional activity (i.e., task load and visual flow) contribute to fall risk in older adults. This project examined kinematic properties of the motor behavior of older adults during attentionally demanding conditions while performing a clinical test that incorporates functional tasks. Results from this project can be used to identify differences in performance of fallers versus non-fallers under multitasking conditions. This study is the first to consider how dynamic visual inputs affect movement strategies during the TUG test.

II. METHODS

A. Participants

Seven young (25.7±3.3 yrs) and three older healthy adults with no history of falling (70.03±4.5 yrs) gave informed consent to participate in this study.
978-1-5090-3053-8/17/$31.00 ©2017 IEEE
B. Apparatus and Materials

Body movement during the TUG test was captured using Trigno™ wireless sensors (Delsys Inc.). Sensors were placed on the participants' sternum, lumbar spine, both wrists, and both shanks. Each sensor includes a tri-axial accelerometer (range 40 m, resolution 16 bit, sampling frequency 148 samples/sec, noise < 3.5 mg); a tri-axial gyroscope (range 40 m, resolution 16 bit, sampling frequency 148 samples/sec, noise < 0.05°/sec); and a tri-axial magnetometer (range 40 m, resolution 16 bit, sampling frequency 74 samples/sec, noise < 0.4 µT).

C. Virtual Environments

The Oculus Rift Development Kit 2 (Oculus VR, 2014) was used to display either a normal view of the room or a distracting virtual visual scene of random snowflake dots rotating at a constant speed (5°/sec) in the pitch-up and pitch-down directions. The Oculus Rift consists of two displays, one for each eye, with a resolution of 960 × 1080 pixels per eye, a maximum refresh rate of 75 Hz, and a weight of 440 grams. An Ovrvision mount, a high-performance USB stereo camera customized for the Oculus Rift, allowed users to replace the virtual display with a view of the real world. The Ovrvision's resolution is 640 × 480 per eye (1280 × 480), its frame rate is 60 FPS, its angle of view is H90°, V75°, its latency is 50 msec, its pixel count is 0.6 MP, and its weight is 55 grams. Participants were asked to start the TUG test after 10 sec of viewing the visual scene motion.

D. Procedure

Subjects were asked to wear comfortable shoes and to practice the TUG test once prior to testing. We started collecting data on the second trial, after a training trial, and analyzed the average of two trials of each condition, similar to [8]. In the standard TUG test, when the command "go" was given, participants rose from an armless chair, walked 3 meters at a comfortable speed, turned 180°, walked back to the chair, and sat down [9].
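For illustration, the turning subcomponent of the TUG described above can be segmented from a gyroscope yaw-rate signal such as the one recorded by the lumbar sensor. The following is a minimal sketch on synthetic data, not the study's actual pipeline; the 30°/sec threshold and the idealized signal shape are assumptions introduced here for the example.

```python
import numpy as np

def detect_turn(yaw_rate):
    """Return (start, end) sample indices of the turn, found as the
    longest run of samples whose |yaw rate| exceeds a threshold.
    The 30 deg/s threshold is a hypothetical choice for this sketch."""
    active = np.abs(yaw_rate) > 30.0
    # locate rising/falling edges of the boolean "turning" signal
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        ends = np.r_[ends, len(active)]
    longest = np.argmax(ends - starts)
    return starts[longest], ends[longest]

# synthetic example: 10 s at 148 samples/s, with a 2 s turn at ~90 deg/s
fs = 148
t = np.arange(0, 10, 1 / fs)
yaw = np.zeros_like(t)
yaw[(t > 4) & (t < 6)] = 90.0
start, end = detect_turn(yaw)
print(start / fs, end / fs)   # roughly 4.0 s and 6.0 s
```

Real signals would of course be noisy, so a practical version would low-pass filter the yaw rate before thresholding.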
A line of tape and a cone were placed on the floor to indicate where subjects should turn. The motor task consisted of holding a half-full glass of water covered with a plastic cover and a thin piece of fabric. This task was chosen because it is a common functional task in the activities of daily living of older adults. Eight conditions were presented: TUG, TUG with the motor task (TUGmotor), TUG while wearing the Oculus Rift without additional tasks (TUGOculus_Rift), TUG while wearing the Oculus Rift with the motor task (TUGmotor_Oculus_Rift), TUG with the visual task in the pitch-up (TUGvisual(PitchUp)) and pitch-down (TUGvisual(PitchDown)) directions, and TUG with both motor and visual tasks (TUGmotor_visual(PitchUp) and TUGmotor_visual(PitchDown)).

E. Data Analysis

All signals were analyzed using custom-made MATLAB code (MathWorks, Natick, MA, USA). Data were analyzed using a one-way repeated-measures (within-subjects) ANOVA. Post hoc analyses were performed when significance was found.
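The study's analysis was done in MATLAB; purely for illustration, the one-way repeated-measures ANOVA can be sketched in Python from its standard sum-of-squares decomposition. The subject count, condition subset, and effect sizes below are synthetic placeholders, not the study's data.

```python
import numpy as np

def rm_anova_oneway(X):
    """One-way repeated-measures ANOVA.
    X: (n_subjects, n_conditions) array of one score per cell.
    Returns (F, df_conditions, df_error)."""
    n, k = X.shape
    grand = X.mean()
    ss_cond = n * np.sum((X.mean(axis=0) - grand) ** 2)   # between conditions
    ss_subj = k * np.sum((X.mean(axis=1) - grand) ** 2)   # between subjects
    ss_total = np.sum((X - grand) ** 2)
    ss_error = ss_total - ss_cond - ss_subj               # residual
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_error / df_error)
    return F, df_cond, df_error

# synthetic TUG times (s): 7 subjects x 4 conditions, assuming each
# added task load slows performance slightly (hypothetical effect)
rng = np.random.default_rng(1)
base = rng.normal(10.0, 1.0, size=(7, 1))     # per-subject baseline
effect = np.array([0.0, 0.4, 0.8, 1.2])       # hypothetical condition effects
X = base + effect + rng.normal(0.0, 0.2, size=(7, 4))
F, df1, df2 = rm_anova_oneway(X)
print(f"F({df1},{df2}) = {F:.2f}")
```

The F statistic would then be compared against the F distribution with (df1, df2) degrees of freedom; in practice a library routine such as statsmodels' `AnovaRM` does the same computation.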