Fall Detection and Intervention based on Wireless Sensor Network Technologies


A. Liu Cheng*, C. Georgoulas, and T. Bock



Chair for Building Realization and Robotics, Technische Universität München, Germany
*Corresponding author. Tel.: +49 176 779 22642 / +593 98 386 4261. E-mail: [email protected]

ABSTRACT

The present paper details the development of a cost-effective Fall-Detection and -Intervention System (FaDIS) based on Wireless Sensor Network (WSN) technologies. The system is designed to integrate into existing low-tech homes in order to enable Ambient Assisted Living environments, in which software and hardware devices facilitate safe and proactive independent living. FaDIS is designed to operate either as an add-on component to existing centralized solutions (complete or otherwise) or as an integral yet independent component of decentralized, scalable, and expandable solutions. Accordingly, FaDIS was implemented in two parts. Part 1 was developed as a scaled proof-of-concept that served as the foundation for Part 2, which is the principal focus of the paper. In Part 2, FaDIS is developed as a fully operational, real-scale system that uses a self-healing mesh network protocol, in which its own BeagleBone Black development platform serves as the sink node, and in which two Class 2M 10° line lasers are used in conjunction with Light Dependent Resistors to gauge the probability of an emergency event based on the estimated dimensions of the collapsed object. In both parts, if FaDIS construes the probability of an emergency event as high, the same series of corresponding robotic response-actions intervenes locally while automated notifications are sent to emergency personnel, care-takers, and/or family members via both wireless and cellular technologies. A series of sample runs is detailed and described in the present work in order to demonstrate and to argue for the feasibility and functionality of FaDIS both as a fall-detection and -intervention system in particular and as a WSN-based system in general.

Keywords: Fall detection; Wireless Sensor Networks; Embedded Systems; Ambient Assisted Living; Ambient Intelligence


1 Introduction


The independent exercise of Activities of Daily Living (ADLs) becomes increasingly difficult to sustain as people enter a stage of physical and cognitive decline typically associated with the natural aging process [1]. If no intervention and/or mitigation solutions are proposed, this decline may progress prematurely and unnecessarily to the extent that those affected lose the ability to care for themselves and to live independently at home. From a practical and logistical standpoint, this represents an unexpected burden to family members and/or an additional load on institutionalized nursing-care systems. These considerations are particularly important since every emerging industrial nation is experiencing a debilitating age-related demographic change [2]. There exist Ambient Assisted Living (AAL) solutions imbued with Ambient Intelligence (AmI) that attempt to address this problem. However, such solutions tend to be based on centralized models that are costly (see Section 2) and therefore inaccessible to the general public. Intelligent, resilient, and, above all, decentralized solutions with respect to AmI and AAL are therefore necessary to promote and to sustain consistently affordable and continuously evolving independent living. In the present paper, a Fall-Detection and -Intervention System (FaDIS) built on such solutions via Wireless Sensor Network (WSN) technologies is proposed as part of a larger unified yet distributed AAL solution.


The detection aspect of FaDIS is based on Pyo and Hasegawa's laser reflectivity method [3], which consists of a laser-emitting component used in conjunction with a light-sensing component to generate a grid of theoretical intersections against which blocking objects may be detected via instantiated intersections. The form of this grid is not predetermined, as the user is free to distribute (uniformly or otherwise) the Light Dependent Resistors (LDRs) that constitute the light-sensing component across an adjacent wall and an opposite wall of the deployment space with respect to the position of the laser-emitting component, as long as these are within the laser's projection range. The principal innovation with respect to the detection aspect of the system is that it needs no a priori knowledge of the position of these LDRs with respect to either the overall spatial dimensions of the deployment area or the laser-emitting component in order to generate its operational intersection grid. From four givens discussed in Section 4.4, the system's self-configuration and -calibration mechanisms extrapolate the position of the deployed LDRs after every scan iteration, thereby potentially, though not necessarily, generating an updated operational grid each time. Two core innovative consequences follow from this feature. The first is that the distribution of LDRs may be updated between scans without service interruption or the need for manual system reconfiguration or restart. The second is that the system may be deployed in a variety of architectures, across areas of differing emphasis and programmatic diversity. That is to say, since the resolution of the generated grid depends on the proximity of the LDRs to one another, being higher in some areas and lower in others, the user may designate areas for closer observation by increasing the local intersection resolution.
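To make the notion of an operational intersection grid concrete, the following sketch computes theoretical intersections from an assumed laser origin, a set of extrapolated LDR coordinates, and a mirror wall. The coordinate frame, the helper names, and the modelling of reflected lines of sight via mirrored (virtual) LDRs are illustrative assumptions rather than the deployed implementation, which, as described in Section 4.6, works from motor step counts.

```python
# Illustrative sketch (not the deployed code): builds a grid of theoretical
# intersections from a laser origin, a set of LDR coordinates, and a mirror wall.
# The coordinates, the mirror-wall position, and all helper names are assumptions.

def seg_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-9:                                  # parallel or collinear
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def theoretical_grid(laser, ldrs, mirror_x):
    """Direct sight lines run laser -> LDR; indirect (reflected) ones are modelled
    by mirroring each LDR across the mirror wall (x = mirror_x) and clipping the
    straight line at the mirror, so the in-room leg runs mirror point -> LDR."""
    direct = [(laser, ldr) for ldr in ldrs]
    indirect = []
    for (lx, ly) in ldrs:
        mirrored = (2 * mirror_x - lx, ly)               # virtual LDR behind the mirror
        hit = seg_intersection(laser, mirrored, (mirror_x, -1e6), (mirror_x, 1e6))
        if hit:
            indirect.append((hit, (lx, ly)))             # reflection point -> real LDR
    grid = []
    for a in direct:
        for b in indirect:
            p = seg_intersection(*a, *b)
            if p:
                grid.append(p)                           # one theoretical intersection
    return grid
```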


In the event of an identified fall-related emergency instance, and as part of the intervention aspect, a series of interrelated yet independent low-cost and low-energy intervention mechanisms is executed. The principal innovation with respect to this aspect lies in the technologically heterogeneous interoperability involved within and across said mechanisms. A unifying framework brings together proprietary services and technologies with open-source and accessible technologies to provide the user with free (in the case of Internet-based intervention notifications/actuations) or low-cost (in the case of 3G-based notifications/actuations) yet reliable solutions. Between the detection and the intervention aspects, three innovative consequences may be identified. The first removes the user from constant operational responsibility: once the four givens discussed in Section 4.4 have been provided, the system operates independently of user intervention. The second allows the user to personalize the system by enabling the creation of custom theoretical intersection grids with varying resolutions depending on preference or need. Finally, the third brings together a variety of technological services at no additional cost (with respect to Internet-based notifications/actuations) or at very low cost (with respect to 3G notifications/actuations).


FaDIS is demonstrative of the potential of Wireless Sensor Network (WSN) technologies as facilitators of affordable and decentralized solutions. In order to demonstrate this, FaDIS was developed in two parts. In Part 1, it was developed to validate the feasibility of low-cost components as well as to demonstrate compatibility with existing centralized AAL solutions, thereby rendering it at least a viable add-on if not a stand-alone solution. In Part 2, it was developed to confirm the reliable functionality of high-quality yet cost-effective components in real-scale environments as well as to demonstrate decentralized scalability and resilience with respect to evolving functional requirements (see Section 3). Part 1 was implemented within the context of the PassAGE project [4], where AmI systems were mediated via a central terminal, and Part 2 within that of the LISA Habitec project [5], where stand-alone terminals created a decentralized yet unified AAL environment. In both parts, FaDIS extended the systems initially deployed in their respective foundation projects. However, the true character of FaDIS as a cost-effective WSN-based solution that provides automated fall-detection and -intervention services is showcased in Part 2, which is the focus of the present paper.


In order to detail FaDIS, the paper is organized as follows. In Section 2, a brief overview of existing related solutions is provided. In Section 3, FaDIS's concept and approach are outlined in order to explain the underlying design motivations. In Section 4, the methodology and implementation are described and justified in detail. In Section 5, the results of actual sample runs are extensively described and the corresponding limitations detailed. Finally, conclusions are provided in Section 6.


2 Related work


In this paper, FaDIS is discussed under two qualifications: as an intelligent service with the function of detecting unexpectedly collapsed objects, and as an example of WSN-based solutions. Since the WSN character of FaDIS is illustrative of a variety of other possible intelligent services, the promise of the second qualification takes precedence over the first. As a result, this section juxtaposes the advantages of WSN technologies with those of existing centralized AAL and AmI complete solutions. However, taken in its first qualification, FaDIS may be considered against similar promising intelligent services such as the fall-detection system based on Kinect's infrared sensors by Mastorakis and Makris [6]; the systems based on smartphone devices by Wu [7] as well as Abbate, Avvenuti, Bonatesta, Cola, Corsini, and Vecchio [8]; and the fall-detection and -intervention system based on wearable sensors by Wu, Zhao, Zhao, and Zhongs [9], just to name a few¹. The salient difference between FaDIS and other solutions that offer similar services is that FaDIS is representative of an emerging type of decentralized solution whose core technologies may be used to spawn a variety of other services and functions that integrate seamlessly into one system architecture, which yields a more robust performance across the entire system.


The prevailing type of AAL solution is conceived as a highly integrated and personalized complete solution based on a centralized architecture. On this type of architecture there exist robotized and intelligent solutions, e.g., Robotic Room [11], Wabot-House [12], and The Aware Home [13], as well as ambitious AmI and AAL implementation proposals that make use of sensor networks for intelligent robots [14]. However promising these solutions may be, their cost still makes them available to only a minority of the aging population. One reason why these and other present solutions are costly is that the research and industry sectors tend to view them as "complete solutions", "often including overlapping of almost equal or homogeneous sensors" [15]. Another reason is that the computation of self-learning methods requires considerable infrastructure to produce a useful dataset from which to draw substantial conclusions. In more recent research projects, as mentioned by Chiriac and Rosales [16], such as SAMDY [17] and eHome [18], these system costs alone "are estimated [to be] between 3,500 EUR and 5,000 EUR" [16]. Yet another reason is that AmI/AAL solutions require customized planning and installation by experts, which in part causes the "enormous costs of today's single solutions … which are too expensive for private buyers as well as health and care insurance providers" [19]. Moreover, activity monitoring in AAL requires the implementation of a system that is able to track the movements and positions of the user. On the whole, indoor tracking solutions based on triangulation methods and the like provide strong and reliable performance, but "these architectures require structured environments and consequently high installation costs" [20].


WSN solutions, however, provide a viable alternative. WSNs do away with the notion that AAL solutions must be "complete solutions" in which sensors and actuators are deeply embedded and integrated into the very architecture. WSNs are decentralized solutions that avoid the high costs generally associated with highly integrated systems. Georgoulas, Linner, Kasatkin, and Bock [21] showed that a solution that seeks to reduce the complexity of functions, and therefore cost, should be one that does not centralize all services and functions in a service robot or in a static location, but rather one that strategically distributes services across a decentralized controlled environment. Furthermore, WSNs are more energy efficient, and sensor nodes can be configured to shut down at particular intervals depending on particular needs and/or the desired sense-data resolution. This is a significant advantage over sensor nodes running on a wired or WiFi system, since the latter cannot be intermittently turned off without sacrificing performance and functionality. Over the last decade, and particularly in the last five years (see, for example, [22–26]), work on Wireless Sensors and WSNs has demonstrated excellent performance, in terms of energy consumption and operation, and reliability, giving them a solid track record for future development.

¹ For a comprehensive overview of current trends involving Android OS in fall-detection systems, refer to the discussion and comparisons provided by Luque, Casilari, Morón, and Redondo [10].


3 Concept and Approach

Since Part 1's core concept, methodology, and implementation have already been detailed elsewhere [28], the present paper focuses on the development of Part 2. However, an overview of Part 1 is provided in order to situate the discussion of Part 2. Part 1 demonstrated the feasibility and functionality of the proposed Fall-Detection and -Intervention System (FaDIS), where the fall-detection component was and remains based on the laser reflectivity scheme developed by Pyo, Hasegawa, Tsuji, Kurazume, and Morooka [3]². Moreover, the work detailed in this paper partly builds on a WiFi-dependent assistive robotic system previously developed and deployed [29] in a real-scale AAL environment in the Robotic Laboratory of the Chair for Building Realization and Robotics (BR2) at Technische Universität München (TUM). The feature of this system pertinent to the present work consisted of a TurtleBot rover controlled via a Graphical User Interface (GUI) that triggered Secure Shell (SSH) commands to execute Robot Operating System (ROS) routines from a central terminal. These routines would take the rover to a series of predetermined destinations, specified by coordinates on the environment's map previously generated in ROS's 3D visualization tool, Rviz.

Figure 1. FaDIS: Part 1 topology (deployed in PassAGE [27]).

In Part 1 of FaDIS, the authors added an ad hoc WSN layer that feeds sensed data to the same central terminal from which SSH commands are sent to the rover via WiFi. The core system architecture of the Part 1 proof-of-concept consisted of two main modules, "A" and "B", each built on an Arduino UNO MCU equipped with a corresponding XBee/ZigBee shield and an XBee Series 1 802.15.4 antenna. Both modules communicated with a central (desktop/laptop) computer/terminal via two XBee USB dongles equipped with corresponding XBee Series 1 802.15.4 antennae, one for each module. Each Arduino-mounted XBee antenna corresponded exclusively with its counterpart XBee USB dongle, although a single XBee USB dongle could have managed the inbound and outbound communication from both XBee antennae provided that they all shared the same PAN ID, and that a proper parser was in place to identify which data packet was correlated with which antenna. Since this possibility had no bearing on the objectives of the proof-of-concept, a one-to-one correspondence between an Arduino-mounted XBee antenna and an XBee USB dongle was retained for simplicity. Instructions for the laser diode mounted on Module A were sent from the central computer, which also received the LDR data from Module B.

² However, it is worth noting that the present work's implementation does not use a laser range finder as in Pyo et al. [3]. Instead, it uses low-cost lasers and a series of Light Dependent Resistors (LDRs).


All computation and decision-making was performed by the central computer, from the detection of unexpected objects via Modules A and B to the triggering of response and intervention systems via the TurtleBot rover, the automated fee-based SMS, and the free web-based SMS and email Internet of Things (IoT) systems.³ Modules A, B, and C, in conjunction with the central computer, form the fall-detection part of the system, while Modules D and E (detailed below), also in conjunction with the central terminal, form the intervention core. Modules C, D, and E are IoT systems, which means that their connectivity to and interactivity with the web are independent of the central computer's (see Figure 1). Modules A to E are based on independent and individual Arduino UNO MCUs because of the 32 KB ATmega328 flash memory limitation, which is not enough to house the program of two or more modules. The system architecture in Part 1 was appropriate for a Technology Readiness Level (TRL) 3-4 scaled proof-of-concept compatible with existing WSN/CPN-compatible and/or WSN/CPN-based AAL solutions.


The real-scale implementation in Part 2, however, considers matters of performance, efficiency, robustness, and scalability appropriate to a TRL 5-7 deployment that is an integral part of a larger decentralized WSN/CPN-based AAL solution. The following sections consider the differences between the implementations in Parts 1 and 2.


3.1 Presupposed Deployment Context and Approach


The implementation in Part 1 was developed as a system that could be added to any existing AAL solution with at least a WiFi-supporting framework and a central computer, two features prevalent in such AAL solutions. The central computer in Part 1 represented the presupposed AAL environment's central computer and decision-making terminal. This supposition renders the implementation in Part 1 highly compatible with, and easy to retrofit in, most (if not all) common existing AAL solutions. Indeed, and as mentioned in Section 1, Part 1's implementation was developed as an add-on subsystem to the PassAGE project [4] developed at TUM's BR2 Laboratory. In this respect, it may be said that Part 1's implementation adopted a top-down design approach, where the known features of the overall context provided the limitations and conditions that drove the design of the system's architecture. There are crucial advantages and disadvantages to such an approach. One key advantage is compatibility across different and potentially rudimentary AAL solutions. One key disadvantage, however, is that some existing solutions may not be very efficient or optimal, in terms of maintenance cost, performance, energy consumption, etc., and designing FaDIS to fit something less sophisticated and less cost-effective in the long run may be ill-advised and unjustified.

Figure 2. Diagram of overall concept in Part 2.

³ Such IoT systems are not discussed in this paper. For their detailed methodological development, see [30].


On the one hand, there is a point at which the intentional limitation of new technologies and methods in order to satisfy and/or accommodate the requirements of old technologies and methods becomes unreasonable and unjustifiable from any perspective, whether technological or economical. In the long run, this key disadvantage outweighs all the advantages of the implementation in Part 1. On the other hand, it is also unreasonable and unjustifiable to design a system intended for the general public using only sophisticated and state-of-the-art technologies and methods that completely ignore compatibility issues with older systems. There is, however, a middle way, where new systems retain compatibility with older systems while uncompromisingly showcasing technological sophistication. With this cross-compatibility desideratum in mind, a solution may be developed with a bottom-up design approach, which is what the implementation in Part 2 does. In this implementation, FaDIS is conceived as an integral subsystem among others in a larger decentralized system, each serving a particular programmatic function and context yet fully compatible with the others (both in terms of data exchange and physical interfacing) and with existing AAL technologies that make use of established communication and interfacing standards. For the purposes of the real-scale and fully operational implementation, this larger decentralized solution is instantiated in the schematic and detail development of the Living independently in Südtirol Alto-Adige through an integration of Habitat, Assistance, Bits and Technology (LISA Habitec) project (see [5, 21]) developed at TUM's BR2 Laboratory, within which FaDIS is implemented as an integral part.


3.2 Proposed System Architecture—Hardware


The implementations in Part 1 and Part 2 use the same sensor components as well as the same intervention IoT mechanisms. However, due to Part 2’s higher TRL implementation, it replaces the following devices and modifies corresponding configurations:


Development Platform: The system topology in Part 1 (see Figure 1) consisted of a central computer serving as the parent node of hierarchically equal children nodes based on Arduino UNO MCUs. In Part 2, the central computer has been eliminated as the parent node, and its coordination and computation duties have been delegated to a BeagleBone Black (BBB) development platform [31] (in effect a mini-computer) that also replaces, physically and functionally, Modules C and E. That is to say, the BBB system-node, the system's sink node, is able to communicate and coordinate with Modules A and B wirelessly, as well as to log data directly to Plotly (subsuming Module C) and to send web-based notifications (subsuming Module E). Module A retains its Arduino UNO MCU-based functions, and Module B, while likewise retaining its functions, replaces Module D; that is to say, the same module that registers if and when the laser diodes from Module A have struck any LDRs is also able to send fee-based SMS emergency notifications via the Siemens TC35 shield (subsuming Module D).


Modules A and B cannot be subsumed by the BBB system-node. If such were the case, Module A's laser diodes and Module B's LDRs would have to be physically connected to the BBB system-node, which would be impractical due to cable-length limitations and latency issues if FaDIS were to be installed across a large space. These issues are avoided by keeping them as separate nodes communicating wirelessly. Furthermore, the functions of Modules D and E were subsumed by Module B and the BBB system-node, respectively, in order to avoid having to depend solely on the BBB system-node for emergency communication with the external world. Under normal operation, Module A reports its detection data to the BBB system-node, which, in turn, is able to beckon the TurtleBot rover for immediate verification and limited intervention while also potentially sending web-based SMS and email notifications. If, however, the BBB system-node malfunctions and is no longer capable of accessing the WWW, Module A then finds another BBB system-node (from the Kitchen or the Bedroom systems, perhaps) and notifies it that its central node is not operational. Any of the other BBB system-nodes will then automatically take over the duties and responsibilities of FaDIS's BBB system-node, which is possible because all nodes installed with an XBee Pro Series 2B antenna and operating under the ZigBee protocol belong to a networked, self-healing mesh. In the highly unlikely event that all BBB system-nodes in LISA should fail, and that the LISA environment should lose its ability to access the WWW completely (whether due to problems with the local router or with the corresponding ISP), Module B's optional cellular capabilities still permit FaDIS to communicate emergency events across the globe (provided the pertinent SIM card is active). Under this topology, FaDIS has two completely independent means of communicating emergency events; a minimal sketch of this escalation logic is given after item c) below.

Figure 3. FaDIS: Part 2 topology (deployed in LISA [5]).


a) Communication Devices: Both Parts 1 and 2 require WiFi in order to integrate into a WiFi-based Structured Environment upon which robotic agents operate. However, using WiFi as the communication protocol between nodes (whether between a system-node and a child node, between child nodes, or between system-nodes) would be highly inefficient in terms of energy consumption relative to task performance. Alternatively, and as has been previously argued, the XBee platform and communication protocol provide an adequate solution due to their low energy consumption yet robust connectivity. In Part 1, XBee Series 1 antennae were used, whereas in Part 2 XBee Pro Series 2B antennae are used. The Pro Series 2B antennae run ZigBee mesh firmware, which gives the devices "the lowest current draw of any Digi RF product" [32]. Furthermore, although both the Series 1 and the Pro Series 2B are capable of forming and sustaining Point-to-point, Star, and Mesh topologies⁴, the Pro Series 2B has greater range and Receiver Sensitivity [32].

⁴ Since the Series 1 antennae are preinstalled with 802.15.4 firmware, which only permits Point-to-point and Star topologies, they must be flashed with Digi International®'s proprietary DigiMesh firmware before enjoying Mesh capabilities [32].


b) Actuator Device: In Part 1, an MG90 micro-servo motor was used to rotate the laser in Module A; in spite of its inherent, unavoidable, and unpredictable imprecision, it performed adequately in the context of a scaled model, where scaled-down distances rendered the motor's loss or gain of one degree of rotation negligible. In Part 2, however, a 28BYJ-48 stepper motor is used, whose very architecture enables greater reliability while still remaining within the low-cost category. Furthermore, the stepper motor operates at lower decibels than the micro-servo motor, a subtle yet important factor that contributes to user acceptance and seamless lifestyle integration.


c) Laser diode: In Part 1, a 650 nm, 30 mA Keyes-008 laser diode, which is a de facto Class 1 laser, was used. In Part 2, however, a more powerful 532 nm, 350 mA, 5 mW Class 3R green dot laser was used in one prototype, and two 650 nm, 30 mA, 5 mW Class 2M red 10-degree line lasers in another.
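The following minimal sketch condenses the notification escalation described under the Development Platform item above (local BBB system-node first, any other BBB system-node in the ZigBee mesh second, Module B's cellular channel last). Every object, method, and message name in it is a hypothetical placeholder rather than part of the deployed code.

```python
# Hedged sketch of the emergency-notification escalation; all names are hypothetical.

def notify_emergency(event, local_coordinator, other_coordinators, module_b):
    """Try the local BBB system-node first, then any other BBB system-node in the
    self-healing ZigBee mesh, and fall back to Module B's cellular channel last."""
    for node in [local_coordinator] + other_coordinators:
        if node.is_reachable() and node.has_internet():
            node.send_web_notifications(event)        # free web-based SMS / e-mail (IoT)
            node.dispatch_turtlebot(event.location)   # local robotic verification
            return "web"
    # No coordinator (or no WWW uplink) available: use the GSM shield on Module B.
    module_b.send_sms(event.summary)                  # fee-based SMS via Siemens TC35
    return "cellular"
```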


3.3 System Architecture—Software


Part 1 was implemented with fee-based software (i.e., Rhinoceros 5.0® and its free plug-ins Grasshopper, Firefly, and gHowl) running on Microsoft Windows®. The software was installed on the central computer; i.e., Part 1's parent node controlled the I/O functions of Modules A and B remotely from Grasshopper via XBee Series 1 antennae operating under the 802.15.4 protocol, and commanded the execution of IoT Modules C, D, and E. In order for Grasshopper to be able to communicate with the Arduino UNO MCU-based Modules A and B, the MCUs first had to be flashed (i.e., uploaded) with Firefly's proprietary "Firefly_Firmata" [33], which provided the necessary instructions in the Arduino IDE language for Grasshopper (and Firefly, more specifically) to control the MCUs remotely. The IoT Modules C, D, and E were flashed with their respective Arduino IDE "sketches" (i.e., programs) and required no further user interaction, being commanded directly by the central computer. Generally, whenever a change in function needs to be adopted by an Arduino UNO MCU, the new code must be uploaded to the board's chip. "Firefly_Firmata", however, needed to be uploaded only once to Modules A and B, and afterwards functional modifications could be effected via Grasshopper in near real-time without further uploads. With respect to IoT Modules C, D, and E, their code required no further change after the first upload, as their functions remained unchanged throughout execution.


Controlling Modules A and B via Grasshopper was particularly beneficial at early design stages, when the functions of each module were still to be defined with specificity. However, once the functions were clearly defined and the variables detailed (in number and in type), the only conditions subject to change were those related to the values assigned to variables, which do not require a fresh upload of the code at each change in value as long as the logical structure remains the same (such was the case with Modules C, D, and E, which operated independently of Grasshopper). At this stage, executing a particular set of functions directly from the MCU is more efficient (in terms of performance and energy consumption) than running an equivalent set indirectly via Grasshopper. Another considerable drawback of Part 1's software architecture with respect to a higher TRL implementation was that the central computer needed to run this energy- and resource-consuming third-party software permanently in order for the system to work. Without it, all Modules would remain active but inoperable. Furthermore, if and when the system detected a collapsed object, it would send SSH commands to PassAGE's TurtleBot rover's on-board computer, which runs ROS on Linux.


In Part 2, the BBB system-node's 4 GB memory contains the Python equivalent of Plotly's proprietary API (replacing Module C) in addition to self-written code that replaces Grasshopper's coordinating duties. Module A is flashed with a self-written program that enables it to remain on stand-by while waiting for Module B's operational status, and that tasks it with sending fall-detection data to the Coordinator. Similarly, Module B has been flashed with a self-written program that enables it to report to Module A when an LDR has registered an instance of the laser. The communication between the nodes is direct and independent via ZigBee, and the Coordinator's communication with the TurtleBot rover would take place via WiFi from one Linux environment to another.
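As an illustration of the Coordinator side of this arrangement, the following sketch shows a receive loop on the BBB system-node built on the open-source python-xbee and pyserial packages. The serial port, baud rate, and the textual payload format exchanged with Modules A and B are assumptions made for the example, not the deployed code.

```python
# Sketch of the Coordinator's ZigBee receive loop on the BBB system-node.
# Assumes the python-xbee and pyserial packages; port, baud rate and the
# payload format (e.g. b"HIT,3,1270") are illustrative assumptions.
import serial
from xbee import ZigBee

ser = serial.Serial('/dev/ttyO1', 9600)      # UART exposed by the XBee cape (assumed)
zb = ZigBee(ser)                             # Coordinator antenna in API mode

def handle_payload(payload):
    # e.g. b"HIT,3,1270" could mean: LDR index 3 registered the laser at motor step 1270
    print('received:', payload)

try:
    while True:
        frame = zb.wait_read_frame()         # blocks until a ZigBee frame arrives
        if frame.get('id') == 'rx':          # data frame relayed from Module A
            handle_payload(frame['rf_data'])
            # ...forward to Plotly and/or trigger intervention routines here...
finally:
    ser.close()
```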


3.4 Proposed Communication Protocols


The system architecture of FaDIS in Part 1 took the form of a multipoint or star topology (see Figure 1), where a single parent node (i.e., the central computer) controlled six children nodes (i.e., Modules A, B, C, D, E, and the TurtleBot rover). In Part 2, however, FaDIS takes the form of a mesh topology, where a single parent node, i.e., the BBB system-node, coordinates three children nodes (i.e., Module A, Module B subsuming D, and the TurtleBot rover). This is made possible by replacing the XBee 802.15.4 protocol with the ZigBee protocol via XBee Pro Series 2B antennae. With ZigBee, the BBB system-node's antenna is assigned the role of Coordinator, while Module A's is assigned that of Router and Module B's that of End Device (see Figure 3). The ZigBee protocol should not be confused with the DigiMesh protocol, even though both are mesh-ready protocols that enable network self-healing capabilities. An important distinction is that the ZigBee protocol retains the parent-child node hierarchy, while under DigiMesh all nodes are equal.


4 Methodology and Implementation

4.1 Step 1: BeagleBone Black Revision C Initial Configuration and Setup


Since the BeagleBone Black Revision C is an affordable yet robust electronics and mechatronics "development platform" [31], it serves as a viable alternative for the system-node of all systems deployed across the programmatic functions in LISA, of which FaDIS is one. FaDIS operates in a context where all system-nodes are able to communicate with each other and with their corresponding assets and/or subsystems via ZigBee and/or WiFi. In the configuration process, WiFi comes before ZigBee, as each BBB system-node must be given access to the WWW for system updates as well as to fetch the pertinent libraries, packages, and APIs necessary for FaDIS's functionality. Since the BBB system-node is tasked with streaming fall-detection data to Plotly in real time, the appropriate API library must be downloaded and the credentials configured. Depending on the number of variables and/or data categories, the number of necessary stream ids may vary. In the present case, two data streams are required: one to plot the positions of the LDRs, and another to mark the intersection points created by the collapsed object's blocking of the corresponding direct and indirect laser-to-LDR lines of sight.
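A minimal sketch of this set-up, written against Plotly's (now legacy) streaming API, is given below. The username, API key, stream tokens, and filename are placeholders; only the use of two separate streams (one for LDR positions, one for detected intersections) reflects the text.

```python
# Sketch of the two-stream Plotly set-up on the BBB system-node, using the
# legacy plotly streaming API; credentials, tokens and filename are placeholders.
import plotly.plotly as py
from plotly.graph_objs import Scatter, Stream, Data, Figure

py.sign_in('username', 'api_key')                       # configured once on the BBB

ldr_stream_id = 'token-for-ldr-positions'               # one stream id per dataset
hit_stream_id = 'token-for-detected-intersections'

ldr_trace = Scatter(x=[], y=[], mode='markers', name='LDR positions',
                    stream=Stream(token=ldr_stream_id))
hit_trace = Scatter(x=[], y=[], mode='markers', name='Detected intersections',
                    stream=Stream(token=hit_stream_id))
py.plot(Figure(data=Data([ldr_trace, hit_trace])),
        filename='FaDIS-bathroom', auto_open=False)

ldr_stream = py.Stream(ldr_stream_id)
hit_stream = py.Stream(hit_stream_id)
ldr_stream.open()
hit_stream.open()

# During operation the Coordinator writes points to the matching stream, e.g.:
ldr_stream.write(dict(x=0.35, y=3.10))                  # an extrapolated LDR position
hit_stream.write(dict(x=1.10, y=1.45))                  # an instantiated intersection
```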


4.2 Step 2: Creating a network mesh of BeagleBone Black Nodes


In the context of LISA, each program (e.g., Kitchen, Bathroom, Bedroom, etc.) is equipped with an ADL-assisting system. At the core of every system lies a BBB system-node as the sink node. FaDIS is one such system, deployed in the Bathroom. Although these are separate systems with respective assets and services, LISA is envisioned in such a way as to enable integrity, consistency, and robustness through resilience. That is to say, if any system-node should fail or malfunction, any other system-node has the potential to take over the roles and responsibilities of the faulty node. These features enable the LISA system to be a self-healing and highly adaptive decentralized system networked via the ZigBee protocol, which are important features in affordable and accessible AAL systems. LogicSupply's BeagleBone ZigBee + XBee Capes (see Figure 4) were used to install the XBee Pro Series 2B antennae on the BBB system-nodes.


As shown in Figure 3, the Part 2 implementation of FaDIS has one BBB system-node, two Arduino UNO MCU children nodes, and the TurtleBot rover. However, only the BBB system-node and the Arduino UNO MCU children nodes are equipped with XBee Pro Series 2B antennae. Accordingly, a typical hierarchy is established: the role of Coordinator is assigned to the BBB system-node, that of Router to Module A, and that of End Device to Module B. The present system architecture deploys a mesh network featuring each kind of capacity (Coordinator, Router, End Device) as a working template for a larger heterogeneous mesh. Under the present architecture, Module B communicates with Module A, which in turn communicates with the Coordinator. It is important to note, however, that while Module B does not communicate directly with the Coordinator, it retains the potential to do so. In other words, even though the network may have a mesh topology, it does not necessarily follow that every node communicates with every other node, but rather that every node has the capacity and potential to do so, regardless of whether it is configured as Coordinator, Router, or End Device.

Figure 4. Coordinator BBB with ZigBee + XBee Cape and Series Pro 2B antenna.


In order to test the proper configuration of the antennae and the successful installation of the XBee libraries, the BBB equipped with the Coordinator antenna is also wired with an infrared sensor; the BBB with the Router antenna is also wired with a temperature and humidity sensor; and the BBB with the End Device antenna is also wired with a light dependent resistor sensor. In this brief test, each BBB will be both (1) broadcasting its wired sensor data to the other BBBs, and (2) receiving the sensor data from the other BBBs. Each BBB then streams its local and received data to Plotly (see Figure 5). What is particularly significant about this test is that each node gathers its local sensor data simultaneously with that of two other remote sensors yet streams all three at the same time to Plotly.

Figure 5. Three BBB nodes broadcasting and receiving from each other and independently streaming both local and received data to Plotly.


4.3 Step 3: Setting up the deployment context: BR2’s Bathroom


In Part 2, FaDIS is deployed in the bathroom of the real-scale apartment in BR2's laboratory. Its dimensions are 1.95 meters in length and 3.1 meters in width⁵. Given the size of the room, six LDRs are used in order to create a sufficiently detailed grid of theoretical intersections for the detection of objects above a certain size (see Figure 6 Left). There is a point beyond which a higher resolution based on a larger number of LDRs does not justify the resulting slower scan cycle. For example, if twelve LDRs were used instead of six, the scan time would double, yet it is doubtful whether such a resolution would detect objects that the variant with only six LDRs would not. Indeed, as resolution increases within a fixed area, the system is able to detect increasingly small objects. But objects below a certain size are negligible, as the probability of their being a collapsed person becomes null below a certain threshold. One may imagine a deployment with thousands of LDRs within the specified bathroom dimensions, where the theoretical intersections are only millimeters apart. In such an exaggerated scenario, a collapsed person might lie undetected for longer than anticipated over the duration of a single laser scan cycle. There must therefore be a reasoned and calculated compromise between resolution and speed. This compromise needs to be decided on a case-by-case basis, depending on both the room area and the size thresholds that the user wishes to scan for. At the present resolution, the system is configured to ignore instances of single confirmed intersections resulting from direct-with-indirect and indirect-with-indirect lines of sight. That is to say, imagine that in a scan cycle the system detects that the direct line of sight between the origin of the laser and LDR2 is blocked (i.e., the laser fails to register at LDR2's position), and that the indirect, or reflected, line of sight between the origin of the laser and LDR1 is also blocked, i.e., the reflected laser fails to register at LDR1's position (see Figure 6 Center). Such an instance of a single detected intersection would mean that some object occupies the intersected space, but that this object is not large enough to block nearby LDR lines of sight and may be assumed to be too small to be a collapsed person. Perhaps it is the foot of a person stepping into the shower. For the system to trigger intervention mechanisms as a result would be more a nuisance to the user than an actual intervention. If an average adult collapsed around the center area of the bathroom, approximately eleven intersections would be identified (see Figure 6 Right). As may be observed, not all identified intersections would be caused by the presence of a physical object; some, primarily around the periphery, would be caused by line-of-sight occlusions.

⁵ I.e., for the present purposes, the width is the horizontal side and the length the vertical.


Figure 6. Left: Real-scale Bathroom context with theoretical intersection points created with six LDRs. Center: Bathroom context with one detected intersection. Right: Bathroom context with category four blockage (8-12 instantiated intersections).
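The blockage categories referred to in Figure 6 can be illustrated with a simple classification by intersection count. In the sketch below, only the rule of ignoring single confirmed intersections and the category-four band of 8-12 instantiated intersections come from the text and Figure 6; the remaining category boundaries are assumptions included purely for illustration.

```python
# Illustrative classification of a scan cycle by number of instantiated intersections.
# Only the single-intersection rule and the category-four band (8-12, Figure 6 Right)
# follow the paper; the other boundaries are assumed for illustration.

def classify_blockage(n_intersections):
    if n_intersections == 0:
        return 0, 'NO OBJECTS DETECTED'
    if n_intersections == 1:
        return 1, 'Single intersection: object too small, ignored'    # e.g. a foot
    if n_intersections <= 4:           # assumed band
        return 2, 'Small object detected'
    if n_intersections <= 7:           # assumed band
        return 3, 'Medium object detected'
    if n_intersections <= 12:          # 8-12: consistent with a collapsed adult
        return 4, 'Probable collapsed person: trigger intervention'
    return 5, 'Large blockage: trigger intervention'
```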


4.4 Step 4: Design and Development of Laser-localization Module A


The development of the laser module in Part 2 required a deliberate and carefully considered design approach, as opposed to the ad hoc character of Part 1's module. In Part 1, the low-cost laser was attached by wires to the servo motor, which, in turn, was embedded into one of the short sides of the scaled model by carving out a space for it in the foamcore panel. In Part 2, two custom-designed and 3D-printed PVC containment modules for the laser(s), stepper motor, and power supply were produced to precise specifications. The laser module prototype (see Figure 7) was developed with two Class 2M, 10-degree line lasers.


Each laser was rotated ninety degrees with respect to the horizon in order to create two slightly overlapping parallel vertical lines of roughly twenty to twenty-five centimeters in height. This height served as a generous tolerance margin accounting for slight floor inclinations, stepper-motor rotational deviations, and a slightly skewed field of LDRs.


The lasers were held in place by an enclosure which itself is fixed to the stepper motor on one end and to a fixed ball bearing on the other. This enables the enclosure to rotate consistently parallel to the horizon. As previously mentioned in Section 4.1, FaDIS Part 2 is a self-calibrating system that finds and graphically maps the LDRs in its field of sight. Given the length, the width, which LDR is positioned at the upper-left corner, and the distance between the last LDR and the upper-right corner, FaDIS is able to ascertain the extents of its deployment environment and map them in Plotly. FaDIS does this by counting its steps and recording the step at which each LDR was found. In order to do this, however, a "zero" reference position, i.e., a point from which to start counting, must be defined. Since the stepper motor has no constant "zero" position, being a 360-degree motor, an LDR mounted on the laser component was used to define such a position (see Figure 12). Furthermore, the present laser component was designed to be more flexible and versatile than dot laser counterparts. This flexibility is necessary when the system is intended to be installable in existing architectures that may not always provide high-precision tolerance levels. The component is able to work in environments with considerable inclinations and/or deformations while subsuming the features of dot laser variations.

Figure 7. Top: Module A's laser component, isometric drawing. Bottom: 3D-printed result.


Module A's MCU node (see Figure 8) remains physically the same as in Part 1 except for the removal of unnecessary optional sensor components. That is to say, the node remains based on an Arduino UNO MCU fitted with an XBee/ZigBee shield and a corresponding XBee antenna, except that the present antenna is a Pro Series 2B. As mentioned in Section 4.1, a variety of low-cost sensor components may be added to both Modules A and B.

Figure 8. Part 2’s Module A’s MCU node proper.


4.5 Step 5: Design and Development of Laser-localization Module B


Like Module A, Module B retains an arrangement very similar to that of Part 1. A notable difference is that the optional components previously present in Part 1's Module A are now assigned to Module B (see Figure 9 Left).


A considerable development from Part 1 to Part 2 with respect to Module B is the redesigned and redeveloped field of LDRs (see Figure 10 Top Left). The new field consists of fourteen LDRs embedded in an aluminum profile, forming a unified frame (see Figure 10 Top Right), whose cables are unified into a single output (see Figure 9 Right). On the right side of the aluminum frame sits a series of conventional mirrors used to reflect the lasers (see Figure 10 Bottom). Furthermore, since the LDR is quite small, clear acrylic was placed over every LDR to provide a surface against which the lasers' beam would diffuse, which degraded the laser light to a degree but also increased the probability of registering a reading⁶. As discussed in Section 4.3, FaDIS presently requires only six of the available LDRs to perform efficiently and accurately.

Figure 9. Left: Part 2's Module B's MCU node proper. Right: Module B's connection to the field of LDRs.

Figure 10. Top Left: Field of LDRs deployed in BR2’s real-scale model Bathroom. Top Right: Close-up of LDR in field of LDRs deployed in BR2’s real-scale model Bathroom. Bottom: Close-up of mirror-wall in field of LDRs deployed in BR2’s real-scale model Bathroom.


Module B continuously reads from each of the LDRs along the aluminum frame in order to compute a unified average against which individual readings are set, so as to ascertain a quotient factor to compare against the threshold constant. This prevents the light-sensing module from sending laser-detection event notifications every time the bathroom lights are turned on, as the light-sensing readings of each individual LDR would not then differ significantly from the unified average. In order to trigger a laser-detection notification event, an individual light-sensing reading has to differ considerably from the unified average. The limit of this difference margin is the laser-detection threshold constant, which serves as a de facto regulator of the light-sensing component's sensitivity. If the quotient corresponding to a particular LDR exceeds the threshold, Module B triggers an automatic ZigBee notification to Module A. Ideally, the threshold constant would be adjusted automatically and continuously to keep the detection threshold at appropriate levels: if it is set too high, the lasers will not be detected; if too low, a variety of other bright lights may trigger false positives. At present, the threshold constant used was gauged via experimentation and test runs. However, and as mentioned in Section 5.5, future developments of FaDIS may include an automatically self-regulating threshold based on real-time lux measurements of the space in question. Alternatively, it would be possible to eliminate the need for the threshold constant altogether if the LDRs were fitted with optical and/or bandpass filters that favor the lasers' wavelength while attenuating or blocking all others.

⁶ With respect to the laser readings in direct line of sight, this acrylic piece did not degrade the laser enough to compromise performance. However, laser readings in indirect line of sight simultaneously benefited and suffered more than their direct line of sight counterparts. This is why a better solution is discussed in Section 5.5.


The following figures (see Figure 11) are the actual sequential individual readings from LDRs 0 to 5. It is evident that the lasers’ intensity is many orders of magnitude higher than that of the average room illumination. But it is also evident that if the room illumination increased uniformly, the lasers’ signature would be less pronounced, which would make it increasingly difficult to discern the lasers’ signature “spike” amidst the ambient “noise”.


Figure 11. Readings for LDRs per “direct line of sight” scan cycle (24,850 readings). From left to right, top to bottom: LDR0, LDR1, LDR2, LDR3, LDR4, LDR5, and the Combined average reading for all six LDRs in “direct line of sight” scan cycle.


Figure 11 (Bottom Right) shows the effect of the individual readings above the combined average. It may be observed that the overall average dampens the intensity registered by each individual LDR, which is partly why it becomes more difficult to define a robust mechanism that is simultaneously sensitive enough to detect even the faintest presence of lasers yet discriminating enough to disregard all other light signatures.


4.6 Step 6: Description of Initialization, Calibration, and Detection Routines

4.6.1 Initialization (Routine 0-0)

During Module A's initialization, before confirming whether Module B is operational, the stepper motor and both active lasers begin to search for the reference LDR embedded in the laser module's 3D-printed encasing (see Figure 12).


The reference LDR is found when it recognizes the laser's intensity signature upon it. In order for this recognition to happen, the reference LDR first needs to compute an ambient illumination average against which to gauge sudden increases or decreases of light intensity. The ambient average is first computed by averaging ten-thousand consecutive analogue readings of the reference LDR under ambient illumination. It is presupposed that, since the rotational position of the motor at power-on is unpredictable, the probability of the motor not being in the reference LDR position at the onset is greater than that of it being within detectable rotational distance of the reference LDR. Consider that, out of the 4096 possible rotational positions that the motor may find itself in at power-on, only about ten would be within the line of sight of the reference LDR. Yet it is possible that, during one of many executions, the start-up position of the motor should happen to be such that the lasers are shining directly on the reference LDR. The resulting ambient illumination average would not be representative of the actual ambient illumination, and so the motor would continue rotating, looking for an intensity reading higher than the unusually high ambient illumination average it had just gathered. Fortunately, a time-out contingency resets the ambient illumination average in such a case.

Figure 12. Module A's Reference LDR.

Figure 13. Left: Consecutive reference LDR readings vs. ambient illumination average until laser signature is recognized, 1st instance. Right: Direct hit of laser on zero-reference LDR, indicating position “zero” has been found.


4.6.2 Definition of Position "Zero" (Routine 0-1)

Having defined an ambient illumination average, Routine 0-1 identifies the reference point/step against which the position of all LDRs is first identified and then ascertained. Since this is a cost-effective system, the motor used to rotate the lasers is subject to occasional step losses. While these losses are minor in themselves, their aggregated total significantly compromises the precision of the system, especially considering that each LDR is associated with a step number that is used to return to it in subsequent scan cycles. In order to mitigate the extent of this loss, the step count is reset after every scan cycle, ensuring that whatever losses accrued in one cycle do not carry over to the next. Since the motor in question is able to rotate continuously, there is no original position from which it begins to rotate. Therefore, in order to keep a consistent step count with respect to previous scan cycles, the system is tasked with finding a so-called "position zero" step from which to begin the count (see Figure 13 Left). Storing a particular step for this "position zero" would defeat its purpose, as one particular step value may land in one position in one scan and in another in the next. Accordingly, this "position zero" is identified via means independent of information gathered in previous scan cycles, i.e., via the same mechanisms used by the light-sensing component of the system. Having completed a scan cycle, the system tasks the motor to rotate until a reference LDR attached at what becomes "position zero" detects the laser's intensity signature. As stated in Section 5.5, this routine performed with a perfect success rate, invariably setting the same position (within a tolerable error margin) as the original point of departure. In the highly unlikely event considered in the previous section, where the initial ten-thousand readings happen to consist primarily of laser readings, the subsequent reference LDR readings would never meet nor exceed the threshold. Accordingly, the timeout mechanism would reset the defined average.
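Routines 0-0 and 0-1 can be condensed into the following sketch, again in Python for readability rather than as Module A's actual Arduino code. The read_reference_ldr and step_motor helpers, the detection margin, and the timeout value are assumptions, while the ten-thousand-sample ambient average, the 4096-step revolution, and the timeout-driven reset of the average follow the text.

```python
# Condensed sketch of Routines 0-0 and 0-1 (illustrative Python, not Module A's
# Arduino code); read_reference_ldr(), step_motor(), margin and timeout_s are assumed.
import time

def ambient_average(read_reference_ldr, samples=10000):
    """Average ten-thousand consecutive readings of the reference LDR (Routine 0-0)."""
    return sum(read_reference_ldr() for _ in range(samples)) / float(samples)

def find_position_zero(read_reference_ldr, step_motor, margin=3.0, timeout_s=300):
    """Rotate until the reference LDR registers the laser's intensity signature
    (Routine 0-1); on timeout, re-compute the ambient average, since the initial
    baseline may have been polluted by the laser shining on the reference LDR."""
    average = ambient_average(read_reference_ldr)
    started = time.time()
    step = 0
    while True:
        step_motor(1)                          # one step clockwise
        step = (step + 1) % 4096               # 4096 steps per full revolution
        if read_reference_ldr() > margin * average:
            return step                        # laser signature found: this is "zero"
        if time.time() - started > timeout_s:  # contingency: reset the baseline
            average = ambient_average(read_reference_ldr)
            started = time.time()
```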


4.6.3 Operation Confirmation (Routine 1)

By this point both the ambient illumination average, which will be used again to redefine position "zero" in subsequent scan cycles, and position "zero" itself have been found. The motor now remains at position "zero" while Module A sends a status confirmation message via ZigBee to Module B. If Module B is operational, it will return a TX confirmation. Without this confirmation, Module A's motor will remain fixed at position "zero", continuously sending its operational status to Module B until it receives confirmation. This step is particularly important since it is not necessarily the case that both modules are activated at the same time. At times Module A will run before Module B and vice versa, and at other times either module may drop out due to an unexpected loss of power or other reasons. If Module B happens to drop out in the middle of operation, Module A will notice, at the latest, when the five-minute timeout contingency measure is executed. If successful performance is in doubt at any moment in the self-calibration and/or detection process, Module A will default back to stand-by mode, waiting for Module B's operational confirmation (see Figure 17 for an operational flowchart covering all runtime parts and corresponding contingency measures).
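A minimal sketch of this handshake is given below; zigbee_send, zigbee_receive, and the message strings are hypothetical placeholders for the ZigBee exchange described above.

```python
# Sketch of the operation-confirmation handshake (Routine 1); zigbee_send(),
# zigbee_receive() and the message strings are illustrative placeholders.

def wait_for_module_b(zigbee_send, zigbee_receive, retry_s=5):
    """Module A stays at position "zero" and keeps announcing its status until
    Module B returns a confirmation, after which calibration (Routine 2-0) starts."""
    while True:
        zigbee_send('MODULE_A_READY')
        reply = zigbee_receive(timeout=retry_s)
        if reply == 'MODULE_B_READY':
            return True
```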


4.6.4 Calibration, Direct Lines of Sight (Routine 2-0)

When the reference LDR has been found and Modules A and B have confirmed their operational status to each other, the first part of the self-calibration begins. In this part of the self-calibration process, Module A's motor rotates its lasers clockwise across Module B's field of LDRs at one hundred steps per second⁷. As soon as an LDR registers the presence of the lasers, Module B sends a ZigBee message to Module A confirming a "hit". The particular step-count of the motor at the instant of laser detection is stored sequentially in an array the size of the total number of LDRs (i.e., six in the present case; see Figure 14). Module A stores the step-count at the first instance of detection, and ignores incoming notifications about positioning if a step-count value has already been stored for that particular LDR. If the step-count value were permitted to be updated by every instance of a hit confirmation by Module B, then the final step-count position of each LDR would be the last position at which the lasers were still detected, i.e., the right-most edge of the LDR (with respect to a viewer facing the given LDR throughout rotation). This is not optimal, since afterwards, when the motor is instructed to go back to a particular step associated with a given LDR, it may go to the edge and, if a plus/minus stepOffset margin had not been set to account for lost steps, perhaps miss it altogether.

⁷ Although this is the nominal speed, the actual number of steps taken in a second may vary when considering contingent factors such as lost steps, etc. On average, however, speed and acceleration are steady and predictable.


Figure 14. Self-calibration part 1, scanning for LDRs in direct line of sight (illustration), in sequence from LDR0 to LDR5.


This will loop until all LDRs are found in the lasers’ direct line of sight. In the event that more than five minutes have elapsed from the beginning of the calibration sequence, Module A will assume that there was a problem and default back to Routine 1, where the motor and lasers will return to position “zero” and attempt to “handshake” again with Module B.
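The first calibration pass may be sketched as follows; the polling helper, constants, and simulated hit positions are illustrative assumptions, not the actual Module A code:

import time

NUM_LDRS = 6
STEPS_PER_SECOND = 100
TIMEOUT_S = 300  # five-minute contingency

def poll_hit_from_module_b(step):
    # Stand-in for a ZigBee "hit" notification: pretend LDR i is first seen near step 100*(i+1).
    for i in range(NUM_LDRS):
        if abs(step - 100 * (i + 1)) <= 2:
            return i
    return None

def calibrate_direct_lines_of_sight():
    positions = [None] * NUM_LDRS
    start, step = time.time(), 0
    while any(p is None for p in positions):
        if time.time() - start > TIMEOUT_S:
            return None  # fall back to Routine 1: re-confirm Module B, reset, and retry
        hit = poll_hit_from_module_b(step)
        if hit is not None and positions[hit] is None:
            # Store only the first confirmation, i.e., the step at which the lasers
            # enter the LDR's sensing surface, not the step at which they exit.
            positions[hit] = step
        step += 1
        time.sleep(1.0 / STEPS_PER_SECOND)
    return positions

print(calibrate_direct_lines_of_sight())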


4.6.5 Calibration, Indirect Lines of Sight (Routine 2-1)


The second part of the self-calibration routine is exactly the same as the first part, but at half the speed and with the step-position values stored in a separate array. The two are kept as separate routines to optimize performance and facilitate resilience. By way of explanation, suppose that the two routines were indeed combined into one, even though it is evident that the probabilities of scan failure differ between scanning in direct and in indirect lines of sight. Suppose that as the routine is executed, the LDRs are completely registered by the laser in direct line of sight, but that an error occurs when the laser tries to register LDRs in indirect line of sight. This is a very realistic scenario, since the quality of the laser is degraded by the mirror and thus the probabilities of scan failure increase. If the two routines were one, the failure to register a given LDR in indirect line of sight would eventually trigger a five-minute timeout contingency that would default to Routine 1 (i.e., ascertain operational status from Module B) to make sure that the problem is not caused by a drop in communication and/or operation in Module B.

Figure 15. Self-calibration part 2, scanning for LDRs in indirect line of sight (illustration).


After finding all LDR step-positions with respect to both direct and indirect lines of sight, Routine 2-1 ends by instructing the motor to find position “zero” again, so that all the inaccuracies aggregated in the present routine may be neutralized.



4.6.6 Translation and Mapping (Routine 3)


By this point two arrays, one corresponding to LDR step-positions with respect to direct lines of sight and another with respect to indirect lines of sight, have stored the values necessary for the present routine to compute the equivalent and corresponding XY-coordinates. Once this is done, Routine 3 sends the motor and laser back to position "zero" (see Figure 13) and then sends these coordinates via ZigBee to the Coordinator, the BBB system-node, which subsequently plots the coordinates automatically in Plotly (see Figure 16 Right). But between the computation of the XY-coordinates and the plotting of the corresponding spatial extents in Plotly, the XY-coordinates must be adjusted to compensate for inevitable mechanical inaccuracies.


Figure 16. Left: Actual LDR positions in the LDR field. Right: Plotly mapping of LDR positions computed by Module A's step-to-XY-coordinate calculation (in cm).
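A hedged sketch of the step-to-XY translation follows. The geometry is an assumption made for illustration: the laser pivot is placed at the origin of a 3.1 m by 1.95 m field (cf. Section 5.5), STEPS_PER_REV is a placeholder for the actual stepper resolution, and the LDRs are taken to lie on the field's boundary.

import math

STEPS_PER_REV = 4096           # assumed half-steps per full revolution
FIELD_W, FIELD_H = 3.10, 1.95  # metres

def step_to_xy(step):
    # Cast a ray from the origin at the step's angle and return the point where it
    # meets the rectangular boundary on which the LDRs are mounted.
    theta = 2 * math.pi * step / STEPS_PER_REV
    dx, dy = math.cos(theta), math.sin(theta)
    candidates = []
    if dx > 1e-9:
        y = (FIELD_W / dx) * dy
        if 0 <= y <= FIELD_H:
            candidates.append((FIELD_W, y))
    if dy > 1e-9:
        x = (FIELD_H / dy) * dx
        if 0 <= x <= FIELD_W:
            candidates.append((x, FIELD_H))
    return candidates[0] if candidates else None

# Example: an LDR first detected at step 400 maps to a boundary coordinate in metres.
print(step_to_xy(400))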



4.6.7 Detection and Mapping (Routine 4)


In this process the laser is guided sequentially from LDR to LDR, from direct line of sight to indirect line of sight. As mentioned in Section 4.2, in order to plot different categories or datasets, different Plotly streams must be assigned to each category or dataset. In Routine 3 a stream was used strictly to plot the XY-coordinates corresponding to the position of each LDR. If the same stream were used in Routine 4, the incoming XY-coordinates would be misconstrued as additional LDRs. Hence a different stream is used to upload the XY-coordinates of detected intersections. Towards the end of Routine 4, the system triggers a different output depending on the number of blockage intersections detected. Naturally, if no intersections are found, the system outputs: NO OBJECTS DETECTED. Otherwise:

• If only one intersection is detected: VERY SMALL OBJECT DETECTED. VERY LOW PROBABILITY OF BEING A COLLAPSED PERSON.

• If more than one but less than or equal to four intersections are detected: SMALL OBJECT DETECTED. LOW PROBABILITY OF BEING A COLLAPSED PERSON.

• If more than four but less than or equal to seven intersections are detected: MEDIUM OBJECT DETECTED. MEDIUM PROBABILITY OF IT BEING A COLLAPSED PERSON. SEND IN ROVER TO GATHER MORE INFORMATION.

• If more than seven but less than or equal to twelve intersections are detected: LARGE OBJECT DETECTED. SEND IN ROVER FOR INTERVENTION / VERIFICATION.

• If more than twelve intersections are detected: VERY LARGE OBJECT DETECTED. SEND IN ROVER FOR INTERVENTION / VERIFICATION. SEND IoT NOTIFICATIONS.
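The classification logic in the list above reduces to a simple threshold function; a minimal sketch (the message strings are the system's outputs as listed):

def classify(intersections: int) -> str:
    if intersections == 0:
        return "NO OBJECTS DETECTED."
    if intersections == 1:
        return "VERY SMALL OBJECT DETECTED. VERY LOW PROBABILITY OF BEING A COLLAPSED PERSON."
    if intersections <= 4:
        return "SMALL OBJECT DETECTED. LOW PROBABILITY OF BEING A COLLAPSED PERSON."
    if intersections <= 7:
        return ("MEDIUM OBJECT DETECTED. MEDIUM PROBABILITY OF IT BEING A COLLAPSED PERSON. "
                "SEND IN ROVER TO GATHER MORE INFORMATION.")
    if intersections <= 12:
        return "LARGE OBJECT DETECTED. SEND IN ROVER FOR INTERVENTION / VERIFICATION."
    return ("VERY LARGE OBJECT DETECTED. SEND IN ROVER FOR INTERVENTION / VERIFICATION. "
            "SEND IoT NOTIFICATIONS.")

print(classify(3))   # SMALL OBJECT ...
print(classify(15))  # VERY LARGE OBJECT ...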

In Section 5, four actual sample executions, one without any object to detect and three with progressively larger objects in different orientations and positions, are detailed and discussed in order to demonstrate the satisfactory performance of FaDIS.




Figure 17. FaDIS Part 2 Operation Flowchart.



5 Results and Discussion


The detection aspect of the system is intended to identify the presence of collapsed elderly people. It does this by first detecting non-descript objects and then gauging whether they may be people based on their size (i.e., based on the number of corresponding intersection instantiations). Uniform boxes were used as standard and aggregatable units to represent a variety of non-descript objects of different sizes in order to demonstrate the operational feasibility of the system across different sizes. One box unit was used to verify that an object would indeed be detected by the system, but that it would be deemed too small to represent an elderly adult. Two box units were used to verify that a larger object would also be detected yet not mistaken for an elderly adult. Finally, three box units were used to represent the average size of an elderly adult, and to verify that the system would recognize it as such. The objective was for the system not only to be able to detect objects but to discern, however rudimentarily, among them. Accordingly, the first two experimentation cases dealt with equivocation while the third dealt with validation of detected objects. Additional aggregations and configurations were tested in the process of gauging accuracy, but the three included result samples are indicative of the performance of the system.

Furthermore, the objects' positions and orientations were informed by usability considerations. The objects in the three sample experiments in question were placed close to the sink area and directly adjacent to the shower / bath area. These two regions were identified as common areas for water-related accidents. The orientation of the largest object was intended to simulate a body collapsed after having slipped while exiting the shower / bath area. There are, naturally, other possible scenarios, but in the interest of brevity, a focus on said high-risk areas suffices to demonstrate the feasibility of the proposed system in such scenarios.


5.1 Initialization and First run: Calibration and Detection of “No Objects”


Having found the average ambient illumination from Module A's perspective, Routine 0-1 begins by rotating the motor until the lasers "hit" the reference LDR and cause a drastic surge in illumination with respect to the previously gathered average. When this happens, position "zero" is found, and Routine 1 remains fixed waiting for operation confirmation from Module B, without which Module A could never know when an LDR has been "hit". Once the confirmation is received, Routine 2-0 begins the first part of the self-calibration sequence and the counting of step-positions begins. From position "zero", the motor rotates at a hundred steps per second looking for Module B's ZigBee confirmation that an LDR has been "hit" (see, for instance, Figure 11). Assuming that the appropriate detection thresholds are set in Module B with respect to the anticipated ambient illumination, each LDR should be read multiple times at the presently set speed. But Module A only stores the step-count of the first confirmation received, as it is advantageous to correlate a step-count position with a corresponding LDR as the lasers enter the LDR's sensing surface, and not as they exit. Having successfully found all six LDRs in the present configuration, Routine 2-1 begins. It should be noted, however, that if within five minutes Routine 2-0 had failed to receive confirmation of any LDRs in direct line of sight, the timeout mechanism already mentioned is executed and the motor is instructed to return to position "zero", reconfirm Module B's operational status, reset the previously stored direct-line-of-sight positions, and restart Routine 2-0. If, however, all LDRs were found in direct line of sight, Routine 2-1 proceeds to look for the same LDRs in indirect line of sight. Immediately after finding the last LDR in direct line of sight, the motor is instructed to move five steps away before starting Routine 2-1. The speed in Routine 2-1 is half of that in the previous routine, since the second part of the self-calibration process involves reflecting the lasers against conventional mirrors that degrade their quality. Speed is therefore slowed in order to increase the probabilities that the LDRs will read the lasers in indirect line of sight.


As with the previous contingency measure, if within five minutes Routine 2-1 has failed to receive confirmation from a single LDR in indirect line of sight, Routine 2-1 instructs the motor to return to the position of the last LDR found in direct line of sight plus five steps (i.e., where Routine 2-1 officially began looking for LDRs in indirect line of sight), reconfirm Module B's operational status, and proceed to rescan for LDRs in indirect line of sight. As discussed in Section 5.5, this is the weakest part of the system due to laser degradation and the magnification of each half-step as reflected by the mirror (i.e., a half-step in direct line of sight is considerably smaller than one being reflected at a sharp angle); however, Routine 2-1 did not fail in any of the following sample runs. Once the positions of the LDRs in indirect line of sight are found, Routine 2-1 instructs the motor to reset position "zero" and to find it once again; this is FaDIS's way of counteracting loss of steps and/or deviations caused by inertia. Routine 2-1, therefore, ends by beckoning Routine 0-1. Routine 0-1 successfully finds position "zero" again within a few half-steps of the first instance; in the present sample runs, Routine 0-1 consistently redefined position "zero" accurately with respect to the initial position "zero" instance. Having redefined position "zero", exiting Routine 0-1 leads directly to Routine 3, since the success flags for Routines 1, 2-0, and 2-1 are detected.

Routine 3 has Module A compute the XY-coordinates of every LDR in direct and indirect line of sight based on their stored rotational step-positions while remaining fixed at position "zero". As each is computed, it is sent to the BBB system-node, i.e., the Coordinator, via the ZigBee protocol. From there, the BBB system-node begins to plot the received XY-coordinates in Plotly via the previously configured credentials and identified data streams (see Section 4.2). Routine 4 sees the motor move directly to the step-position associated with LDR0 minus stepOffset, from which point it rotates at fifty steps per second until position LDR0 plus stepOffset. The lasers then turn off and jump directly to the step-position associated with LDR1 minus stepOffset, and so on until every LDR in direct and in indirect lines of sight has been visited. Those LDRs which confirmed a "hit", whether in direct or indirect lines of sight, are assigned a status value of "1", and those which did not, a status value of "0". Routine 4 then uses an intersection function to compute the theoretical intersections caused by the combination of LDRs in both direct and indirect lines of sight with status "0". As these intersections are discovered, Routine 4 uses the same mechanism used in Routine 3 to send these XY-coordinates to the BBB system-node, which in turn maps the received intersection XY-coordinates to Plotly via the same credentials used in Routine 3 but with a different data stream.
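The intersection step may be illustrated with the following sketch: each blocked line of sight (status "0") is modelled as a 2D segment, and candidate object locations are the pairwise intersections of blocked direct and blocked indirect segments. The endpoint values are illustrative; in the system they derive from the calibration routines.

def segment_intersection(p1, p2, p3, p4):
    # Return the intersection point of segments p1-p2 and p3-p4, or None.
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None  # parallel or degenerate
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def blocked_intersections(blocked_direct, blocked_indirect):
    points = []
    for d in blocked_direct:
        for i in blocked_indirect:
            p = segment_intersection(*d, *i)
            if p is not None:
                points.append(p)
    return points

# Example: two blocked beams crossing at (1.0, 0.8).
direct = [((0.0, 0.0), (2.0, 1.6))]
indirect = [((0.0, 1.6), (2.0, 0.0))]
print(blocked_intersections(direct, indirect))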

Figure 18. Left: Sample run with no blocking objects, inside view. Center: Sample run with no blocking objects, expected output. Right: Sample run with no blocking objects, actual Plotly output.


In the first of the present four sample runs, Routine 4 visited each LDR step-position range (i.e., LDR minus stepOffset to LDR plus stepOffset) in both direct and indirect lines of sight and, having received ZigBee confirmation from all of them via Module B, concluded that there are no collapsed objects across the field of LDRs. Indeed, FaDIS is correct (see Figure 18). Without any objects to detect, it is not possible to gauge how the expected graphical output, based on the actual dimensions of the room, compares to the actual Plotly output. The second sample run, however, provides an initial baseline for comparison.


5.2 Detection of “Small Object”


The second sample run continues from the end of Routine 4 in the first run. Immediately after reporting that no objects were detected, Module A sleeps for three minutes; this interval may be changed to suit the frequency desired by the user. After three minutes, Module A returns to Routine 0-1 to redefine position "zero", which is predictably and reliably found within a few steps of the previous instances, and reconfirms Module B's operational status. After this, Routine 4 is executed directly, since the system is already calibrated and the field of LDRs is already mapped (i.e., the success flag for Routine 3 is still on). In Routine 4 a potential problem was noticed: while the lasers are scanning LDR0's position range, LDR5 accidentally triggers a detection confirmation that has Module B inform Module A via ZigBee that LDR5 has been "hit". Fortunately, Routine 4 uses a function written with a security mechanism: if, while scanning the position range of a particular LDR, it receives a confirmation from some other LDR, it ignores it; after all, if the lasers are nowhere near LDR5 when it triggered its confirmation, then said confirmation is accidental. In this particular sample run, a non-trivial intersection⁸ is instantiated by a relatively small Styrofoam cube (see Figure 19 Left). Compare the expected output (see Figure 19 Center) with its corresponding actual Plotly output (see Figure 19 Right).

⁸ A non-trivial intersection is defined as one that occurs within the field of LDRs and does not occur directly over any LDR where direct and indirect laser beams meet along the spatial boundaries.

Figure 19. Left: Sample run with Small blocking object, inside view. Center: Sample run with Small blocking object, expected output. Right: Sample run with Small blocking object, actual Plotly output.
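The guard described above can be expressed as a short validity check; the names and the stepOffset value are illustrative assumptions:

STEP_OFFSET = 10  # assumed plus/minus margin around each stored step-position

def hit_is_valid(expected_ldr, reported_ldr, current_step, ldr_step_positions):
    # A confirmation is accepted only if it comes from the LDR whose range is
    # currently being scanned and the motor is within that range.
    if reported_ldr != expected_ldr:
        return False  # e.g., LDR5 firing while the lasers are over LDR0's range
    target = ldr_step_positions[expected_ldr]
    return target - STEP_OFFSET <= current_step <= target + STEP_OFFSET

positions = [100, 200, 300, 400, 500, 600]
print(hit_is_valid(0, 5, 103, positions))  # False: accidental confirmation ignored
print(hit_is_valid(0, 0, 103, positions))  # True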


5.3 Detection of “Large Object”


As with the previous two runs, Routine 0-1 redefines position "zero" at the expected location again. After fetching operational confirmation from Module B, Module A enters Routine 4, skipping those routines whose success flags are still present, to scan for new objects. In this sample run, an additional Styrofoam cube was added next to the previous one (see Figure 20 Left). What is particularly interesting in this run is the first occurrence of an intersection between two indirect lines of sight. There is actually no object justifying that intersection's instantiation, as opposed to the other three, which do correspond to non-trivial intersections instantiated by the two Styrofoam cubes. In the scope of the present work, no mechanism has been developed to ignore the presence of these ghost intersections, which only occur between beams in indirect lines of sight. It should be noted, however, that whatever intersection mechanism may be added cannot automatically discard such intersections, since there may be instances where their instantiation is justified by an object. Although said mechanism is not presently integrated into the code of any of the modules, Section 5.5 elaborates on a potential solution for distinguishing when these ghost intersections are indeed groundless and when they are actually indicative of an object. For now, it is noted that the expected and actual outputs (see Figure 20 Center and Right) once again correspond to each other and to the physical configuration of the space, in spite of the fact that warping in the latter is quite evident. In this detection instance, the rover is beckoned for further verification.

Figure 20. Left: Sample run with Large blocking object, inside view. Center: Sample run with Large blocking object, expected output. Right: Sample run with Large blocking object, actual Plotly output.


5.4 Detection of “Very Large Object”


As with the previous three runs, Routine 0-1 redefines position "zero" at the expected location successfully again. In this run a third Styrofoam cube is placed next to the previous two to form a first approximation of a relatively small person's size (see Figure 21 Left). It is noted that two more instances of ghost intersections have appeared, and one look at either the expected or the actual output is strongly suggestive of more than one collapsed object. This is why the mechanism for discerning the character of these ghost intersections is important, since their presence along with other non-trivial intersections could be ambiguous. Since the potential solution is described later, for now the focus is placed on the strong correspondence between the expected and actual outputs (see Figure 21 Center and Right). The warping previously noticed, which is primarily caused by the dramatically inaccurate positioning of LDR2, remains. But it is important to consider that the consistency retained still permits each output to be recognized from the other, and that both successfully reflect the dimensions and positioning of the blocking objects in question. Once again the number of intersections as well as their approximate locations, relative to the LDR positions, are indicative of the physical configuration. In this run, a Very Large Object is detected and, accordingly, all intervention services are deployed automatically. That is to say, the TurtleBot rover is sent in and IoT notifications are triggered.

Figure 21. Left: Sample run with Very Large blocking object, inside view. Center: Sample run with Very Large blocking object, expected output. Right: Sample run with Very Large blocking object, actual Plotly output.


5.5 Performance and Limitations


Before discussing specific limitations with respect to the experiments outlined above, two core system-wide limitations with impacts on scalability may be noted, i.e., laser degradation as well as laser alignment, precision, and deviation. With respect to laser degradation: laser intensity as gauged by the LDRs increases as the distance between light-emitter and light-sensor decreases. Depending on the class and category of the laser, the sensitivity factor of the light-sensor, and the quality of the mirrors against which the laser is reflected, the spatial limitations with respect to laser degradation vary. In the present system, two Class 2M lasers, low-cost LDRs, and consumer-grade mirrors were used. Given this configuration, direct laser detection was reliable within distances of 3-3.5 meters, and indirect (i.e., reflected) laser detection within 2-2.5 meters. The present deployment area was 3.1 meters by 1.95 meters, which enabled the system routines involving direct lasers to perform robustly and reliably (i.e., both Routines 0-1 and 2-0: ninety-nine per cent success rate). However, already in such a relatively small area, the routines involving indirect lasers performed unreliably (i.e., Routine 2-1: sixty-three per cent success rate). It may be surmised from these considerations that when the ratio of a detected laser reading to the unified ambient illumination average falls below the laser-detection threshold constant mentioned in Section 4.5, the detected laser will be confused for a minor deviation in ambient illumination. In addition to the considerations detailed above, alignment precision is also a crucial determining factor with respect to scalability: the smallest angular deviations in the rotation of the lasers could cause their beams to miss the field of LDRs altogether beyond certain distances, depending on the extent of the deviations.
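The detection criterion implied here may be sketched as a ratio test; the threshold value is an illustrative assumption (the actual constant was tuned empirically, per Section 4.5):

LASER_DETECTION_THRESHOLD = 2.5  # assumed ratio of reading to ambient average

def is_laser_hit(ldr_reading: float, ambient_average: float) -> bool:
    # As the laser degrades with distance and reflection, this ratio shrinks; once it
    # falls below the threshold, the laser becomes indistinguishable from ambient noise.
    return ldr_reading / ambient_average >= LASER_DETECTION_THRESHOLD

print(is_laser_hit(480, ambient_average=160))  # True: strong direct hit
print(is_laser_hit(210, ambient_average=160))  # False: degraded reflected hit lost in noise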


The four detection cycles detailed above performed reliably, accurately (within tolerable deviations⁹), and without errors. However, the latter three cycles did not have to go through the calibration routines again, which is where the weakest aspect of FaDIS is found. All the other routines in Module A are reliable, but Routine 2-1, which involves finding the LDRs in indirect line of sight, has a considerably high rate of failure. To conduct a per cent rating of each routine, the entire initialization process was executed over a hundred times. In one hundred executions, Routine 0-0's success rate was a hundred per cent. This was expected, since Routine 0-0's task only involves the gathering of the ambient illumination average against which a sudden increase in the readings of the reference LDR is compared. Similarly, Routine 0-1 is extremely reliable, having a success rate of ninety-nine per cent. Routine 0-1 is the routine used to redefine position "zero" after each scan cycle. As was previously mentioned, however, there was a very unlikely possibility that the reference LDR may have gathered its ten-thousand readings under Routine 0-0 (to compute the ambient illumination average) while the laser was shining on or near it, skewing the ambient average calculation. The one failure instance of Routine 0-1 was due to this unlikely possibility. However, it is important to clarify that this failure cannot strictly be said to belong to Routine 0-1, even though it occurred in it: the failure is actually caused by Routine 0-0's unfortunate gathering of overly intense readings that did not reflect the actual ambient illumination. The success rate of Routine 1 was also one hundred per cent. This was also expected, since from the first instance the XBee antennae in Modules A and B began communicating, as long as both antennae were powered, the notifications from one never went unacknowledged by the other. Routine 1 has Module A send an operational notification to Module B, which, having registered it, sends back an acknowledgement. Routine 1 will run until Module A receives acknowledgement from Module B, so even if it did fail, it would make no significant difference, since its failure defaults back to itself. On previous occasions there were instances of TX and RX errors, but these were always quickly and consistently followed by a renewed notification and confirmation. Routine 2-0's rating is slightly surprising, not because of its high success rate but because of its one instance of failure: for numerous rounds before the rating was gauged, Routine 2-0 never failed to trigger a notification response from Module B on account of the latter's LDRs' recognition of the lasers.

⁹ The deviations between the Plotly outputs and the expected outputs in the sample runs above are said to be tolerable if they are mutually recognizable in and from one another.



The lasers used in Part 2 are two Class 2M line lasers, which are sufficiently powerful for the LDRs to notice their presence. In this particular instance of failure, LDR1 failed to register the presence of the laser. After finding all other LDRs and five minutes of searching for the missing LDR, the contingency measure sent the motor back to position "zero" to rescan the field of LDRs. In this second pass, no LDRs failed to register the laser.


The weakest routine of FaDIS may be identified as Routine 2-1, which has a success rate of sixty-three per cent. The reasons for this rather compromising rate have already been mentioned briefly in Section 4.5. The principal problem is the laser degradation occasioned by the two-layer conventional mirrors used to reflect the lasers. A secondary problem is that at certain sharp angles, the gaps between steps (i.e., half-steps) taken by the motor increase in size, thereby potentially missing an LDR in an instance of a gap. It was previously mentioned that one of the reasons for using two line lasers with a minimal gap between their parallel projections was so that the LDRs would have a higher chance of being scanned by at least one of the lasers. This is indeed the case for the most part, but in those instances involving sharp angles, the LDRs at times fall in the gap between the two lasers and between each step of the motor. There are a number of possible solutions to these problems. With respect to the principal problem, higher-grade single-layer mirrors may be used to prevent, or at least mitigate, laser degradation. Since these kinds of mirrors tend to be much more expensive than their conventional counterparts, the user could perhaps first run the self-calibration process, identify the laser-to-mirror intersection points, and only place high-quality mirrors in those specific positions. It may be advisable for the user to place these high-quality mirrors along the step-position ranges (i.e., LDR step-position minus and plus stepOffset) corresponding to every LDR. As for the secondary problem, the resolution of the LDR field may be adjusted in order to increase the possibilities of the LDRs registering the lasers, in spite of the inevitable step gaps and their magnification by sharp reflection angles. That is to say, at present only one physical LDR represents an instance of an LDR, but the field could be modified so that three physical LDRs directly next to each other represent a single instance of an LDR. For example, what is presently referred to as LDR1 could actually consist of three LDR components. This is a plausible solution that has minimal impact on the budget, since LDRs are extremely inexpensive. Another, more expensive solution to the secondary problem would be to use an absolute sensor, which would contribute to the robustness of FaDIS. However, the presented solution is a more cost-effective alternative, costing a fraction of the price of absolute sensors.
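The proposed resolution increase, three physical LDRs standing in for one logical LDR, could be sketched as follows; pin assignments, the read_analog() helper, and the threshold are hypothetical:

def read_analog(pin: int) -> int:
    return 120  # stand-in for an ADC read on Module B

LOGICAL_LDRS = {
    0: (0, 1, 2),  # e.g., "LDR1" in the text would consist of three adjacent components
    1: (3, 4, 5),
}

def logical_ldr_hit(logical_id, ambient_average, threshold=2.5):
    # A hit on any of the grouped physical LDRs registers the logical LDR.
    pins = LOGICAL_LDRS[logical_id]
    return any(read_analog(p) / ambient_average >= threshold for p in pins)

print(logical_ldr_hit(0, ambient_average=100))  # False with the stand-in readings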


With Routine 2-1 having been executed over a hundred times, per cent success ratings for Routines 3 and 4 could likewise be gauged. Since they attained the same ninety-eight per cent, and since they involve the same mechanism for sending XY-coordinates to the BBB system-node for mapping in Plotly, they are detailed and discussed together. It should be evident that their success rate corresponds more to the BBB system-node, and that their actual success rate should be higher, since their non-ZigBee-related tasks involve simple trigonometric and algebraic calculations that are not likely to cause failures. At any rate, Routines 3 and 4 cannot be detached from their relationship to the BBB system-node, and so the ninety-eight per cent success rate is retained. Recall that Routine 3 is the routine that translates step-counts corresponding to LDR positions into equivalent XY-coordinates via basic trigonometry. When each XY-coordinate is computed, it is sent directly to the BBB system-node, whereupon the receiving Python script maps the XY-coordinate in Plotly via the latter's API and preconfigured user-credentials. The two failures in question occurred when the established connection between the BBB system-node and Plotly timed out. Similarly, Routine 4, which is the routine that calculates the intersection points between undetected LDRs in direct and in indirect lines of sight via basic algebra, incurred its two failures at the same point as Routine 3.
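The BBB-side mapping step may be sketched as follows, assuming the streaming interface that the Plotly Python client offered at the time (plotly.plotly.Stream); the credentials, stream token, and the receive_xy_over_zigbee() helper are placeholders, not the project's actual configuration:

import plotly.plotly as py
import plotly.graph_objs as go

py.sign_in("USERNAME", "API_KEY")     # preconfigured user credentials
STREAM_TOKEN = "TOKEN_FOR_LDR_TRACE"  # one token per dataset (LDRs vs. intersections)

def receive_xy_over_zigbee():
    # Placeholder for reading XY-coordinate pairs from the XBee serial link.
    yield (1.2, 0.8)

def stream_coordinates():
    trace = go.Scatter(x=[], y=[], mode="markers", stream=go.Stream(token=STREAM_TOKEN))
    py.plot([trace], filename="fadis-ldr-field", auto_open=False)
    stream = py.Stream(STREAM_TOKEN)
    stream.open()
    for x, y in receive_xy_over_zigbee():
        stream.write(dict(x=x, y=y))  # each point appears on the Plotly graph as it arrives
    stream.close()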



In Section 4.3 it was mentioned that there are inherent limitations in the system that prevent it from being practicably deployed in areas beyond a certain size. The inherent limitation of the system is found in the lasers. That is to say, the lasers fall below the required threshold intensity beyond a certain distance. If with the present dimensions Routine 2-1 still has a high failure rate in calibrating LDRs in indirect lines of sight, it may be inferred that this failure rate would increase if the lasers were further weakened. Naturally, a practical answer to this limitation is that FaDIS is intended for personal bathrooms and not for public ones with larger dimensions; in those bathrooms, presumably, other people would notify pertinent personnel if a collapsed person were to be found in them.


In Section 4.5 it was mentioned that Module B's detection threshold constant could be configured as a function of a real-time lux measurement device, which would enable FaDIS to maximize the probabilities of LDR scan detection and to minimize the "noise" created by fluctuating ambient illumination caused by both natural and artificial sources. While this is certainly a plausible solution, a better solution would appear to be to adjust the configuration of the LDRs in their aluminum frame in such a way that an optical or bandpass filter selectively favors the lasers' wavelength over every other. In this manner the detection threshold could indeed remain constant in spite of changes in ambient illumination. At present, the threshold constant has been set based on multiple test runs, but the risk that ambient light noise overwhelms FaDIS with false LDR registration positives (on the one hand) or no laser registrations at all (on the other) is quite evident. If, however, a blocking of all wavelengths other than that of the lasers were successful, then this challenge would be resolved.


In Section 4.6 it was mentioned that more work needed to be carried out in order to identify the underlying causes of the resulting warped mapping of the space. Observations based on all the test runs conducted so far suggest that the cause lies in the inevitable mechanical imprecisions of the motor in question. However, as was also pointed out in the same section, this warping is actually not a problem if, as an overall rendition of a perceived physical space, it remains consistent. There is no doubt that the motor in question does lose steps and incurs deviations caused by inertia; however, these losses are entirely within an acceptable tolerance margin. The warping issue is only a problem if an equivocation between XY-coordinate references occurs.


In Section 5.3 it is mentioned that a mechanism to distinguish ghost intersections, instantiated by intersecting two or more indirect lines of sight, needs to be developed in order to know when to recognize such intersections as valid. The criterion for the consideration of ghost intersections could be whether they are in continuity with other non-trivial intersections instantiated between direct and indirect lines of sight. That is to say, if the possibility of having two collapsed objects in the same bathroom at the same time is precluded, the instantiation of ghost intersections may be safely ignored and the remaining non-trivial intersections instantiated between direct and indirect lines of sight favored, if there is no continuity between both groups. Admittedly, continuity is not an entirely fail-safe criterion, but in the present run it so happens that some of the apparently ghost intersections do occur where an actual object sits, while others occur due to the object's occlusion effect. Perhaps, as long as there are two separate regions of intersection instantiations, those created by a set of indirect lines could be ignored in favor of the other non-trivial intersections created by one direct and one indirect line of sight. Additionally, once continuity is established and all intersection instantiations coalesce into one large region, so-called ghost intersections could be considered part of the occlusion region created by the object.

Further development of FaDIS as a WSN system will involve the implementation of more sophisticated and discriminating mechanisms (involving optical filters) in the calibration processes in order to ensure that the lasers' signature is detected in any illumination condition. Furthermore, for safety reasons, the lasers used may be switched to infrared in order to avoid accidentally damaging the eyes of those who, for whatever reason, stare directly at the laser for too long. Although the presently used Class 2M lasers require longer direct eye-contact than the blink reflex allows (which is why they are considered consumer-friendly lasers), there are some people who lack this reflex. Finally, a higher-resolution stepper motor may be used in order to reduce the step gap of the present one. The fact that the future development of FaDIS does not call for drastic reconsiderations of strategy and methodology is a strong recommendation of its present high-TRL feasibility and functionality.
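As a rough illustration of the continuity criterion proposed for ghost intersections above, the following sketch keeps an indirect-indirect intersection only if it lies near the region of non-trivial direct/indirect intersections; the distance threshold and the point values are assumptions, not part of the implemented system:

import math

CONTINUITY_RADIUS = 0.30  # metres; assumed neighbourhood for "continuity"

def filter_ghosts(nontrivial_points, ghost_points, radius=CONTINUITY_RADIUS):
    # Keep a ghost intersection only if it is continuous with (i.e., close to) at least
    # one non-trivial intersection; otherwise treat it as groundless and discard it.
    def near(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) <= radius
    return [g for g in ghost_points if any(near(g, p) for p in nontrivial_points)]

# Example: one ghost lies within the object's occlusion region, the other is groundless.
nontrivial = [(1.0, 0.8), (1.2, 0.9)]
ghosts = [(1.1, 0.95), (2.6, 0.2)]
print(filter_ghosts(nontrivial, ghosts))  # [(1.1, 0.95)]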


6 Conclusions


In the present paper the authors have attempted to describe and detail a high-performing and cost-effective Fall-Detection and -Intervention System that demonstrates the advantages of WSN solutions. Given the reliability of the cost-effective components and the simplicity of the laser-reflectivity scheme, the system was able to detect an object at any theoretical intersection point every time. Furthermore, in the event of a simulated system-node failure, the fall-detection modules were able to establish communication with another function's BBB system-node, thereby demonstrating system robustness in a decentralized architecture.


In its capacity as a fall-detection and -intervention solution, the system demonstrated that an intelligent solution need not rely on costly and highly integrated components nor on user-dependent fee-based software. Instead, demonstrably reliable performance was achieved with components that are easily accessible to the general public, and with resilient proprietary software that requires no user intervention. Similarly, in its capacity as a typological WSN-based intelligent solution, the system demonstrated the potential for seamless compatibility across a variety of services and functions within the same architecture, and it showcased flexibility, scalability, resilience, and robustness in an energy-efficient mesh network at a considerably low cost. Its success in these two capacities makes FaDIS an intelligent alternative to similar services based on centralized and closed systems.


Acknowledgements


The operability of the TurtleBot rover mentioned in Part 1 was partly developed by Mr. Ahmad Raza and integrated in the PASSAge project, which was financed by the German Federal Ministry of Science and Research [4]. Part 1 of the present work extends the functionality of said rover. The authors would like to thank Mr. Andreas Bittner for the installation of the LDR field arrangement.


References


[1] Intille, S. S. A New Research Challenge: Persuasive Technology to Motivate Healthy Aging. IEEE Trans. Inform. Technol. Biomed. 8, 3, 235–237. 2004.
[2] Maier, H. 2010. Supercentenarians. Springer, Heidelberg, New York.
[3] Pyo, Y., Hasegawa, T., Tsuji, T., Kurazume, R., and Morooka, K. Floor sensing system using laser reflectivity for localizing everyday objects and robot. Sensors (Basel, Switzerland) 14, 4, 7524–7540. 2014.
[4] Guettler, J., Linner, T., Georgoulas, C., and Bock, T. 2015. Development of a seamless mobility chain in the home environment. In Proceedings of the 8th AAL Conference.
[5] Linner, T., Güttler, J., Bock, T., and Georgoulas, C. Assistive robotic micro-rooms for independent living. Automation in Construction 51, 8–22. 2015.
[6] Mastorakis, G. and Makris, D. Fall detection system using Kinect's infrared sensor. J Real-Time Image Proc 9, 4, 635–646. 2014.
[7] Wu, Y.-G. Fall detection system design by smart phone. IJDIWC 4, 4, 474–485. 2014.
[8] Abbate, S., Avvenuti, M., Bonatesta, F., Cola, G., Corsini, P., and Vecchio, A. A smartphone-based fall detection system. Pervasive and Mobile Computing 8, 6, 883–899. 2012.



[9] Wu, F., Zhao, H., Zhao, Y., and Zhong, H. Development of a wearable-sensor-based fall detection system. International Journal of Telemedicine and Applications 2015, 576364. 2015.
[10] Luque, R., Casilari, E., Morón, M.-J., and Redondo, G. Comparison and characterization of Android-based fall detection systems. Sensors (Basel, Switzerland) 14, 10, 18543–18574. 2014.
[11] Sato, T., Harada, T., and Mori, T. Environment-type robot system "RoboticRoom" featured by behavior media, behavior contents, and behavior adaptation. IEEE/ASME Transactions on Mechatronics 9, 3, 529–534. 2004.
[12] Sugano, S., Shirai, Y., and Chae, S. 2006. Environment Design for Human-Robot Symbiosis: Introduction of WABOT-HOUSE Project. In Proceedings of the 23rd International Symposium on Automation and Robotics in Construction, Tokyo, Japan, 152–157.
[13] Kidd, C. D., Orr, R., Abowd, G. D., Atkeson, C. G., Essa, I. A., MacIntyre, B., Mynatt, E. D., and Starner, T. 1999. The Aware Home: A Living Laboratory for Ubiquitous Computing Research. In Proceedings of the Second International Workshop on Cooperative Buildings, Integrating Information, Organization, and Architecture. Springer Verlag, London, UK.
[14] Murakami, K., Hasegawa, T., Kurazume, R., and Kimuro, Y. 2008. A structured environment with sensor networks for intelligent robots. In Sensors, 2008 IEEE, 705–708.
[15] Sit, G. F., Shen, C., Stort, H., and Hofmann, C. 2012. Application-Oriented Fusion and Aggregation of Sensor Data. In Ambient Assisted Living 5. AAL-Kongress 2012, Berlin, Germany, January 24-25, 2012, R. Wichert and B. Eberhardt, Eds. Springer, Heidelberg, New York, 3–13.
[16] Chiriac, S. and Rosales, B. 2012. An Ambient Assisted Living Monitoring System for Activity Recognition – Results from the First Evaluation Stages. In Ambient Assisted Living 5. AAL-Kongress 2012, Berlin, Germany, January 24-25, 2012, R. Wichert and B. Eberhardt, Eds. Springer, Heidelberg, New York, 15–28.
[17] Gaden, U., Löhrke, E., Reich, M., Schröer, W., Stevens, T., and Vieregge, T. 2011. SAMDY – Ein sensorbasiertes adaptives Monitoringsystem für die Verhaltensanalyse von Senioren. In Proceedings of the 4th German AAL Congress, Berlin, Germany.
[18] Mayer, P., Rauhala, M., and Panek, P. 2011. Field test of the eHome system. In Proceedings of the 4th German AAL Congress, Berlin, Germany.
[19] Wichert, R., Furfari, F., Kung, A., and Tazari, M. R. 2012. How to Overcome the Market Entrance Barrier and Achieve the Market Breakthrough in AAL. In Ambient Assisted Living 5. AAL-Kongress 2012, Berlin, Germany, January 24-25, 2012, R. Wichert and B. Eberhardt, Eds. Springer, Heidelberg, New York, 349–358.
[20] Andò, B., Baglio, S., Lombardo, C. O., and Marletta, V. 2014. An advanced tracking solution fully based on native sensing features of smartphone. In 2014 IEEE Sensors Applications Symposium (SAS), 141–144.
[21] Georgoulas, C., Linner, T., Kasatkin, A., and Bock, T. 2012. An AmI Environment Implementation: Embedding TurtleBot into a novel Robotic Service Wall. In Proceedings of the 7th German Conference on Robotics. VDE Verlag, Munich, Germany.
[22] Wichert, R. and Eberhardt, B., Eds. 2012. Ambient Assisted Living 5. AAL-Kongress 2012, Berlin, Germany, January 24-25, 2012. Springer, Heidelberg, New York.
[23] Wichert, R. and Klausing, H., Eds. 2013. Ambient Assisted Living 6. AAL-Kongress 2013, Berlin, Germany, January 22-23, 2013. Springer Berlin Heidelberg, Berlin/Heidelberg, Germany.
[24] Wichert, R. and Eberhardt, B., Eds. 2011. Ambient Assisted Living 4. AAL-Kongress 2011, Berlin, Germany, January 25-26, 2011. Springer, New York.
[25] Cook, D. J., Augusto, J. C., and Jakkula, V. R. Ambient intelligence: Technologies, applications, and opportunities. Pervasive and Mobile Computing 5, 4, 277–298. 2009.
[26] Yan, H., Huo, H., Xu, Y., and Gidlund, M. Wireless sensor network based E-health system – implementation and experimental results. IEEE Transactions on Consumer Electronics 56, 4, 2288–2295. 2010.
[27] Bähr, M., Klein, S., Diewald, S., Haag, C., Hofstetter, G., Khoury, M., Kurz, D., Winkler, A., König, A., Holzer, N., Siegrist, M., Pressler, A., Roalter, L., Linner, T., Heuberger, M., Wessig, K., Kranz, M., and Bock, T. 2013. PASSAge: Personalized Mobility, Assistance and Service Systems in an Ageing Society. In Ambient Assisted Living 6. AAL-Kongress 2013, Berlin, Germany, January 22-23, 2013, R. Wichert and H. Klausing, Eds. Springer Berlin Heidelberg, Berlin/Heidelberg, Germany, 109–119.
[28] Liu Cheng, A., Georgoulas, C., and Bock, T. 2015. Design and Implementation of a Novel Cost-effective Fall Detection and Intervention System for Independent Living Based on Wireless Sensor Network Technologies. In Proceedings of the 32nd International Symposium on Automation and Robotics in Construction and Mining (ISARC 2015).
[29] Linner, T., Georgoulas, C., and Bock, T. Advanced building engineering: Deploying mechatronics and robotics in architecture. Gerontechnology 11, 2, 380. 2012.
[30] Liu Cheng, A. 2015. Design and Implementation of a Novel Cost-effective Fall-Detection and -Intervention System for Independent Living based on Wireless Sensor Network Technologies. Master of Science thesis, Technische Universität München.
[31] BeagleBoard. 2015. BeagleBone Black. http://beagleboard.org/black. Accessed 10 June 2015.
[32] Digi International Inc. 2015. Knowledge Base: The Major Differences in the XBee Series 1 vs. the XBee Series 2. http://www.digi.com/lp/xbee/. Accessed 13 July 2015.
[33] Firefly. 2015. Home. http://www.fireflyexperiments.com/. Accessed 14 February 2015.
J., Augusto, J. C., and Jakkula, V. R. Ambient intelligence: Technologies, applications, and opportunities. Pervasive and Mobile Computing 5, 4, 277–298. 2009. Yan, H., Huo, H., Xu, Y., and Gidlund, M. Wireless sensor network based E-health system--implementation and experimental results. IEEE Transactions on Consumer Electronics 56, 4, 2288–2295. 2010. Bähr, M., Klein, S., Diewald, S., Haag, C., Hofstetter, G., Khoury, M., Kurz, D., Winkler, A., König, A., Holzer, N., Siegrist, M., Pressler, A., Roalter, L., Linner, T., Heuberger, M., Wessig, K., Kranz, M., and Bock, T. 2013. PASSAge: Personalized Mobility, Assistance and Service Systems in an Ageing Society. In Ambient Assisted Living 6. AAL-Kongress 2013 Berlin, Germany, January 22-23, 2013, R. Wichert and H. Klausing, Eds. Springer Berlin Heidelberg, Berlin/Heidelberg, Germany, 109–119. Liu Cheng, A., Georgoulas, C., and Bock, T. 2015. Design And Implementation Of A Novel Cost-effective Fall Detection And Intervention System For Independent Living Based On Wireless Sensor Network Technologies. In Proceedings of the 32nd International Symposium on Automation and Robotics in Construction and Mining (ISARC 2015). Linner, T., Georgoulas, C., and Bock, T. Advanced building engineering: Deploying mechatronics and robotics in architecture. Gerontechnology 11, 2, 380. 2012. Liu Cheng, A. 2015. Design and Implementation of a Novel Cost-effective Fall-Detection and -Intervention System for Independent Living based on Wireless Sensor Network Technologies. Master of Science, Technische Universität München. BeagleBoard®. 2015. BeagleBone Black. http://beagleboard.org/black. Accessed 10 June 2015. Digi International® Inc. 2015. Knowledge Base. The Major Differences in the XBee Series 1 vs. the XBee Series 2. http://www.digi.com/lp/xbee/. Accessed 13 July 2015. Firefly®. 2015. Home. http://www.fireflyexperiments.com/. Accessed 14 February 2015.

