Using Embedded Systems to Spread Assistive Technology on Multiple Devices in Smart Environments

Davide Mulfari, Antonio Celesti, Maria Fazio, Massimo Villari, Antonio Puliafito
DICIEAMA, Faculty of Engineering, University of Messina
Contrada di Dio, S. Agata, 98166 Messina, Italy
e-mail: {dmulfari, acelesti, mfazio, mvillari, apuliafito}@unime.it

Abstract—Nowadays, Assistive Technology (AT) systems are closely tied to the devices that they control. In a smart environment where a person with a disability needs to interact with multiple devices, the user is forced to rely on the AT software tools available on each platform in use. Computer skills are therefore required to adjust the configuration of each computing environment according to the user's needs and preferences. To address these issues, in this paper we discuss the usage of embedded systems able to interface sensors and existing AT software tools running on the user's personal equipment, in order to interact natively with many platforms. Our work thus aims to decouple AT software tools from the accessed computer systems, allowing the user to control various kinds of computer systems, even those that do not provide any AT features, by using just a personal assistive equipment.

Keywords-Assistive Technology, Embedded Systems, Internet of Things, Human Computer Interaction, Users with disabilities.

INTRODUCTION

Users with disabilities interact with various electronic and computing devices (e.g., Personal Computers (PCs), smartphones, home appliances, televisions, infrared-controlled systems, domotic equipment, electric powered wheelchairs) by means of tailored Assistive Technology (AT) solutions. Generally, each AT software tool is closely tied to the device on which it has been installed. For instance, if a person uses speech recognition software on a tablet, he or she cannot use the same AT solution to control a different device, such as a computer. This forces the user with a disability to run a different AT tool for each end device he or she needs to manage. Furthermore, each AT product requires a specific setup process according to the user's needs and preferences, and such a task is often performed by an AT expert, who works closely with the person with a disability in order to adjust the computing environment properly. In addition, considering the usage of PCs, AT software tools are usually tied to the specific hardware / software configuration available on the end device: e.g., an AT software tool designed for a Microsoft Windows machine does not work on a different operating system. The configuration process of an AT equipment can also require suitable administrative permissions on the device that the user needs to manage.

This may be a problem whenever a person with a disability has to work on a shared PC [1] (e.g., at an Internet point).

To address these issues, in this paper we discuss the usage of embedded systems able to interface sensors and existing AT equipment, in order to achieve a generic solution that allows a person with a disability to use his or her own personalized AT tool to interact with as many computing devices as possible (e.g., a PC, a tablet, a smartphone, etc.). By using our solution, we intend to bridge the compatibility gap among different AT solutions, allowing a user with a disability to control multiple devices by means of a single assistive interface. The input device discussed in this paper is designed to operate with any computing device (e.g., PC, tablet, smartphone) that supports a mouse and / or a keyboard compliant with the Human Interface Device (HID) standard protocol. The usage of such a protocol is a key feature, because all commonly used operating systems (e.g., Microsoft Windows, Mac OS X, Linux, Android) recognize our hardware peripheral without needing a specialized driver. Furthermore, in order to enhance the interaction between a generic computing device and our embedded system, the latter includes a dedicated microcontroller board able to emulate a keyboard or a mouse through a USB client connection with a computer. In addition, the proposed system supports suitable wireless and physical interfaces in order to process data coming from external sensors and AT platforms. Specifically, the interaction between an external AT tool (i.e., a piece of software running on a particular computing platform) and our system requires the development of dedicated software interfaces known as plugins. Through the development of plugins, we show how AT equipment intended to run on a particular hardware / software configuration can be used to interact with a different computer-based device, regardless of its architecture, such as a traditional PC. To explain how our system operates, we present a concrete application in a smart environment: a speech recognition tool running on an Android smartphone is used to map particular recognized commands to computer cursor movements.

The rest of the paper is structured as follows. Background and related works are discussed in Section I. Our design choices are presented in Section II. Section III contains the primary implementation highlights. Our case study is discussed in Section IV. Section V concludes the paper.

I. BACKGROUND AND RELATED WORKS

Humans interact with computer systems by means of input and output peripherals. Such an interaction is bidirectional: we send commands to computers by using input devices, and we get information from a computer through output devices. The primary input peripherals are mice, keyboards and touch screens; the main output device is the monitor. In addition, the functionality of an input device can be replaced by a specialized piece of software available on the user's computer. For example, a speech recognition application runs on the same computing device the user needs to access and is able to perform, through a microphone, the same actions made with a mouse and a keyboard. In the following, we focus our attention on input devices and briefly discuss how AT provides solutions for people who cannot use the aforementioned main input peripherals. Such AT tools include hardware and software equipment [2] aimed at exploiting the user's capabilities to perform I/O (Input/Output) actions on a computer-based system. For instance, people with motor disabilities who cannot use standard input equipment can rely on special joysticks, keyboards, or sensors as alternative pointing / input devices, or they can activate the universal access software options included in the computer's operating system in order to simulate mouse movements by pressing particular keys on the keyboard. In other contexts, AT tools consist of specialized pieces of software only, such as screen readers and screen magnification programs. Other types of AT equipment include both software and hardware components. For instance, eye tracking tools use one or more cameras to track the user's pupils using visible or infrared light, and they also require a tailored computer application in order to process the data coming from the cameras. These considerations can easily be applied to any AT solution based on sensors [3] (e.g., Brain Computer Interface (BCI) techniques [4], Electromyography (EMG) [5] and Inertial Measurement Unit (IMU) systems [6]), which rely on a given hardware platform and dedicated software in order to work properly. An increasing number of works in the literature consider AT from the perspective of Cloud computing [1] and of smart environments in IoT scenarios. An approach for using AT software tools running on virtual machines through Cloud computing is discussed in [7] and [8]. In a smart environment [9], where users need to interact with various computing devices and platforms, each device must be adjusted according to the end user's personal demands and preferences. Any AT piece of software is closely tied to the physical device on which it runs, so it allows the user with a disability to access only that given computer system. In general, the setup process is not trivial at all: specific computer skills may be needed to tailor an AT piece of software, i.e., we need to know its features, its options, and its interaction with the computer's operating system.

Moreover, the customization process has to be repeated on each computer-based system used by the person with a disability. Therefore, suitably configuring a smart environment for users with disabilities may be a difficult task to accomplish.

In order to address the aforementioned issues, this work evaluates how embedded systems based on SBC (Single Board Computer) hardware platforms can interface existing AT software solutions, running on a personal device, with other physical devices (e.g., a traditional PC, a smartphone, a tablet, etc.). In the depicted scenario, the interaction between our system and the end device, i.e., the controlled computing system, is native and does not require a specialized driver or software installation. Hence, our main effort is to decouple the AT tools in use from the computer system that the user with a disability needs to control. In addition, we intend to simplify the interaction between a personal device and the proposed system, which is equipped with an Arduino-based microcontroller board able to emulate a keyboard or a mouse through a USB client connection with a computer. Arduino [10] is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software, intended for designers interested in creating interactive objects or environments.

Several works focusing on the usage of Arduino boards for the development of HCI solutions for people with disabilities are available in the literature. In [11], the authors discuss the development of an alternative computer input system that allows a person with quadriplegia to move a computer's cursor and activate left and right click inputs. This solution employs a head-mounted Inertial Measurement Unit (IMU) with 9 DoF (Degrees of Freedom) to track head movements and correlate these motions to computer cursor movements. A sip-puff transducer detects the pressure of the air blown through a vinyl tube as analog voltages, which are then interpreted over time to trigger left and right click events. Finally, an Arduino Due board is used to interpret and process these inputs and send mouse commands to the PC via a USB connection. Another kind of HCI equipment [12] is an augmentative communication system allowing people with severe motor disabilities to use a computer. Specifically, an interface adapted to detect eye winks has been implemented, and the system uses these signals, after processing, to control a mouse or a virtual keyboard. This equipment consists of three primary components: the interface, the processing system based on an Arduino Yun hardware platform, and the dedicated software installed on the user's computer. Novel input and output devices designed for power wheelchair users are presented in [13]: these peripherals are mainly based on the Makey Makey microcontroller board, which is an Arduino-based device for making tangible user interfaces [14]. The major benefit of such a platform is that it implements the HID protocol, which allows it to send keyboard and mouse events to a computer without installing drivers or other pieces of software. The user can connect everyday objects and natural materials to Makey Makey so as to create a tangible user interface that controls any software running on a computer.

Fig. 1. Reference scenario.

Fig. 2. Software components of the proposed system.

In the AT field, an interesting Makey Makey based application allows developers to build a computer input device composed of a wheelchair headrest on which conductive threads are used to incorporate a number of inputs; the number of inputs varies depending on how severe the user's paralysis is, and it can be changed simply by attaching / detaching the leads to the headrest. Unlike the aforementioned Arduino-based applications, in our work we merge a HID-enabled Arduino microcontroller board with a Linux-based SBC, which is responsible for processing data coming from external AT equipment and sensors.

II. DESIGN CHOICES

The reference scenario for our work is depicted in Figure 1. We assume that the user with a disability uses his or her "personal AT equipment", i.e., dedicated physical devices running tailored AT applications. Each hardware platform (such as a smartphone, a tablet or a PC) executes its own specific AT software. By using suitable wired or wireless connections, we interface such AT equipment with our embedded system in order to allow users to access other generic computer-based systems that do not support any AT piece of software. The personal AT equipment can also consist of external sensors (e.g., motion sensors) and IoT (Internet of Things) systems used by the user to access his or her own personal computing platform. Generally, IoT devices have low computing capabilities and require a wireless (or wired) connection with our SBC system. In this way, the user with a disability can use a single personalized AT solution to interact with multiple computer-based platforms, without being tied to the particular software (e.g., the operating system) and hardware configuration available on each device. For these reasons, the proposed tool is designed to work as a sort of adapter for various kinds of AT equipment and sensors: it processes data coming from an existing AT tool, previously tailored according to the user's needs, and correlates these data to input commands for a generic computer. By exploiting the HID protocol and the features of low-cost microcontroller boards equipped with a USB client port, the system emulates the behavior of a standard input peripheral acting as mouse / keyboard, so that its usage does not require any additional driver installation on the end device. In the following we consider a traditional PC as the end device, although our discussion can easily be applied to other computing platforms, such as tablets or smartphones. In particular, the present work deals with the mouse simulation process, because the mouse is critical for accessing all the resources available on a computer. To this end, the proposed system includes a dedicated microcontroller board equipped with the Atmel Atmega32U4 MCU (MicroController Unit), which has been programmed with a specialized firmware able to emulate a standard HID mouse through a USB client connection with a PC.

The other primary electrical component of the proposed system is a Linux-based single board computer (SBC). It runs an open source operating system and provides wireless (e.g., Bluetooth and WiFi) and wired interfaces (e.g., a USB host port) to interact with external AT tools and sensors. The chosen board also includes General Purpose Input and Output (GPIO) pins to acquire data from sensors and other physical devices. According to our design choices, the interaction between any external hardware / software platform and the proposed embedded solution requires a dedicated plugin, that is, a software interface which reads the output signals coming from the aforementioned device and correlates them to mouse movement commands, which are then sent to the computer by means of the Atmega32U4 microcontroller board. As an example, a plugin may consist of a Python script that reads data from a mobile app on a smartphone acting as assistive tool. The overall process is summarized in Figure 2; we remark that even a sensor, such as a digital accelerometer connected to the Linux embedded board through the GPIO extensions, may be considered an AT tool, and it then needs a specialized plugin in order to act as a mouse device for a PC. As shown in the block diagram, there is a one-way communication between the Linux SBC system and the microcontroller platform. This is a specific design choice: the Linux processor executes the selected plugin and drives the Atmel chip for the mouse simulation tasks.

III. IMPLEMENTATION

From a technical point of view, the two main components of the proposed solution, i.e., the Linux-based SBC and the Atmel Atmega32U4 microcontroller board, can be realized by using several electrical components available on the market. In particular, the chosen Atmel chip is included in many original and cloned Arduino boards: we have programmed the MCU by using the Arduino bootloader together with the dedicated Mouse / Keyboard libraries for mouse emulation [15]. As a low-cost Arduino compatible board, our prototype currently adopts the Pololu A-Star 32U4 Micro device (see Figure 3). This is a general-purpose programmable module based on the Arduino-compatible Atmega32U4 microcontroller with HID capabilities, which has 32 KB of flash program memory, 2.5 KB of RAM, and built-in USB functionality. Moreover, the tiny board breaks out 15 general-purpose I/O lines along two rows of pins, including 7 usable as PWM outputs and 8 usable as analog inputs. It fits all this into a 20-pin dual in-line package (DIP) measuring only 1" x 0.6" (even smaller than competing Atmega32U4 boards like the Teensy 2.0 and Arduino Micro), and its 0.1" pin spacing makes the A-Star easy to use with solderless breadboards, perfboards, and 0.1"-pitch connectors [16]. Regarding the SBC component, we have chosen a hardware system electrically compatible with our Atmega32U4 board. The benefit of this choice is that we do not have to rely on any additional electrical component, such as a voltage regulator, in order to interface the two boards. To meet these requirements, we have used the Raspberry Pi model B (RPi) as the SBC system.

Fig. 3. The Pololu A-Star 32U4 Micro board.

Further details on the RPi platform, including its hardware and software features, are available in [17] [18].
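To make the plugin concept concrete, the following is a minimal sketch of a plugin, in Python, assuming the AT input is simply a push button wired to one of the RPi's GPIO pins and that each press must be turned into a left-click command for the 32U4 board. The pin number and the send_to_mcu() helper are hypothetical placeholders; the actual delivery of the command byte over the link between the two boards is detailed later in this section.

    import time

    import RPi.GPIO as GPIO  # GPIO access library available on Raspbian

    BUTTON_PIN = 17  # hypothetical BCM pin number for the button

    def send_to_mcu(command):
        # Placeholder: forward one command character to the 32U4 board
        # (the electrical link and the command set are described below).
        print("sending command:", command)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    try:
        while True:
            GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)  # wait for a press
            send_to_mcu('5')  # '5' maps to a left click in our prototype
            time.sleep(0.2)   # crude software debounce
    finally:
        GPIO.cleanup()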

Fig. 4. The Raspberry Pi model B and its hardware components.

While the RPi is, in essence, a very inexpensive Linux computer, a few things distinguish it from a general purpose machine. One of the main differences is that the RPi can be directly used in electronics projects, because it includes GPIO pins right on the board, as shown in Figure 4. These GPIO hardware extensions can be accessed to control hardware such as LEDs, motors, and relays, which are all examples of outputs. As for inputs, the Raspberry Pi can read the status of buttons, switches, and dials, or it can read sensors for temperature, light, motion, or proximity [18]. In the reference scenario, the two boards play different roles: the RPi uses its embedded capabilities and external wireless connections (we equipped our SBC with Bluetooth and WiFi USB adapters) to interact with the user's personal AT equipment and to process the data coming from it, while the second board exploits the HID protocol to emulate mouse input actions on the connected end device. To this end, we have connected the two boards via an I2C link, because it is a convenient way to interface an Arduino system with a RPi. Specifically, this kind of electrical connection requires wiring the SDA pin, the SCL pin and the GND pin on the Raspberry Pi to their counterparts on the Pololu A-Star 32U4 Micro board. The result is shown in Figure 5. This I2C communication requires configuring the RPi as the master device [19], while the Pololu A-Star 32U4 Micro board has been configured as the slave device.

Fig. 5. Final design.

From a software point of view, there is a one-way interaction between the two boards: essentially, the RPi has been programmed to send one byte of data (i.e., a character) to the uniquely addressed Atmega32U4 board. Once these data are received, the latter translates them, after processing, into mouse movements for the computer. The mapping is described in Table I.

TABLE I
RELATION BETWEEN DATA RECEIVED BY THE 32U4 BOARD AND THE MOUSE ACTIONS.

  Character sent by RPi to the 32U4 board   Mouse action
  1                                         Up displacement
  2                                         Down displacement
  3                                         Left displacement
  4                                         Right displacement
  5                                         Left button click
  6                                         Right button click
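As an illustration of this one-byte protocol, the following is a minimal sketch of the RPi side in Python, assuming the python-smbus library is installed, the I2C interface has been enabled on the RPi, and the 32U4 board listens at the hypothetical slave address 0x08.

    import time

    import smbus  # python-smbus, a common I2C library on Raspbian

    MCU_ADDRESS = 0x08    # hypothetical I2C slave address of the 32U4 board
    bus = smbus.SMBus(1)  # I2C bus exposed on the RPi's GPIO header

    # Command characters according to Table I.
    COMMANDS = {
        "up": '1', "down": '2', "left": '3',
        "right": '4', "left_click": '5', "right_click": '6',
    }

    def send_mouse_command(action):
        # Send the single Table I character for the requested mouse action.
        bus.write_byte(MCU_ADDRESS, ord(COMMANDS[action]))

    # Usage example: trace a small square with the cursor, then left-click.
    for action in ("up", "right", "down", "left", "left_click"):
        send_mouse_command(action)
        time.sleep(0.1)  # give the firmware time to apply each displacement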

Furthermore, the firmware running on the 32U4 board allows us to customize the amount of each mouse cursor displacement, expressed in pixels. The interaction between our Raspberry Pi and the user's personal AT equipment requires the development of specialized software interfaces, called plugins, on top of the Raspberry Pi's operating system. As highlighted in Figure 2, each plugin interacts with a specific AT tool and exchanges data with the RPi's operating system in order to drive the 32U4 board.

In general, a plugin has to be implemented according to the type of AT solution used by the person with a disability. For instance, if the AT tool consists of a digital sensor connected to the RPi by a cable, the plugin has to interface with the particular port (e.g., the GPIO header) to which the hardware component is attached. A different approach is required whenever the AT tool is a piece of software running on a dedicated device (e.g., a smartphone), because the RPi has to interact with an external computing platform, so a wireless connection is needed. In these cases, the plugin is composed of two separate pieces of software: the former is a specialized client app running on the user's personal device (e.g., a smartphone), while the latter is a server application on the RPi aimed at processing the data coming from the client.

IV. CASE STUDY

Speech recognition technology allows us to send input commands to a computer-based system without the use of hands. In the AT field, this technique can be seen as an alternative input method for users who are unable to manage a hand-driven input peripheral. As an example, users with dyslexia can benefit from voice recognition software to transfer their ideas into print. In general, the configuration of speech recognition software for a computer is not trivial at all: it requires a specific customization process according to the user's speech and preferences [20]. Nowadays, the markets for smartphones and tablets provide many applications with speech recognition capabilities. For instance, Google's Android Voice Actions and Apple's Siri enable a user to control a mobile device by voice, e.g., calling contacts, sending texts and emails, and completing other common tasks. The case study described in this Section focuses on how the developed embedded system can be used to interface a speech recognition app running on an Android smartphone, in order to access computers in a smart environment. The same approach can also be applied to interact with a touch screen device, such as a tablet. In particular, our primary objective is to show how a user with a disability who uses the aforementioned personal AT equipment, together with our SBC platform, is able to control any resource in an advanced computer laboratory at a university, where students use many computing devices and platforms; here they do not bring their personal laptops to work, but have to manage any machine in the laboratory. In this scenario, we assume that our user wants to control the mouse cursor on a generic computer by using his or her voice. As shown in Section III, this requires the development of a specialized plugin on the RPi's operating system that interacts with a custom app running on the user's personal Android smartphone. Such an app exploits Google's speech recognizer service: it captures audio from the mobile device's microphone and sends these data to Google's remote servers in order to perform speech recognition.
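A minimal sketch of the server half of such a plugin is shown below, in Python, under the following assumptions: the Android app delivers one recognized word per line over the serial Bluetooth (RFCOMM) link, the Python build provides Bluetooth socket support (as on recent Linux images; otherwise a library such as PyBluez offers the same RFCOMM primitives), and the word-to-command vocabulary is the hypothetical one given here. Each recognized word is mapped to a Table I character and forwarded to the 32U4 board over I2C.

    import socket

    import smbus

    MCU_ADDRESS = 0x08    # hypothetical I2C slave address of the 32U4 board
    bus = smbus.SMBus(1)

    VOCABULARY = {  # recognized word -> Table I command character
        "up": '1', "down": '2', "left": '3',
        "right": '4', "click": '5', "menu": '6',
    }

    server = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                           socket.BTPROTO_RFCOMM)
    server.bind((socket.BDADDR_ANY, 1))  # listen on RFCOMM channel 1
    server.listen(1)

    client, address = server.accept()  # wait for the Android app to connect
    buffer = b""
    while True:
        data = client.recv(64)
        if not data:  # the app has disconnected
            break
        buffer += data
        while b"\n" in buffer:  # one recognized word per line
            line, buffer = buffer.split(b"\n", 1)
            word = line.decode("utf-8").strip().lower()
            if word in VOCABULARY:
                bus.write_byte(MCU_ADDRESS, ord(VOCABULARY[word]))
    client.close()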

Fig. 6. Our graphical user interface for speech recognition on an Android smartphone.

Once the recognition process has terminated, our app receives the recognized string (e.g., a single word) and, after processing, sends an appropriate command to the Raspberry Pi system. Lastly, the RPi translates the received input into computer cursor movements by means of the Pololu A-Star 32U4 Micro board, attached to the end device's USB port. Figure 6 shows the interface of the app running on a smartphone. In the depicted scenario, the following kinds of connections exist between the elements:
• an Internet connection between our Android app and Google's servers for the speech recognition process;
• a serial Bluetooth wireless connection between our Android app and the plugin running on the Raspberry Pi;
• an electrical I2C one-way communication between the Raspberry Pi and the Pololu A-Star 32U4 Micro board;
• a USB connection between the Pololu A-Star 32U4 Micro board and the computer-based end device.
In this way, the user with a disability can interact with any PC available in the computer laboratory, regardless of its software architecture. Several different application scenarios can be imagined for the embedded solution presented in this paper; we believe that its usage may really support the interaction between users with disabilities and computers in smart environments [21] [22].
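For development purposes, the Android app can be temporarily replaced by a simple stand-in that exercises the whole chain end to end. A hypothetical Python test client, in which the RPi's Bluetooth address is a placeholder, might look as follows.

    import socket
    import time

    RPI_ADDRESS = "00:00:00:00:00:00"  # placeholder: the RPi's Bluetooth MAC

    sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                         socket.BTPROTO_RFCOMM)
    sock.connect((RPI_ADDRESS, 1))  # RFCOMM channel 1, as used by the plugin

    # Send a few "recognized" words, one per line, as the real app would.
    for word in ("up", "up", "right", "click"):
        sock.send((word + "\n").encode("utf-8"))
        time.sleep(0.5)
    sock.close()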

V. CONCLUSION

The solution proposed in this paper aims to support the interaction between people with disabilities and computing devices in smart environments. Generally, these users interact with computer-based platforms (e.g., PCs, smartphones, tablets and so on) by means of tailored AT software tools that are closely tied to the particular physical devices on which they have been installed. By using embedded platforms based on low-cost Linux SBC systems and Arduino microcontroller boards, it is possible to interface the user's personal AT equipment with other, different types of computers without having to install any additional software on them. Specifically, the proposed system exploits a USB client connection to interface with a generic end device, while it uses the HID protocol to emulate mouse cursor actions and movements on the controlled computer. A prototype of our system has been presented. Currently, it consists of two separate pieces of hardware: the Raspberry Pi and the Pololu A-Star 32U4 Micro board. The RPi uses its embedded capabilities and external wireless connections to interact with the user's personal AT equipment and to process the data coming from it, while the second board manages the communication with the end device via a USB port. As highlighted in this paper, the interaction between the RPi and external AT equipment requires the development of dedicated software interfaces, called plugins; each plugin processes data from a specific AT tool. So far, plugins for AT running on an Android smartphone have been developed. In future work, we plan to extend our system by developing plugins for other kinds of AT equipment and sensors. In addition, new features will be added in the next versions of our system: in particular, we will also emulate a computer's keyboard, in order to achieve a complete input device.

ACKNOWLEDGEMENTS

The research leading to the results presented in this paper has received funding from the Project "Design and Implementation of a Community Cloud Platform aimed at SaaS services for on-demand Assistive Technology". The authors would like to thank Giuseppe Pellegrino for his significant contribution to the work.

REFERENCES

[1] D. Mulfari, A. Celesti, A. Puliafito, and M. Villari, "How cloud computing can support on-demand assistive services," in Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility, W4A '13, (New York, NY, USA), pp. 27:1–27:4, ACM, 2013.
[2] ISO, 9999:2007, Assistive Products for Persons with Disability - Classification and Terminology, 2007.
[3] D. Mariano, A. Freitas, L. Luiz, A. Silva, P. Pierre, and E. Naves, "An accelerometer-based human computer interface driving an alternative communication system," in Biosignals and Biorobotics Conference (2014): Biosignals and Robotics for Better and Safer Living (BRC), 5th ISSNIP-IEEE, pp. 1–5, IEEE, 2014.
[4] R. Poli, C. Cinel, A. Matran-Fernandez, F. Sepulveda, and A. Stoica, "Towards cooperative brain-computer interfaces for space navigation," in Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, (New York, NY, USA), pp. 149–160, ACM, 2013.
[5] D. Zhang, Y. Wang, X. Chen, and F. Xu, "EMG classification for application in hierarchical FES system for lower limb movement control," in Proceedings of the 4th International Conference on Intelligent Robotics and Applications, ICIRA '11, (Berlin, Heidelberg), pp. 162–171, Springer-Verlag, 2011.

[6] A. Xiong, Y. Chen, X. Zhao, J. Han, and G. Liu, "A novel HCI based on EMG and IMU," in Robotics and Biomimetics (ROBIO), 2011 IEEE International Conference on, pp. 2653–2657, Dec 2011.
[7] D. Mulfari, A. Celesti, M. Villari, and A. Puliafito, "Using virtualization and noVNC to support assistive technology in cloud computing," in Third Symposium on Network Cloud Computing and Applications (NCCA), 2014.
[8] D. Mulfari, A. Celesti, M. Villari, and A. Puliafito, "Using virtualization and guacamole/VNC to provide adaptive user interfaces to disabled people in cloud computing," in 10th IEEE International Conference on Ubiquitous Intelligence and Computing (UIC), pp. 72–79, 2013.
[9] J. Kiljander, J. Takalo-Mattila, M. Etelapera, J.-P. Soininen, and K. Keinanen, "Enabling end-users to configure smart environments," in Applications and the Internet (SAINT), 2011 IEEE/IPSJ 11th International Symposium on, pp. 303–308, July 2011.
[10] M. Margolis, Arduino Cookbook. O'Reilly Media, 2011.
[11] T. Elder, M. Martinez, and D. Sylvester, "Alternate computer input device for individuals with quadriplegia," 2014.
[12] A. Carrera, A. A. Alonso, R. de la Rosa, J. M. Aguiar, et al., "Biomechanical signals human-computer interface for severe motor disabilities," E-Health Telecommunication Systems and Networks, vol. 2, no. 4, p. 65, 2013.
[13] P. Carrington, A. Hurst, and S. K. Kane, "Wearables and chairables: inclusive design of mobile input and output techniques for power wheelchair users," in Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, pp. 3103–3112, ACM, 2014.
[14] B. M. Collective and D. Shaw, "Makey Makey: improvising tangible and nature-based user interfaces," in Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, pp. 367–370, ACM, 2012.
[15] J. Younker and T. Ribaric, "Beyond open source software: Solving common library problems using the open source hardware Arduino platform," Partnership: the Canadian Journal of Library and Information Practice and Research, vol. 8, no. 1, 2013.
[16] Pololu, "A-Star 32U4 Micro," http://www.pololu.com/product/3101, 2014.
[17] G. Calixto, C. Hira, L. Costa, and R. De Deus Lopes, "An open source and low cost solution for consumer electronics middleware validation," in Consumer Electronics (ISCE), 2013 IEEE 17th International Symposium on, pp. 159–160, June 2013.
[18] M. Richardson and S. Wallace, Getting Started with Raspberry Pi. O'Reilly Media, Inc., 2012.
[19] F. Leens, "An introduction to I2C and SPI protocols," IEEE Instrumentation & Measurement Magazine, vol. 12, no. 1, pp. 8–13, 2009.
[20] Microsoft, "Types of Assistive Technology Products," http://www.microsoft.com/enable/at/types.aspx.
[21] R. Carabalona, F. Grossi, A. Tessadri, A. Caracciolo, P. Castiglioni, and I. de Munari, "Home smart home: Brain-computer interface control for real smart home environments," in Proceedings of the 4th International Convention on Rehabilitation Engineering & Assistive Technology, iCREATe '10, (Kaki Bukit TechPark II, Singapore), pp. 51:1–51:4, Singapore Therapeutic, Assistive & Rehabilitative Technologies (START) Centre, 2010.
[22] D. Mulfari, A. Celesti, M. Villari, and A. Puliafito, "Using virtualization and guacamole/VNC to provide adaptive user interfaces to disabled people in cloud computing," in Ubiquitous Intelligence and Computing, 2013 IEEE 10th International Conference on and 10th International Conference on Autonomic and Trusted Computing (UIC/ATC), pp. 72–79, Dec 2013.