Automated Configuration of Applications for People with Specific Needs

Peter Heumader1, Reinhard Koutny1, Klaus Miesenberger1, and Karl Kaser2

1 University of Linz, Institut Integriert Studieren, Austria
{peter.heumader,reinhard.koutny,klaus.miesenberger}@jku.at
2 LIFETool, Linz, Austria
[email protected]
Abstract. This paper presents an approach to storing a user's settings and abilities in a user profile that can be used to automatically adjust the settings of applications on mobile and desktop devices for people with special needs. The profile and the settings are determined either automatically, with a wizard-like application, or manually together with a carer, and are distributed to other devices via cloud services. In this way, users with special needs are able to operate new applications without a carer having to set up each application for them.
1 Introduction
A major concern for carers of people with physical and cognitive disabilities is the adjustment of applications for the target group. Due to their impairments, these users interact with an application in their own specific way or through a special input device. As the target group is usually not able to set up an application by itself, a carer is needed. This setup process has to be repeated for every application and for every device the user wants to operate. On shared computers, things get even worse: every time the user changes, the setup process has to be done again.

In this paper we present an approach that counteracts this drawback. We determine a user's settings automatically with a game-like wizard or with the help of a carer and save those settings in a user profile. This profile is then distributed via cloud services to every other application based on our framework, even if the user owns multiple devices. As a result, the settings need to be configured only once per user.
2 State of the Art
Intense research has shown that to date there are hardly any related projects in this field that store information about the user's capabilities and preferred input methods in a profile. The CAPKOM project aimed to offer adaptive user interfaces for people with cognitive disabilities. Depending on the user's needs, the user interface changes in terms of difficulty and complexity of language, colour, icons, etc. These preferences are also determined by a game-like wizard and stored in a profile. The CAPKOM approach, however, only deals with the presentation of information and how to adjust it for people with cognitive disabilities, in contrast to our approach, where we consider necessary adaptations of the relevant parameters for interaction [3].

Cloud4All also aims at creating a profile describing the user and his or her capabilities, which is used to determine fitting applications in terms of necessary accessibility and context of use. According to the user's preferences, both the user interface and the content are adapted, including augmentation of the level of accessibility through special web services and cloud-based assistive technology [4].

K. Miesenberger et al. (Eds.): ICCHP 2014, Part II, LNCS 8548, pp. 234–237, 2014. © Springer International Publishing Switzerland 2014
3 Concept
The concept had its origins in a research project called Assistive Technology Laboratory (ATLab), which focuses on the development of a software framework for NUI-based tablet apps and desktop applications that allows rapid creation of accessible games. The framework features the easy integration of new hardware (e.g., IntegraMouse, external switch buttons) and offers alternative interaction methods for applications based upon the framework, namely touch input, mouse, keyboard, switches, eye-tracking, gestures and touch-scanning, which is similar to switch access scanning but uses a touch screen instead.
Fig. 1. Concept Overview
Out of this project we saw the need for an easier way to configure apps for people with special needs. Depending on their disabilities, users have different capabilities to interact with an application. While people with spastic tetraplegia might be able to operate a tablet device with touch input, people with paraplegia are
unable to do so; they have different ways of controlling applications, such as switch access scanning or eye-tracking. In our approach we determine the user's abilities for each of these input methods.

We defined several features for each input method that describe the user's capabilities in handling it. One example is the user's ability to reach every region of the screen of a tablet device: the screen is divided into several regions, and for each region it is determined whether the user is able to touch it. If so, it is also saved whether the user had problems, such as shivering caused by a tremor, when touching the region. Features like this are defined for each of the framework's currently supported input methods (touch input, mouse, keyboard, switches, eye-tracking and touch-scanning).

As seen in Figure 1, the concrete values of these features are either determined by a carer or generated automatically by an avatar-based wizard without the carer having to intervene. The wizard is designed as a little game in which an avatar guides the user through different screens. In each screen the user has to perform a specific task, by which the features are unobtrusively filled with values. As soon as features are detected, they are stored in the user profile, which is uploaded to our cloud service.

Whenever the user starts an application that is based on the ATLab framework and an internet connection is available, the user profile is downloaded. Depending on the device the user is currently using (tablet PC, desktop, laptop) and the input devices that are currently plugged in (hardware switch, mouse, touch), the preferred way of interaction for the user is automatically set up by the application.
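The per-region touch feature described above could be represented roughly as follows. This is a minimal sketch in Python; the class and field names are our own illustration, not the ATLab framework's actual data model:

```python
# Hypothetical sketch of per-region touch features in a user profile.
# Names and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class RegionFeature:
    reachable: bool          # can the user touch this screen region at all?
    tremor: bool = False     # did shivering/tremor occur while touching it?

@dataclass
class UserProfile:
    user_id: str
    # one feature map per input method, e.g. "touch" -> 3x3 grid of regions
    features: dict = field(default_factory=dict)

# Example: a 3x3 grid where the user cannot reach the top-right region
profile = UserProfile(user_id="demo")
profile.features["touch"] = {
    (row, col): RegionFeature(reachable=not (row == 0 and col == 2))
    for row in range(3) for col in range(3)
}

reachable = sum(f.reachable for f in profile.features["touch"].values())
print(reachable)  # 8 of 9 regions reachable
```

Storing features per input method, rather than one global preference, is what later lets an application choose a suitable interaction style for whatever hardware happens to be connected.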
So if the user prefers to control tablet computers with one hardware switch, the application will automatically select switches with the predefined settings that are optimized for this user as the default input method.
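The selection step could look roughly like this. Again, an illustrative sketch under our own assumptions; the ATLab framework's real selection logic and identifiers are not published in this paper:

```python
# Illustrative sketch: pick the input method from the user's preference
# order, restricted to input devices that are actually plugged in.
def select_input_method(preference_order, plugged_in):
    """Return the most preferred input method that is currently available."""
    for method in preference_order:
        if method in plugged_in:
            return method
    return "touch-scanning"  # fallback that only needs the touch screen

# User prefers a hardware switch, but only touch and mouse are connected:
method = select_input_method(["switches", "touch"], plugged_in={"touch", "mouse"})
print(method)  # "touch"
```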
4 Technical Implementation
The technical implementation covers two sides: the framework and the cloud services. The framework serves as the basis for creating applications; the cloud services are used to store and exchange data.

4.1 Framework
One key requirement is that the framework allows deploying applications on different platforms from a single code base. After an extensive evaluation of cross-platform frameworks, the Adobe AIR [5] runtime was chosen. Its components are based on Robotlegs [1] and Apache Flex [2]. Robotlegs is a framework that provides tools to ease communication within the application, to structure the project and to manage dependency injection. The Flex SDK is an open-source framework for the creation of web apps that run on Adobe AIR [5]. This environment allows rapid deployment of applications that can be operated by everyone and that run on the most popular mobile devices and desktop systems.
4.2 Cloud Services
The cloud service is based on Windows Azure [6], one of the major cloud computing platforms, developed by Microsoft and operated through a globally distributed network of Microsoft-managed data centres. The service uses WCF and is based on SOAP. Since sensitive data is transferred through the cloud service, communication is established over HTTPS. From the cloud's point of view, the service is stateless, meaning that there is basically no login process: every request carries its own user credentials for authentication and authorization, which are stored in the SOAP header and likewise protected by the HTTPS transport encryption. Applications can nevertheless offer a user login at start-up, store the credentials temporarily and locally, and attach them to every request.
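The stateless per-request credential scheme could be sketched as follows. This is our own illustration only: the actual WCF service contract and header element names are not part of this paper, and the `Credentials` element here is hypothetical:

```python
# Illustrative sketch: every SOAP request carries its own credentials in the
# envelope header, so the service itself can stay stateless. The envelope is
# assumed to be sent over HTTPS only, so the credentials are protected by
# transport encryption. Element names are made up.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(username, password, body_xml):
    """Wrap a request body in a SOAP envelope with a credentials header."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
    creds = ET.SubElement(header, "Credentials")   # hypothetical element
    ET.SubElement(creds, "Username").text = username
    ET.SubElement(creds, "Password").text = password
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    body.append(ET.fromstring(body_xml))
    return ET.tostring(envelope, encoding="unicode")

request = build_request("alice", "secret",
                        "<GetProfile><UserId>42</UserId></GetProfile>")
```

Because authentication travels with each request, any request can be served by any node without session affinity, which fits the globally distributed data-centre model described above.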
5 Current Status and Further Work
Currently, the framework and the basis for the cloud services have been implemented. A prototype of the wizard has also been implemented and tested. However, the automatic synchronization between user profiles and applications is still work in progress and will need some time until it is finished.

Another long-term goal is the automatic adaptation of the user settings based on user input. Here, the system would monitor all input of the user. If the system recognizes that the user fails to activate a control such as a button multiple times, it will change the settings of the input device, or even the size and position of the control, so that the user is able to activate it.
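This planned adaptation loop could be sketched as follows. Since the monitoring is future work, everything here is our own illustration; the threshold and growth factor are invented values:

```python
# Illustrative sketch of the planned adaptation: after repeated failed
# attempts to activate a control, enlarge it. Constants are made up.
FAIL_THRESHOLD = 3      # failed activations before adapting
GROW_FACTOR = 1.25      # how much to enlarge the control each time

class AdaptiveControl:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.failed_attempts = 0

    def register_tap(self, hit):
        """Record a tap; enlarge the control after repeated misses."""
        if hit:
            self.failed_attempts = 0
            return
        self.failed_attempts += 1
        if self.failed_attempts >= FAIL_THRESHOLD:
            self.width = int(self.width * GROW_FACTOR)
            self.height = int(self.height * GROW_FACTOR)
            self.failed_attempts = 0

button = AdaptiveControl(width=80, height=40)
for _ in range(3):          # three misses in a row trigger one adaptation
    button.register_tap(hit=False)
print(button.width, button.height)  # 100 50
```

A real implementation would presumably also write the adapted values back into the cloud-stored user profile, so that the adjustment carries over to the user's other devices.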
References

1. Robotlegs: Robotlegs AS3 Micro Architecture (2013), http://www.robotlegs.org/
2. Apache: Flex (2013), http://flex.apache.org/
3. Petz, A., Radu, N., Lassnig, M.: CAPKOM – innovative graphical user interface supporting people with cognitive disabilities. In: Miesenberger, K., Karshmer, A., Penaz, P., Zagler, W. (eds.) ICCHP 2012, Part II. LNCS, vol. 7383, pp. 377–384. Springer, Heidelberg (2012)
4. Vanderheiden, G.C., Treviranus, J., Gemou, M., Bekiaris, E., Markus, K., Clark, C., Basman, A.: The evolving global public inclusive infrastructure (GPII). In: Stephanidis, C., Antona, M. (eds.) UAHCI 2013, Part I. LNCS, vol. 8009, pp. 107–116. Springer, Heidelberg (2013)
5. Wikipedia: Adobe Integrated Runtime (2014), http://en.wikipedia.org/wiki/Adobe_Integrated_Runtime
6. Microsoft: Windows Azure (2014), http://www.windowsazure.com/de-de/