Towards Standardization of User Models for Simulation and Adaptation Purposes

Kaklanis N.*1, Biswas P.2, Mohamad Y.3, Gonzalez M. F.4, Peissner M.5, Langdon P.2, Tzovaras D.1 and Jung C.6

1 Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
2 The University of Cambridge, Department of Engineering, UK
3 Fraunhofer FIT, 53754 Sankt Augustin, Germany
4 INGEMA, Spain
5 Fraunhofer IAO, Nobelstr. 12, 70569 Stuttgart, Germany
6 Fraunhofer IGD, Darmstadt, Germany

* Corresponding author: Tel.: 0030 2311 257751, Fax: 0030 2310 474128

Abstract. The use of user models can be very valuable when trying to develop accessible and ergonomic products and services that take into account users' specific needs and preferences. Simulating user-product interaction with user models may reveal accessibility issues at the early stages of design and development, which results in a significant reduction of costs and development time. Moreover, user models can be used in adaptive interfaces, enabling the personalized customization of user interfaces and thereby enhancing the accessibility and usability of products and services. This paper presents the efforts of the VUMS cluster of projects towards the development of an interoperable user model, able to describe both able-bodied people and people with various kinds of disabilities. The VUMS cluster consists of the VERITAS, MyUI, GUIDE and VICON FP7 European projects, all involved in user modelling from different perspectives. The main goal of the VUMS cluster is the development of a unified user model that can be used by all the participating projects and that can form the basis of a new user model standard. Currently, within the VUMS cluster, a common user model has been defined, and converters that enable the transformation from each project's specific user model to the VUMS user model and vice versa have been developed, thus enabling the exchange of user models between the projects.

Keywords. User modelling, virtual user model, simulation, adaptation, accessibility, usability

1. Introduction
In our everyday life we use plenty of gadgets offering a variety of services, especially electronic devices. Their enormous number of features often becomes overwhelming for older users or users with disabilities and may make devices unusable. Similarly, when providing accessibility features for digital devices, it is often difficult to select the appropriate way to provide them, and the issue becomes even more pertinent when selecting an appropriate accessibility device. At present there is no way of choosing appropriate accessibility options for different users and media except a case-by-case analysis, which is not a scalable approach. Furthermore, there is a gap between mainstream system designers and accessibility practitioners in terms of universal or inclusive design. Mainstream designers often regard inclusivity as another 'top-up' feature and underestimate its importance, while accessibility practitioners often work on a specific type of disability and application. This paper presents a concept of using user models both at design time and at run time to personalize a wide variety of applications for users with a wide range of abilities. User modelling provides a way of choosing an appropriate feature or service based on the user and the context of use. It is implemented as a simulator that models the effects of impairment during design, helping designers to visualize the problems faced by people with age-related or physical impairments. The user model is also used as a means of adapting interfaces at run time to further customize systems for different users. User models can be considered explicit representations of the properties of an individual user, including the user's needs and preferences as well as physical, cognitive and behavioural characteristics. Due to the wide range of applications, it is often difficult to agree on a common format, or even a common definition, of user models.

The lack of a common definition also makes different user models incompatible with each other, even when developed for the same purpose. This not only reduces the portability of user models but also prevents new models from leveraging the benefits of earlier research in the field. The present paper presents the concept of an interoperable user model and a set of prototype applications that demonstrate its interoperability between the different projects of the VUMS cluster. VUMS stands for "Virtual User Modelling and Simulation Standardisation". The cluster is formed by four projects (GUIDE, MyUI, VERITAS and VICON) funded by the European Commission and is partially based on the results of the VAALID project (http://www.vaalid-project.org/). The concept of user modelling has been explored in many different fields, such as ergonomics, psychology, pedagogy and computer science. However, it still lacks a holistic approach. Psychological models often need a lot of parameter tuning, which limits their use by non-experts [7], while ergonomic models often fail to model cognition [20]. Carmagnola and colleagues [12] presented an excellent literature survey on web-based user models but completely left out user models in human-computer interaction [37].

2. User modelling requirements
2.1 User modelling requirements from the simulation perspective
Traditionally, the needs of people with physical impairments, such as visual, hearing, and dexterity impairments, are often not considered sufficiently by industry when designing user interfaces for automotive systems, consumer products and so on. This is exacerbated by the fact that it is not unusual for an individual to have multiple impairments; for example, elderly people may experience hearing and sight loss as well as loss of dexterity. The terms 'inclusive design' and 'universal design' are now becoming more common within designers' vernacular, and there is a greater awareness of the value of inclusive design methodologies for both designer and end user, such as the user testing of product prototypes [73]. In practical terms, universal design methodologies must ideally complement the existing product design workflows of designers, or at the very least be as little disruptive as possible. Ideally, they should enhance how designers currently work and impose as little cognitive load as possible on the designer. Tools that support the tenets of universal design, that are not difficult to use, and that can plug into common existing tools are therefore desirable. For that purpose, simulation environments should be integrated within the design cycle. The main aim of simulation environments incorporated within a design cycle is to support the designer in producing inclusive designs. The simulation environments simulate the interaction of pre-configured virtual users with the virtual environment. For example, a disabled virtual user is the main "actor" of a simulation that aims to assess whether the virtual user is able to accomplish all necessary actions described in the Simulation Model, taking into account the constraints posed by the disabilities (as defined in the Virtual User Model). Simulation planning can be performed using inverse kinematics, while the dynamic properties of the human limbs (e.g. torques and forces) related to the corresponding actions (e.g. grasping) can be obtained using inverse dynamics. Simulation environments vary in their targets, power and granularity, and therefore their requirements on user models vary as well. The exchange of user profiles between models will be influenced in both directions by these parameters. The main issues to be considered are:

• Representation format of user profiles, e.g. RDF, XML (UsiXML), etc. Converters should be made available to convert profiles from one format into another.
• Granularity of the user profiles: one model can be more or less detailed than another. In this case, filters and extenders should be developed to allow automatic or semi-automatic adoption of user profiles.
• Some variables may be non-applicable in another simulation environment and vice versa. In this case, filters should be made available to remove from the user profile any variables that are not required.
• Similar variables may be included in different user models but may differ in crucial properties, e.g. they may not have the same range, measurement method, etc. Special conversion procedures should be developed to cope with this challenge.
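As a concrete illustration of the converter and filter machinery these issues call for, the following minimal Python sketch translates an XML profile into the variable set of a hypothetical target environment. The profile layout, variable names and conversion factors are illustrative assumptions, not part of any VUMS specification.

```python
# Illustrative sketch of a profile converter: format translation,
# filtering of non-applicable variables, and unit harmonisation.
# All variable names and conversion factors are hypothetical.

import xml.etree.ElementTree as ET

# Variables the target simulation environment understands, with the
# unit it expects for each (hypothetical examples).
TARGET_VARIABLES = {
    "wristFlexion": "deg",
    "visualAcuity": "logMAR",
}

UNIT_CONVERSIONS = {
    ("rad", "deg"): lambda v: v * 180.0 / 3.141592653589793,
}

def convert_profile(xml_text):
    """Parse a source profile, drop unknown variables, convert units."""
    root = ET.fromstring(xml_text)
    target = {}
    for var in root.iter("variable"):
        name, unit = var.get("name"), var.get("unit")
        if name not in TARGET_VARIABLES:
            continue                      # filter out non-applicable variables
        value = float(var.text)
        wanted = TARGET_VARIABLES[name]
        if unit != wanted:                # harmonise differing measurement units
            value = UNIT_CONVERSIONS[(unit, wanted)](value)
        target[name] = value
    return target

profile = ('<profile><variable name="wristFlexion" unit="rad">1.2</variable>'
           '<variable name="gripForce" unit="N">120</variable></profile>')
print(convert_profile(profile))  # gripForce is filtered out; radians become degrees
```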

2.2 User modelling requirements from the UI adaptation perspective
A common user profile and adaptation standard will make it possible to adapt user interfaces for multiple devices and platforms without requiring the end user or designer to work on each individual device. For example, an end user may use multiple devices such as a computer, a mobile phone and an electronic display in his/her car, but the level of visual acuity and the type of colour blindness described in his/her user profile can be used to fix the font size and colour contrast for user interfaces across all displays. Similarly, if the common user profile describes finger tremor, then all electronic interfaces (computer, tablet, mobile phone, etc.) can invoke adaptation algorithms to remove jitter in pointing movements. However, the adaptation use case differs from the simulation use case in that it must consider users' range of abilities while they are interacting with the system, whereas simulation happens at design time. Adaptation thus deals with real rather than simulated users, which has the following implications:
o User abilities may not be measured in as much detail as in simulation, as this would increase the response time of the system.
o User variables should have a direct consequence on interface and adaptation parameters.
o Adaptation should be customizable to individual users rather than to a group or type of users.
Therefore, user modelling requirements from the UI adaptation perspective include the following:
o A dimensionality reduction algorithm for the detailed user profiles developed for simulation purposes.
o A mapping mechanism between user variables and interface parameters.

2.3 User model sharing between simulation and adaptation frameworks
If a user profile from a simulation-oriented approach is to be transformed into a UI adaptation user modelling format, several user model variables related to simulation need to be interpreted and integrated into one variable related to adaptation (Figure 1). This process can be regarded as a reduction of dimensions, which is generally associated with a loss of information. From a technical point of view, this transformation is possible, but it requires suitable discrimination rules for the "translation" of detailed user information into coarser UI concepts and requirements. As the mapping between the two models is not bijective, a reverse transformation from the compressed to the extensive user model is not possible.
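To make the reduction and mapping concrete, here is a minimal sketch assuming hypothetical profile variables and threshold values; it is illustrative only and not the mapping used by any VUMS project.

```python
# Hypothetical mapping from user-profile variables to UI parameters.
# Threshold values and variable names are illustrative assumptions.

def adapt_interface(profile):
    """Derive concrete UI parameters from a (reduced) user profile."""
    ui = {}
    # Lower visual acuity -> larger fonts (cut-off values are assumed).
    acuity = profile.get("visualAcuity", 1.0)   # decimal acuity, 1.0 = normal
    ui["font_size_pt"] = 12 if acuity >= 0.8 else 16 if acuity >= 0.5 else 22
    # A colour-blindness entry selects a safe palette.
    ui["palette"] = "high_contrast" if profile.get("colourBlindness") else "default"
    # Finger tremor enables smoothing of pointing input.
    ui["pointer_smoothing"] = profile.get("tremorAmplitudeMm", 0.0) > 1.0
    return ui

print(adapt_interface({"visualAcuity": 0.4, "tremorAmplitudeMm": 2.5}))
# -> {'font_size_pt': 22, 'palette': 'default', 'pointer_smoothing': True}
```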

Figure 1 User Model sharing overview

Interesting use cases for the exchange of user models between simulation- and adaptation-oriented approaches include the following:
• Validate the effectiveness of UI adaptation mechanisms using simulation:
  o Select hypothetical user profiles as test cases.
  o Generate an adapted user interface in an adaptation framework.
  o Simulate the interaction between hypothetical users and the UI, and analyse accessibility problems.
  o Evaluate whether simulation yields significantly better accessibility results for the adapted UI than for the standard UI.
• Use frequent/representative user profiles developed for simulation purposes as stereotypes for UI adaptation:
  o Create/select user profiles which cover a majority of actual disabled users.
  o Transform hypothetical user profiles into individual user profiles.
  o Generate an adapted user interface in the adaptation framework.
  o Optimize the resulting adapted UIs.
  o Use the resulting user profiles and adapted UIs as stereotypes, in order to simplify and improve run-time user profiling and UI adaptation.

3. Related Work
3.1 Standards related to User Modelling
The existing standards related to user modelling provide guidance to ICT and non-ICT product and service designers on issues and design practices related to human factors. They aim to help designers and developers maximize the usability of products and services by providing a comprehensive set of human factors design guidelines and meta-models in machine-readable formats. Within the context of the VUMS cluster activities towards the development of interoperable and multipurpose user models, a comparative review of these standards has been performed, in order to understand their similarities and differences and to examine their potential use in the user modelling procedures of the cluster. Table 1 presents a comparison of the standards according to the following dimensions:
• Focus on accessibility: indicates if the standard focuses on people with special needs (provides guidelines for developing accessible products/services, analyses the special needs of people with disabilities, etc.).
• Tasks support: indicates if the standard introduces new task models or includes guidelines for developing task models.
• Workflows support: indicates if the standard introduces new workflow models or includes guidelines for developing workflows.
• Description of user needs/preferences: indicates if the standard describes user needs/preferences using models (meta-models, ontology schemas, UML class diagrams, etc.) or includes guidelines for covering user needs/preferences during the design and development of products and services. User needs/preferences include:
  o General interaction preferences
  o Interaction modality preferences
  o Multicultural aspects
  o Visual preferences
  o Audio preferences
  o Tactile/haptic related preferences
  o Date and time preferences
  o Notifications and alerts
  o Connectivity preferences
• Description of device characteristics: indicates if the standard describes device characteristics or provides guidelines to be followed during the design and development of input/output devices.
• Description of user characteristics: indicates if the standard describes user characteristics, including sensory abilities (seeing, hearing, touch, taste, smell, balance, etc.), physical abilities (speech, dexterity, manipulation, mobility, strength, endurance, etc.) and cognitive abilities (intellect, memory, language, literacy, etc.). A standard may include definitions of user characteristics, changes of these characteristics with age, analysis of user populations and their characteristics, etc.
• UI definition support: indicates if the standard provides guidelines for developing user interfaces or introduces a language for defining user interfaces.
• Guidelines: indicates if the standard provides guidelines/recommendations that have to be followed by designers and developers of products and services.
• Implementation: indicates if the standard provides meta-models, UML diagrams, ontology schemas, XML schemas, and machine-readable formats in general.

The standards compared in Table 1, each rated against the nine dimensions listed above, are: ETSI TS 102 747; ETSI ES 202 746; ISO/IEC 24751-1:2008; ISO/IEC 24751-2:2008; MARIA XML (multimodal); W3C Delivery Context Ontology; W3C CC/PP; the URC standard (ISO/IEC 24752); the IMS Access For All Personal Needs and Preferences Description for Digital Delivery Information Model (in July 2003, IMS released the IMS Learner Information Package Accessibility for LIP v1.0 and, in August 2004, Access For All Metadata v1.0; under agreement, these documents were adopted by ISO/IEC SC36, resulting in the publication, in 2008, of ISO/IEC 24751-1, ISO/IEC 24751-2 and ISO/IEC 24751-3); ETSI EG 202 116 (multimodal); ETSI TR 102 068; ETSI EG 202 325 (limited); BS EN 1332-4:2007; ISO 11228-2:2007; ISO/DIS 24502; XPDL; WHO ICF; WHO ICD; FMA; H-Anim; ANSUR; RULA; REBA; LUBA; OWAS; SNOOK; ISO/FDIS 9241-129:2010; and EMMA.

Table 1 Standards related to User Modelling – Comparison

3.2 Physical Modelling
In recent years, researchers have made significant progress towards the development of virtual humans by focusing their attention on biomechanical modelling. Efforts have been made in modelling various body parts, including the face [18][39], the neck [49], the torso [19], the hand [88], and the leg [43]. For instance, [71][45][28][44][14][32] deal with the biomechanical analysis of the human upper limb. In [31], an upper extremity (UE) model for application in stroke rehabilitation was constructed to accurately track the three-dimensional orientation of the trunk, shoulder, elbow, and wrist during task performance. Research has also focused on the lower human body. For example, [3] deals with the modelling of the human lower limbs, and [22] presents a three-dimensional mechanical model of the human body used to analyze kinetic features such as joint torques. Dealing with human gait analysis from a biomechanical perspective, [68][69][63][64][65] propose models that address the postural stability and balance control of young and older humans. Rao et al. [77] use a three-dimensional biomechanical model to determine the upper extremity kinematics of 16 male subjects with low-level paraplegia while performing wheelchair propulsion. Sapin et al. [79] compare the gait patterns of trans-femoral amputees using a single-axis prosthetic knee that coordinates ankle and knee flexions (Proteor's Hydracadence system) with the gait patterns of patients using other knee joints without a knee-ankle link and with the gait patterns of individuals with normal gait. In [76], spatio-temporal, kinematic, kinetic and EMG data, as well as the physiological changes associated with gait and aging, are reviewed in order to provide a gait analysis of older people. Coluccini et al. [16] assessed and analyzed the upper limb kinematics of normal and motor-impaired children, with the aim of proposing a kinematics-based framework for the objective assessment of the upper limb, including the evaluation of compensatory movements of both the head and the trunk. Ouerfelli et al. [62] applied two identification methods to study the kinematics of head-neck movements of able-bodied as well as neck-injured subjects; as a result, a spatial three-revolute-joint system was employed to model 3D head-neck movements.
The simulation of virtual humans can be a powerful approach to support engineers in the product development process. Virtual human modelling reduces the need for the production of real prototypes and can even make it obsolete [13]. In recent years, the research interest in using digital human modelling for ergonomics purposes has increased significantly [46]. Lamkull et al.

[48] performed a comparative analysis of digital human modelling simulation results and their outcomes in the real world. The results of the study showed that ergonomic digital human modelling tools are useful for providing designs of standing and unconstrained working postures. The use of virtual humans and simulation in the automotive industry has also shown great potential. Porter et al. [74] present a summary of applications of digital human models in vehicle ergonomics during the early years of personal computers. Existing tools and frameworks provide designers with the means to create virtual humans with different capabilities and to use them for simulation purposes. DANCE [82], for instance, is an open framework for computer animation research focusing on the development of simulations and dynamic controllers, unlike many other animation systems, which are oriented towards geometric modelling and kinematic animation. SimTk's OpenSim [83] is a freely available, user-extensible software system that lets users develop models of musculoskeletal structures and create dynamic simulations of movement. Another tool using virtual environments for ergonomic analysis is VR ANTHROPOS [2], which simulates the human body in the virtual environment realistically and in real time. There are also many tools, such as JACK [72], RAMSIS [54], SAMMIE [75], HADRIAN [52], SIMTER [50], Safework [25] and Santos [89], offering considerable benefits to designers looking to design for all, as they allow the evaluation of a virtual prototype using virtual users with specific abilities. A list of software tools for ergonomics analysis is reported in [23]. RAMSIS and JACK are the most popular accessibility design software packages, focusing on the automotive industry. Both RAMSIS and JACK have anthropometric data sets based on measurements taken from healthy and able-bodied groups. Even though significant effort has been devoted to physical user modelling and many tools have been developed that use virtual humans for simulation purposes, there is no widely accepted formal way to describe virtual users that is also able to describe users with special needs and functional limitations, such as the elderly and users with disabilities. The present paper presents the VUMS cluster approach towards the development of a common, interoperable and multipurpose user model covering a large set of human aspects (physical, cognitive, etc.).

3.3 Cognitive modelling
Research on simulating user behaviour to predict machine performance originally started during the Second World War, when researchers tried to simulate operators' performance to explore their limitations while operating different military hardware. During the same period, computational psychologists were trying to model the mind by considering it as an ensemble of processes or programs. McCulloch and Pitts' model of the neuron and subsequent models of neural networks, and Marr's model of vision, are two influential works in this discipline. Boden [8] presents a detailed discussion of such computational mental models. In the late 1970s, as interactive computer systems became cheaper and accessible to more people, modelling human-computer interaction (HCI) also gained much attention. However, models like Hick's Law [30] or Fitts' Law [24], which predict choice reaction time and movement time respectively, were individually not enough to simulate a whole interaction.
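For reference, the two laws are commonly stated as follows, where a and b are empirically fitted constants, n is the number of equally likely choices, and D and W are the distance to the target and its width:

```latex
% Hick's law: choice reaction time for n equally likely alternatives
RT = a + b \log_2 (n + 1)

% Fitts' law (original 1954 formulation [24]): time to move to a
% target of width W at distance D
MT = a + b \log_2 \left( \frac{2D}{W} \right)
```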
The Command Language Grammar [55], developed by Moran at Xerox PARC, can be considered the first HCI model. It took a top-down approach to decompose an interaction task and gave a conceptual view of the interface before its implementation. However, it completely ignored the human aspect of the interaction and did not model the capabilities and limitations of users. Card, Moran and Newell's Model Human Processor (MHP) [11] was an important milestone in modelling HCI, since it introduced the concept of simulating HCI from the perspective of users. It gave birth to the GOMS family of models [11], which are still the most popular modelling tools in HCI. Allen Newell [56] developed the SOAR (State Operator And Result) architecture as a possible candidate for his unified theories of cognition. According to Newell [56] and Johnson-Laird [38], the vast variety of human response functions for different stimuli in the environment can be explained by a symbolic system. The SOAR system therefore models human cognition as a rule-based system, and any task is carried out by a search in a problem space. The heart of the SOAR system is its chunking mechanism. Chunking is "a way of converting goal-based problem solving into accessible long-term memory (productions)" [56]. It operates in the following way: during a problem-solving task, whenever the system cannot determine a single operator for achieving a task, and thus cannot move to a new state, an impasse is said to occur. An impasse models a situation where a user does not have sufficient knowledge to carry out a task. At this stage, SOAR explores all possible operators and selects the one that brings it nearest to the goal. It then learns a rule that can solve a similar situation in future. Other studies have successfully explained the power law of practice through the chunking mechanism. However, there are certain aspects of human cognition (such as perception, recognition, and motor action) that can better be explained by a connectionist approach than a symbolic one [60]. It is believed that conscious processes initially control our responses to any situation, while after sufficient practice automatic processes take charge of the same set of responses [29]. Lallement and Alexandre [47] classified all cognitive processes into synthetic or analytical processes. Synthetic operations are concerned with low-level, non-decomposable, unconscious, perceptual tasks. In contrast, analytical operations signify high-level, conscious, decomposable reasoning tasks. From the modelling point of view, synthetic operations can be mapped onto connectionist models, while analytic operations correspond to symbolic models. Considering these facts, the ACT-R (Adaptive Control of Thought-Rational) system [1] does not follow the pure symbolic modelling strategy of SOAR; rather, it was developed as a hybrid model with both symbolic and sub-symbolic levels of processing. At the symbolic level, ACT-R operates as a rule-based system. It divides long-term memory into declarative and procedural memory. Declarative memory is used to store facts in the form of 'chunks', and procedural memory stores production rules. The system works to achieve a goal by firing appropriate productions from the production memory and retrieving relevant facts from the declarative memory. The variability of human behaviour is modelled at the sub-symbolic level: the long-term memory is implemented as a semantic network, and both the retrieval time of a fact and conflict resolution among rules are calculated from the activation values of the nodes and links of the semantic network. The EPIC (Executive-Process/Interactive Control) architecture [42] pioneered the incorporation of separate perception and motor behaviour modules in a cognitive architecture. It mainly concentrates on modelling users' capability to perform multiple tasks simultaneously. It also inspired the ACT-R architecture to install separate perception and motor modules, leading to the ACT-R/PM system. A few examples of their usage in HCI are the modelling of menu-searching and icon-searching tasks [33][9]. The CORE system (Constraint-based Optimizing Reasoning Engine) [34][87][21] takes a different approach to modelling cognition. Instead of a rule-based system, it models cognition as a set of constraints and an objective function. Constraints are specified in terms of the relationships between events in the environment, tasks and psychological processes. Unlike the other systems, it does not execute a task hierarchy; rather, predictions are obtained by solving a constraint satisfaction problem. The objective function of the problem can be tuned to simulate the flexibility of human behaviour. There are additional cognitive architectures (such as Interacting Cognitive Subsystems [5], Apex, DUAL, CLARION [15], etc.), but they are not yet as extensively used as the previously discussed systems.
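The declarative/procedural split described above can be illustrated with a deliberately tiny toy example; this is not ACT-R itself, which adds sub-symbolic activations, retrieval latencies and conflict resolution on top of such a rule cycle.

```python
# Toy illustration of the declarative/procedural split: facts live in
# declarative memory, condition->action pairs in procedural memory.
# All content here is hypothetical and vastly simplified.

declarative = {("phone", "unlock"): "swipe up"}   # facts ("chunks")

productions = [  # procedural memory: (condition, action) pairs
    (lambda goal, mem: goal in mem, lambda goal, mem: mem[goal]),
    # Fallback when no knowledge matches, loosely analogous to an impasse:
    (lambda goal, mem: True,        lambda goal, mem: "explore interface"),
]

def solve(goal, memory):
    """Fire the first production whose condition matches the goal."""
    for condition, action in productions:
        if condition(goal, memory):
            return action(goal, memory)

print(solve(("phone", "unlock"), declarative))  # -> 'swipe up'
print(solve(("tv", "mute"), declarative))       # -> 'explore interface'
```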
Table 2 presents a comparative analysis of the aforementioned cognitive modelling approaches according to the following criteria:
• Fidelity: signifies how detailed the model is. A high-fidelity model simulates human behaviour using more detailed psychological theories than a low-fidelity model.
• Ease of use: signifies the usability of the model itself.
• Perception and motor action models: signify whether the model has separate modules to model perception and motor action in detail. These modules are important for simulating the performance of visually and mobility impaired users.
• Supporting disability: signifies whether the model is used to simulate the performance of people with disabilities.
• Validation for disabled users: indicates whether the model has been validated through user trials involving people with disabilities.

                               GOMS   SOAR       ACT-R/PM   EPIC       CORE       COGTOOL       AVANTI     EASE   SUPPLE     COSPAL
Fidelity                       Low    High       High       High       High       Low           Low        Low    High       High
Ease of use                    Easy   Difficult  Difficult  Difficult  Difficult  Easy          Not known  Easy   Not known  Easy
Perception and motor models    No     No         Yes        Yes        No         Yes           No         Yes    Yes        Yes
Supporting disability          Yes    No         Yes        No         No         Just started  Yes        Yes    Yes        No
Validation for disabled users  Yes    No         No         No         No         No            Yes        No     Yes        No

Table 2 Cognitive modeling approaches comparison

3.4 Modeling users with disabilities
There is not much reported work on the systematic modelling of people with disabilities. McMillan [53] felt the need to use HCI models to unify different research streams in assistive technology, but his work aimed to model the system rather than the user. The AVANTI project [85][86] modelled an assistive interface for a web browser based on static and dynamic characteristics of users. The interface is initialized according to the static characteristics of the user (such as age, expertise, type of disability and so on). During interaction, the interface records the user's interaction and adapts itself based on his/her dynamic characteristics (such as idle time, error rate and so on). This model is based on a rule-based system and does not address the basic perceptual, cognitive and motor behaviour of users, so it is hard to generalize to other applications. A few researchers have also worked on basic perceptual, cognitive and motor aspects. The EASE tool [51] simulates the effects of interaction for a few visual and mobility impairments; however, the model has been demonstrated only for a sample application using word-prediction software and has not yet been validated for basic pointing or visual search tasks performed by people with disabilities. Keates and colleagues [41] measured the difference between able-bodied and motor-impaired users with respect to the Model Human Processor (MHP) [11], and motor-impaired users were found to have a greater motor action time than their able-bodied counterparts. The finding is obviously important, but the KLM model itself is too primitive to model complex interaction, and especially the performance of novice users. Gajos, Wobbrock and Weld [27] developed a model to estimate the pointing time of disabled users by selecting a set of features from a pool of seven functions of movement amplitude and target width, and then using the selected features in a linear regression model. This model shows interesting characteristics of movement patterns among different users but fails to provide a single model for all: the movement patterns of different users were found to fit different functions of target distance and width. Serna and colleagues [81] used the ACT-R cognitive architecture [1] to model the progression of dementia in Alzheimer's patients. They simulated the loss of memory and the increase in errors for a representative kitchen task by changing different ACT-R parameters. The technique is interesting, but their model still needs rigorous validation through other tasks and user communities. Quade's user model simulates perception and motor action using a probabilistic rule-based system and does not model the effects of sensory and motor impairments in sufficient detail.

3.5 Adaptation Models in Computing
The role of user modelling in adaptive systems is best understood by referring to the general concept of adaptive systems based on the work of Jameson [36] and Oppermann [61] (summarized by [90]). They describe three functional components as the technical basis of an adaptive system:
1. Afference: the collection of observational data about the user.
2. Inference: creating or updating a user model based on that data.
3. Efference: deciding how to adapt the system behaviour.
Typically, context management integrates the stages of afference and inference, with the goal of providing a useful representation of contextual information as a basis for an effective adaptation behaviour (efference).
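A minimal sketch of this three-stage cycle, with placeholder observation and adaptation logic, might look as follows; the variables and thresholds are illustrative assumptions.

```python
# Minimal sketch of the afference -> inference -> efference cycle.

def afference(events):
    """Collect observational data about the user (here: an error rate)."""
    errors = sum(1 for e in events if e == "error")
    return {"error_rate": errors / max(len(events), 1)}

def inference(model, observations):
    """Create or update the user model based on the observed data."""
    model["error_rate"] = observations["error_rate"]
    return model

def efference(model):
    """Decide how to adapt the system behaviour (threshold is assumed)."""
    return {"enlarge_targets": model["error_rate"] > 0.2}

model = {}
obs = afference(["click", "error", "click", "error", "error"])
print(efference(inference(model, obs)))  # -> {'enlarge_targets': True}
```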
The provided context information is stored in a user and context model (called a "profile" for a specific instantiation of the model). In adaptive systems, the information stored in the involved models typically reflects the specific adaptation targets and facilities of the system. For example, systems which automatically generate user interfaces for multiple devices require information about the device-specific constraints and capabilities. Examples include the Personal Universal Controller (PUC) [57], as used in UNIFORM [58] and HUDDLE [59], and MARIA [67] with the earlier TERESA [66]. Kane, Wobbrock and Smith describe an adaptive user interface that changes its layout when the user is moving or walking; their profile includes situational factors such as walking speed, distractions and locations [40]. SUPPLE uses a device model to describe the capabilities and limitations of the technical platform and a usage model to represent relevant user preferences and physical abilities [26]. In contrast to most other model-based adaptive user interfaces, SUPPLE does not rely on simple adaptation rules defining specific design solutions for different conditions in the user and context profile; instead, its adaptation mechanisms are based on optimizing a cost function which describes the interaction effort of different design solutions for the current user and context conditions (cf. [26]).
Most adaptive systems concentrate on one specific purpose of adaptation: individual user needs and disabilities, multiple devices and modalities, or context conditions. Adaptive systems which aim at increased accessibility in the sense of ISO 9241-171 [35], however, will need to address all three factors in one approach. The conceptual framework of PLASTIC USER INTERFACES takes such an extensive perspective. The authors define user interface plasticity as "the capacity of user interfaces to adapt or to be adapted to the context of use while preserving usability" (cf. [17], [84]). They interpret the context of use as a structured information space including a user model, a model of the social and physical environment, and a model of the technical platform used ([84], [10]). While the conceptual framework covers all relevant aspects of accessibility, the detailed elaborations, as well as the described scenarios and implemented demonstrators, concentrate exclusively on the comfortable use of multiple technical devices and complex interaction spaces. The technical reference model CAMELEON-RT also clearly points to the original and main application field of ubiquitous computing [4]. Mechanisms for adaptation to different user needs and abilities are not addressed in detail in any of the publications concerning PLASTIC USER INTERFACES. In summary, there are some conceptual frameworks, such as the UNIFIED USER INTERFACE DESIGN [80] in AVANTI [86], ASK-IT [78] and the PLASTIC USER INTERFACES [17], which outline a suitable infrastructure for model-based adaptive user interfaces for increased accessibility. Their concrete implementations and demonstrators, however, cover only parts of an extensive context modelling approach. MyUI is one of the first systems to cover adaptations to diverse user needs, devices and environmental conditions during run time in generated user interfaces for increased accessibility [70]. Another important aspect of user modelling systems is their interoperability. From an end-user perspective, the use of one user profile in different technical environments is not only desirable but an absolute requirement for future real-world applications: it would not be acceptable to maintain multiple proprietary user profiles for all the different applications of our everyday life. The current EU project Cloud4All (http://cloud4all.info/) addresses this issue through cloud technologies, with a user profile stored in the cloud serving as the basis for personalized user interfaces in diverse products and services. However, interoperability with other systems is still not resolved; most approaches from academic research have focused on developing their own working environment, ending up with closed systems.
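Returning to the optimization-based approach mentioned above, the following sketch illustrates adaptation by cost minimisation in the spirit of SUPPLE: candidate renderings are scored by an effort function for the current user instead of being selected by if-then rules. The cost terms are invented for illustration and are not SUPPLE's actual function (see [26]).

```python
# Adaptation by cost minimisation: pick the candidate rendering with
# the lowest estimated interaction effort. Cost terms are hypothetical.

candidates = [
    {"widget": "slider",  "target_size_mm": 4},
    {"widget": "spinner", "target_size_mm": 8},
    {"widget": "list",    "target_size_mm": 12},
]

def cost(ui, user):
    # Smaller targets are costlier for users with low pointing precision.
    pointing_cost = user["pointing_error_mm"] / ui["target_size_mm"]
    # Larger widgets consume more of the available screen space.
    space_cost = ui["target_size_mm"] / user["screen_size_mm"]
    return pointing_cost + space_cost

user = {"pointing_error_mm": 6.0, "screen_size_mm": 90.0}
print(min(candidates, key=lambda ui: cost(ui, user)))  # picks the 'list' rendering
```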
Interoperability between different approaches had never been addressed before the VUMS cluster. This deficit also becomes obvious from the fact that most of the above-mentioned approaches have never published their user models in terms of structure, format, and content. A standard format for user models for adaptive and adaptable systems would mark a significant step towards the mainstreaming of model-based accessible products and services.

4. VUMS Cluster Standardisation Activities
4.1 Purpose
The VUMS cluster of projects aims to lay the foundations for the development of a user modelling methodology targeting mainly people with disabilities as well as elderly people. More specifically, the VUMS cluster aims to develop:
• a standard user model able to describe in detail older people and people with various types of disabilities;
• a common data storage format for user profiles;
• common calibration/validation techniques;
• collaboration on ethical issues;
• sustainability, by making the results available within a standard.

The main goal of the proposed methodology is to cover the need for the definition of user characteristics, needs and preferences within simulation and adaptation frameworks. More specifically, the proposed user models are intended for two major application areas:
• simulating user behaviour during interaction with products and services, in order to reveal accessibility and ergonomic problems of the designs;
• adapting interfaces to cater to users with a wide range of abilities.
The VUMS user modelling methodology is based on existing standards related to human factors, accessibility, ergonomics, user interface design, interface description languages and task modelling techniques.

4.2 Summary of VUMS Approach
The VUMS cluster adopted the following approach in order to develop an interoperable user model:
1. Definition of a common vocabulary, to avoid confusion among terms like user model, user profile, simulation, adaptation, etc.
2. Description of the terms in accordance with the existing standards.
3. Definition of a set of user characteristics covering the physical, perceptual, cognitive and motor abilities of users with a wide range of abilities.
4. Definition of a VUMS Exchange Format to store these characteristics in machine-readable form. The VUMS Exchange Format allows the exchange of user models between the projects of the VUMS cluster, as depicted in Figure 2. More specifically, as the VUMS Exchange Format contains a superset of the user variables (defined in step 3), any user model expressed in a project-specific format can be transformed into a model following the VUMS Exchange Format.

Figure 2 VUMS Exchange Format

5. Development of a set of converters able to transform a user profile following the VUMS Exchange Format into each project's specific user model and vice versa (Figure 3). As the VUMS Exchange Format includes the superset of the variables contained in the user models of all projects of the VUMS cluster, the transformation of a project-specific user model into a VUMS user model is straightforward. Conversely, during the transformation of a VUMS user model into a project-specific user model, some information may be lost, as some variables included in a VUMS user model may not be included in the project-specific user model.
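A minimal sketch of the two transformation directions, with hypothetical variable names, makes this asymmetry explicit.

```python
# Sketch of the two conversion directions described above.
# Variable names are hypothetical.

VUMS_SUPERSET = {"wristPronation", "wristSupination", "visualAcuity",
                 "stepLength", "hearingThreshold500Hz"}

PROJECT_SUBSET = {"wristPronation", "wristSupination", "visualAcuity"}

def to_vums(project_profile):
    """Project-specific -> VUMS: straightforward, every variable fits."""
    return {k: v for k, v in project_profile.items() if k in VUMS_SUPERSET}

def to_project(vums_profile, subset=PROJECT_SUBSET):
    """VUMS -> project-specific: variables outside the subset are lost."""
    kept = {k: v for k, v in vums_profile.items() if k in subset}
    dropped = set(vums_profile) - set(kept)
    return kept, dropped  # report the information loss explicitly

profile = {"wristPronation": 70, "stepLength": 0.6, "visualAcuity": 0.8}
print(to_project(profile))  # stepLength cannot be represented and is dropped
```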

Figure 3 VUMS Converters

4.3 Glossary of Terms
As a first step towards the standardisation of user models, the VUMS cluster has defined a Glossary of Terms to support a common language. Its scope and contexts of use are the adaptation of human-machine interfaces to the needs of the real user and the simulation of the interaction between a human and a product during the design phase. The definitions given in the glossary are based on the literature.

User Model. An (abstract) user model is a set of user characteristics required to describe the user of a product. The characteristics are represented by variables, and the user model is established by the declaration of these variables. It is formally described in a machine-readable and human-readable format. An instantiation of the user model is a user profile.

User Profile. A user profile is an instantiation of a user model representing either a specific real user or a representative of a group of real users. It is formally described in a machine-readable and human-readable format compatible with the (abstract) user model it instantiates.

Virtual User. A virtual user is a representation of a user based on a user profile. The virtual user exists in computer memory during the run time of an application. It includes components which are able to interact with other virtual entities, e.g. virtual products or software applications. Virtual users intended for simulation purposes represent the human body as, for example, a kinematic system: a series of links connected by rotational degrees of freedom (DOF) that collectively represent musculoskeletal joints such as the wrist, elbow, vertebra, or shoulder. The basic skeleton of the model is usually described in terms of kinematics. In this sense, a human body is essentially a series of links connected by kinematic revolute joints. Each DOF corresponds to one kinematic revolute joint, and these revolute joints can be combined to model various musculoskeletal joints.

Environmental Model. An environmental model is a formal machine-readable set of characteristics used to describe the use environment. It includes all required contextual characteristics besides the user model, the interaction model, the device model, the product and the related user tasks.

Device Model. A device model is a formal machine-readable representation of the features and capabilities of one or several physical components involved in user interaction. It is important to carefully discriminate between the user model and the device model, as they are two distinct kinds of models; too often they are conflated, with device properties sprinkled into user profiles and vice versa. The device model expresses the capabilities of the device: a given device can be used by many different users, and a given user could use different devices. By carefully separating the functionalities of device modelling and user modelling in design scenarios, it becomes easier to enumerate the attributes of each model and from them develop the matching function and attributes of the adaptation process.
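The link/revolute-joint body representation in the definition of a virtual user above can be illustrated with a minimal planar two-link chain; real virtual users employ three-dimensional chains with many more degrees of freedom, and the link lengths and angles below are arbitrary example values.

```python
# Minimal forward kinematics for a planar two-link "arm": two revolute
# joints (two DOF) determine the end-effector (hand) position.

from math import cos, sin, radians

def end_effector(theta1_deg, theta2_deg, l1=0.30, l2=0.25):
    """Hand position for given shoulder/elbow angles (lengths in metres)."""
    t1, t2 = radians(theta1_deg), radians(theta2_deg)
    x = l1 * cos(t1) + l2 * cos(t1 + t2)
    y = l1 * sin(t1) + l2 * sin(t1 + t2)
    return x, y

# A reduced elbow range of motion (a user-model constraint) limits reach:
print(end_effector(30, 45))   # unconstrained posture
print(end_effector(30, 10))   # posture with restricted elbow flexion
```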


User Agent. A user agent is any end-user software (such as a browser or another user interface component) that can retrieve and render application content and invoke requests to the User Agent Capabilities Model to modify the application content.

User Agent Capabilities Model. A User Agent Capabilities Model is a formal machine-readable representation of the capabilities of the user agent related to user interaction.

Application Model. An application model is a formal machine-readable representation of the states, transitions and functions of the application.

User Interaction Model. The interaction model is a machine-readable representation of the interaction behaviour of an application. The interaction model is kept UI-agnostic, which means it is independent of the concrete format of user interface output and input data. The interaction model is often also referred to as an abstract user interface model, as in, for example, UIML, UI Socket, XForms, etc. It should be noted that the interaction model can be used both for the adaptation of human-machine interfaces (HMI) and for simulating the use of an application/product with a virtual user.

Context Model. A context model is a machine-readable representation of information that can be used to characterize the situation of an entity. An entity is a person, a place, a device, or a product that is considered relevant to the interaction between a user and an application, including the user and the application themselves.

Simulation. Simulation is the process that enables the interaction of the virtual user with the application model within an artificial environment. The simulation can be real-time or off-line. Real-time simulation can be performed autonomously or manually, where the operator interacts with the environment from a first- or third-person perspective. Accessibility assessment and evaluation can be performed automatically or subjectively by the operator.

User Model/Profile Validation. User models are always simplified descriptions of the user. Validation is the process of determining whether the model is an appropriate representation of the user for a specific application. If the model is mathematical, it needs a statistical validation process; if the model is non-mathematical, it should be validated through qualitative processes. The type, process and metrics of validation can be standardized.

Adaptive User Interfaces. Adaptive user interfaces are user interfaces that adapt their appearance and/or interaction behaviour to an individual user according to a user profile. In contrast to adaptable user interfaces, which are modified by a deliberate and conscious choice of the user, adaptive user interfaces automatically initiate and perform changes according to an updated user profile.

User Interface Design Pattern. A user interface design pattern is an approved user interface solution to a recurring design problem, given in a formalized description. For use in adaptive user interfaces, design patterns have a representation in the form of reusable software components which can be assembled into complete user interfaces at run-time.

5. VUMS User Model
5.1 Definition
When defining users and user profiles, all VUMS cluster projects started from the description of certain characteristics of the user. In order to create a user model useful for both simulation and adaptation, a main question has to be answered first: which characteristics describe the user in a certain use context? In order to work with these characteristics, by measuring them or calculating with them, a formal description as mathematical variables is a natural and rather compelling approach. This means that the generic user model can be described as a set of variables describing the user adequately for a certain use case or application. In order to work appropriately with variables, each variable in the set needs a precise definition, a way to express it in numbers, and a unit of measure relating it to physical reality. A standard for user models should include this definition and approach. This section explains the structure of the VUMS user model. In short, we defined a set of parameters (the full list is available at https://docs.google.com/spreadsheet/ccc?key=0AnAwpf4jk8LSdDd3TEJWLUtmN290YzVfTkNvcHYyMUE&authkey=CPOO65oE) through a set of descriptors, categorized them following a taxonomy, and defined a syntax to represent them in both human- and machine-readable form.

5.1.1 Taxonomy of variables
The categories of the user variables' taxonomy are the following:

• Anthropometrics: physical dimensions, proportions, and composition of the human body (e.g. weight, stature, etc.)
• Motor parameters: parameters concerning the motor function of the human body
  o Gait parameters: parameters concerning human gait (e.g. step length, step width, etc.)
  o Upper body parameters: parameters concerning the human upper limbs (e.g. wrist flexion, etc.)
  o Lower body parameters: parameters concerning the human lower limbs (e.g. hip extension, etc.)
  o Head and neck parameters: parameters concerning the human head and neck (e.g. lateral bending, etc.)
  o Spinal column parameters: parameters concerning the spinal column (e.g. spinal column flexion, etc.)
• Strength parameters: parameters concerning human strength (e.g. maximum gripping force of one hand, etc.)
• Dexterity/control parameters: parameters concerning the motor skills of hands and fingers
• Affective parameters: parameters concerning human emotions (e.g. anger, disgust, etc.)
• Interaction-related states: parameters concerning the human body's response to situations of physical or emotional pressure (e.g. stress, fatigue, etc.)
• Hearing parameters: parameters concerning hearing (e.g. hearing thresholds at specific frequencies, etc.)
• Visual parameters: parameters concerning vision (e.g. visual acuity, colour perception, etc.)
• Cognitive parameters: parameters related to the information-processing abilities of humans, including perception, learning, remembering, judging and problem-solving (e.g. working memory capacity, etc.)
• Equilibrium: parameters concerning the sense of balance
• Others: parameters that cannot be included in the aforementioned categories

5.1.2 Descriptors for variables
In order to describe a virtual human in detail, the following properties are defined for each user model variable (a data-structure sketch follows the list):
• Name: the name of the variable.
• ID/tag: the tag to be used for defining the specific variable in a user profile.
• Description/definition: a description/definition of the variable.
• Unit: the measurement unit of the variable.
• Value space: the value space of the variable (nominal, ordinal, interval, ratio, absolute).
• Taxonomy/super categories: refers to the categories described in the previous section.
• Data type: the data type of the variable (character/string, enumeration, list/vector, integer, float, set).
• How to measure/detect: refers to the techniques/devices used to measure the value of the variable (e.g. goniometer, tape measure method, etc.).
• Reference/source: literature references where information regarding the variable can be found.
• Relations: statistical correlation to other variables, function of other variables, dependency on other variables.
• Source project: the name of the VUMS cluster project that introduced the variable.
• Supported/used by project: the name(s) of the VUMS cluster project(s) that use the variable in their user profiles.
• Comment: comments concerning the variable (status, cross-references, etc.).
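As an illustration, the descriptors above could be carried by a record such as the following. The field names mirror the list; the example values are partly taken from the paper's own examples (wrist flexion, goniometer) and otherwise hypothetical.

```python
# Possible in-memory record for one user-model variable, mirroring the
# descriptor list above. The id_tag and example values are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VumsVariable:
    name: str
    id_tag: str
    description: str
    unit: str
    value_space: str          # nominal, ordinal, interval, ratio, absolute
    taxonomy: str             # category from section 5.1.1
    data_type: str            # string, enumeration, list, integer, float, set
    how_to_measure: str
    references: List[str] = field(default_factory=list)
    relations: List[str] = field(default_factory=list)
    source_project: str = ""
    used_by_projects: List[str] = field(default_factory=list)
    comment: str = ""

wrist_flexion = VumsVariable(
    name="Wrist flexion",
    id_tag="wristFlexion",            # hypothetical tag
    description="Active flexion of the wrist joint",
    unit="degree",
    value_space="ratio",
    taxonomy="Motor parameters / Upper body parameters",
    data_type="float",
    how_to_measure="goniometer",
    source_project="VERITAS",
)
```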

5.2 Implementation – VUMS Exchange Format
A prerequisite for achieving interoperability between user models is a clear understanding of the abstract user model behind the communicated data. Coded, machine-readable user model representations tend to be rather specific to a problem and use case; the abstract user model behind the code is not explicitly defined but only implicitly contained. In order to make user models and user profiles interoperable, they should therefore be provided together with the underlying abstract user model. Thus, an abstract user model definition that is both human- and machine-readable is required, which can be delivered with a user model.


Given that an abstract user model can be seen as a set of variables or parameters which describe the human resources needed to fulfill an interaction task, together with their definitions, an abstract user model can be defined by a set of parameters together with their descriptors. The following tree structure (Table 3) shows how this can be illustrated graphically.

Abstract user model
|__ Name
|__ Parameter
|   |__ Name
|   |__ Category
|   |__ Definition
|   |__ Reference to sources
|   |__ Data type in computer terms (integer, float, ...)
|   |__ Type of scale in empirical terms (nominal, ordinal, ...)
|   |__ Dimension, physical unit
|   |__ Range
|   |   |__ Minimum
|   |   |__ Maximum
|   |__ Test code / measuring instructions
|   |__ Relation to other parameters
|       |__ Specification of relation
|       |__ Correlation
|       |__ Covariance
|__ Parameter ...

Table 3 Abstract user model generic structure

Thus, an abstract user model consists of a name and a set of parameters, which are specified by a number of descriptors. The descriptors are partly human-readable information intended to make the user model understandable (such as names, definitions, units, measuring instructions, references to more information, and the level of scale of the data), but they also include machine-readable information such as the data type. There are two principles behind the design of the notation above:
1. Semi-structured document
2. Tag-oriented definitions
Semi-structured means that the format stands between a strictly data-centred format, which focuses on machine-readability, and a document-centred format, which is optimised for human-readability. On the one hand, all items are clearly labelled by standardised tags; on the other hand, there is flexibility and freedom in defining the content/values between the tags. Tag-oriented definitions means that each item is described by enclosing its content/value between named tags. A competing approach would be to write the information as attributes of an element (e.g. encoding the range of motion as an attribute of the parameter element); however, this format is not preferred in general. For the definition of the VUMS user model in machine-readable format, the VUMS Exchange Format has been developed (the complete UML class diagram of the VUMS Exchange Format can be found at http://160.40.50.183/VUMS/VUMSExchangeFormat.jpg). Figure 4 presents the UML class diagram that describes the main containers of the VUMS Exchange Format. UML class diagrams describing each container in detail can be found in Appendix 2. A complete VUMS user profile in the VUMS Exchange Format can be found in Appendix 1.
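To make the two principles concrete, the fragment below shows how a single parameter might be encoded in such a tag-oriented style, and how the same definition remains machine-readable. The element names are illustrative assumptions; the normative structure of the VUMS Exchange Format is given by the UML class diagrams and the appendices referenced above.

```python
# Illustrative tag-oriented encoding of one parameter, following the
# generic structure of Table 3. Element names are assumptions, not the
# normative VUMS Exchange Format schema (see Figure 4 and Appendix 2).

import xml.etree.ElementTree as ET

FRAGMENT = """
<parameter>
  <name>Wrist flexion</name>
  <category>Motor parameters/Upper body parameters</category>
  <definition>Active flexion of the wrist joint</definition>
  <dataType>float</dataType>
  <scale>ratio</scale>
  <unit>degree</unit>
  <range><minimum>0</minimum><maximum>90</maximum></range>
  <measuringInstructions>goniometer</measuringInstructions>
</parameter>
"""

param = ET.fromstring(FRAGMENT)
# Both humans and machines can read the same definition:
print(param.findtext("name"), param.findtext("unit"))
print(param.find("range").findtext("maximum"))
```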


Figure 4 VUMS Exchange Format main containers – UML Class diagram

5.3 Exchanging user profiles between VUMS projects
The proposed VUMS Exchange Format includes a large set of variables describing various human characteristics; this set is in fact the superset of all user models used in the projects of the VUMS cluster. A set of converters allowing the transformation of a project-specific user model to a VUMS user model and vice versa has been developed. These converters enable the sharing of user models between the projects of the VUMS cluster. For example, the VERITAS project investigates automobile interface design and stores anthropometric details of users, including the ranges of motion of different joints, in VERITAS user models. The GUIDE project, on the other hand, develops adaptable interfaces for digital TV and uses the active range of motion of the wrist to predict movement time when simulating interaction [6]. Using the converters, the GUIDE framework reads the values of pronation and supination from a VERITAS profile stored in the VUMS Exchange Format and uses them to derive the active range of motion of the wrist. Similar case studies may also include other variables (visual, hearing, etc.) and the VICON and MyUI projects. Currently, all VUMS projects can import profiles from one another, and thus the simulation of interaction in different application domains (automobile, mobile phone, digital TV, adaptive interfaces) is achievable for any user profile.
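A sketch of the kind of derivation this enables is given below. The variable names and the combination rule are illustrative assumptions; the actual GUIDE movement-time model is described in [6].

```python
# Sketch of deriving a GUIDE input from VERITAS variables read out of a
# VUMS profile. The combination rule and scaling are illustrative
# assumptions, not the published GUIDE computation (see [6]).

vums_profile = {"wristPronation": 70.0, "wristSupination": 60.0}  # degrees

def active_wrist_rom(profile):
    """Combine forearm rotation angles into one range-of-motion value."""
    return profile["wristPronation"] + profile["wristSupination"]

rom = active_wrist_rom(vums_profile)              # 130 degrees
# Hypothetical scaling: a reduced ROM slows predicted movements.
movement_time_scale = 1.0 + (150.0 - min(rom, 150.0)) / 150.0
print(rom, movement_time_scale)
```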

5.4 Common Ethics Format
The ethics task force was created with the aim of carrying out three tasks:
1) The organization of a special session dealing with ethics at the first scheduled VERITAS workshop (November 2010), to which all the aforementioned projects were invited. This special session was held as planned; in it, a general overview of the ethical framework of the VUMS cluster and of each of its projects was presented.
2) The exchange of ethics forms, such as informed consent forms, manuals, etc. At the beginning of the ethics task force, the different ethics-related materials were exchanged between the projects. The aim was to define a common informed consent form for the four projects within their first three months. This informed consent form can be found in Appendix 3.
3) The development of an ethical guideline or manual describing the general ethical approach to be followed throughout the projects. This manual addresses all the factors that should be taken into account before starting research activities with humans. The aim is to describe how the consortium will maintain security, privacy and confidentiality norms and respect the common values of respect for autonomy, beneficence, non-maleficence and justice throughout the project. The manual has been used in the studies conducted with elderly persons in all the projects of the cluster. It also covers aspects related to the interaction of the elderly with the technology in possible future live operations (e.g. how can privacy be safeguarded in a multi-user scenario?) and the potential issues that may arise when dealing with virtual models in both the design and use phases (e.g. responsibilities for providing honest and accurate user data).

6. Conclusions and future work

In this paper we have presented the efforts of the VUMS cluster towards the development of an interoperable user model, able to describe both able-bodied people and people with various kinds of disabilities. The user modelling approach followed was based on existing virtual user models relevant for adaptation and simulation, as well as on the analysis of Human Factors and Human Activities. The VUMS cluster team has worked on the creation of term definitions and has gathered the attributes from the individual user models used in the participating projects. In addition to this list of attributes, the VUMS Exchange Format, a common format for the definition of user models, has been defined. In order to exchange user profiles between the VUMS cluster projects, converters that enable the transformation from a project-specific user model to a VUMS user model and vice versa have been developed. The VUMS cluster aims to lay the basis of a new user model standard and to continue its efforts even after the end of the participating projects. The idea is thus to establish an open repository of definitions of variables and/or user models on the Internet, which could be updated and changed by the community, towards the development of a living standard on the Internet.

References

[1] Anderson J. R. and Lebiere C. "The Atomic Components of Thought." Hillsdale, NJ, USA: Lawrence Erlbaum Associates, 1998.
[2] Anthropos ErgoMAX. 2004. Available online via http://www.ergomax.de/html/welcome.html (accessed January 2004).
[3] Apkarian, J., Naumann, S. and Cairns, B. (1989) A three-dimensional kinematic and dynamic model of the lower limb. J Biomech 22, 143-55.
[4] Balme, L., Demeure, A., Barralon, N., Coutaz, J. & Calvary, G. (2004). CAMELEON-RT: A Software Architecture Reference Model for Distributed, Migratable, and Plastic User Interfaces. In: Proceedings of SOC EUSAI 2004, pp. 291-302.
[5] Barnard P. "The Emotion Research Group Website, MRC Cognition and Brain Sciences Unit." Available at: http://www.mrc-cbu.cam.ac.uk/~philb, accessed on 1st July 2007.
[6] Biswas P. and Langdon P. (2012) Developing multimodal adaptation algorithm for mobility impaired users by evaluating their hand strength, International Journal of Human-Computer Interaction 28(9), Taylor & Francis, Print ISSN: 1044-7318.
[7] Biswas P., Langdon P. & Robinson P. (2012) Designing inclusive interfaces through user modelling and simulation, International Journal of Human-Computer Interaction, Taylor & Francis, Vol. 28, Issue 1.
[8] Boden M. A., Computer Models of Mind: Computational Approaches in Theoretical Psychology, Cambridge University Press, 1985.
[9] Byrne M. D. "ACT-R/PM and Menu Selection: Applying a Cognitive Architecture to HCI." International Journal of Human Computer Studies 55 (2001): 41-84.
[10] Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L. & Vanderdonckt, J. (2003). A Unifying Reference Framework for Multi-Target User Interfaces. Interacting with Computers 15, 3 (June 2003), pp. 289-308.
[11] Card S., Moran T. and Newell A. The Psychology of Human-Computer Interaction. Hillsdale, NJ, USA: Lawrence Erlbaum Associates, 1983.

[12] Carmagnola F., Cena F. and Gena C., User model interoperability: a survey, User Modeling and User-Adapted Interaction, Vol. 21, Number 3 (2011), 285-331.
[13] Cappelli, T.M. & Duffy, V.G. (2006). Motion Capture for Job Risk Classifications Incorporating Dynamic Aspects of Work. Digital Human Modeling for Design and Engineering Conference, Lyon, 4-6 July 2006. Warrendale: SAE International.
[14] Choi, J. Developing a 3-Dimensional Kinematic Model of the Hand for Ergonomic Analyses of Hand Posture, Hand Space Envelope, and Tendon Excursion. PhD thesis, The University of Michigan, 2008.
[15] Cognitive Architectures. Available at: http://en.wikipedia.org/wiki/Cognitive_architecture, accessed on 1st July 2007.
[16] Coluccini, M., Maini, E.S., Martelloni, C., Sgandurra, G., Cioni, G. Kinematic characterization of functional reach to grasp in normal and in motor disabled children, Gait & Posture, Volume 25, Issue 4, April 2007, Pages 493-501, ISSN 0966-6362, 10.1016/j.gaitpost.2006.12.015.
[17] Coutaz, J. (2010). User interface plasticity: model driven engineering to the limit! In Proceedings of the 2nd ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '10). New York: ACM, pp. 1-8.
[18] DeCarlo, D., Metaxas, D., and Stone, M. 1998. An anthropometric face model using variational techniques. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98. ACM, New York, NY, 67-74.
[19] DiLorenzo, P.C., Zordan, V.B., Sanders, B.L. (2008). Laughing out loud: Control for modeling anatomically inspired laughter using audio. ACM Transactions on Graphics 27, 5 (Dec.), 125:1-8.
[20] Duffy V. G. "Handbook of Digital Human Modeling: Research for Applied Ergonomics and Human Factors Engineering." FL, USA: CRC Press, 2008.
[21] Eng K., Lewis R. L., Tollinger I., Chu A., Howes A. and Vera A. "Generating Automated Predictions of Behavior Strategically Adapted to Specific Performance Objectives." ACM/SIGCHI Conference on Human Factors in Computing Systems 2006, 621-630.
[22] Eng J.J., Winter D.A., Kinetic analysis of the lower limbs during walking: what information can be gained from a three-dimensional model? Journal of Biomechanics, Volume 28, Number 6, June 1995, pp. 753-758.
[23] Feyen, R., Liu, Y., Chaffin, D., Jemmerson, G., Joseph, B. (2000). Computer-aided ergonomics: a case study of incorporating ergonomics analyses into workplace design. Appl. Ergonomics, 2000, 31, 291-300.
[24] Fitts P.M. "The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement." Journal of Experimental Psychology 47 (1954): 381-391.
[25] Fortin, C., Gilbert, R., Beuter, A., Laurent, F., Schiettekatte, J., Carrier, R., Dechamplain, B. (1990). SAFEWORK: a microcomputer-aided workstation design and analysis. New advances and future developments. In: Karkowski, W., Genaidy, A.M., Asfour, S.S. (Eds.), Computer-Aided Ergonomics. Taylor and Francis, London, pp. 157-180.
[26] Gajos, K.Z., Weld, D.S. & Wobbrock, J.O. (2010). Automatically generating personalized user interfaces with Supple. Artificial Intelligence 174, 12-13, pp. 910-950.
[27] Gajos K. Z., Wobbrock J. O. and Weld D. S. (2007) Automatically generating user interfaces adapted to users' motor and vision capabilities. ACM Symposium on User Interface Software and Technology, 231-240.
[28] Garner, B.A. and Pandy, M.G. (2003). Estimation of Musculotendon Properties in the Human Upper Limb. Annals of Biomedical Engineering 31: 207-220.
[29] Hampson P. J. and Moris P. E. "Understanding Cognition." Oxford, UK: Blackwell Publishers Ltd., 1996.
[30] Hick W.E. "On the rate of gain of information." Quarterly Journal of Experimental Psychology 4 (1952): 11-26.
[31] Hingtgen, B.A., McGuire, J.R., Wang, M., Harris, G.F. Design and validation of an upper extremity kinematic model for application in stroke rehabilitation, Engineering in Medicine and Biology Society, 2003. Proceedings of the 25th Annual International Conference of the IEEE, vol. 2, pp. 1682-1685, 17-21 Sept. 2003.
[32] Holzbaur, K.R.S., Murray, W.M. and Delp, S.L. A model of the upper extremity for simulating musculoskeletal surgery and analyzing neuromuscular control. Annals of Biomedical Engineering, vol. 33, pp. 829-840, 2005.

[33] Hornof A. J. and Kieras D. E. "Cognitive Modeling Reveals Menu Search Is Both Random and Systematic." ACM/SIGCHI Conference on Human Factors in Computing Systems 1997, 107-114.
[34] Howes A., Vera A., Lewis R.L. and McCurdy, M. "Cognitive Constraint Modeling: A Formal Approach to Reasoning About Behavior." Annual Meeting of the Cognitive Science Society, Lawrence Erlbaum Associates, 2004.
[35] ISO/TC 159 "Ergonomics" (2008). ISO 9241-171:2008 Ergonomics of human-system interaction -- Part 171: Guidance on software accessibility.
[36] Jameson, A. (2001). Systems That Adapt to Their Users: An Integrative Perspective. Saarbrücken: Saarland University.
[37] John B. E. and Kieras D. "The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast." ACM Transactions on Computer-Human Interaction 3 (1996): 320-351.
[38] Johnson-Laird P.A. "The Computer and the Mind." Cambridge, MA, USA: Harvard University Press, 1988.
[39] Kähler, K., Haber, J., Yamauchi, H., Seidel, H.P. (2002). Head shop: Generating animated head models with anatomical structure. In ACM SIGGRAPH/EG Symposium on Computer Animation, 55-64.
[40] Kane, S.K., Wobbrock, J.O. & Smith, I.E. (2008). Getting off the treadmill: evaluating walking user interfaces for mobile devices in public spaces. In Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI '08). New York: ACM, pp. 109-118.
[41] Keates S., Clarkson J. and Robinson P. "Investigating the Applicability of User Models for Motion Impaired Users." ACM/SIGACCESS Conference on Computers and Accessibility 2000, 129-136.
[42] Kieras D. and Meyer D. E. "An Overview of the EPIC Architecture for Cognition and Performance with Application to Human-Computer Interaction." Human-Computer Interaction 12 (1997): 391-438.
[43] Komura, T., Shinagawa, Y., Kunii, T.L. (2000). Creating and retargeting motion by the musculoskeletal human body model. The Visual Computer 16, 5, 254-270.
[44] Koo, T.K., Mak, A.F. Feasibility of using EMG driven neuromusculoskeletal model for prediction of dynamic movement of the elbow, Journal of Electromyography and Kinesiology, Volume 15, Issue 1, February 2005, Pages 12-26, ISSN 1050-6411, 10.1016/j.jelekin.2004.06.007.
[45] Koo, T.K., Mak, A.F., Hung, L.K. In vivo determination of subject-specific musculotendon parameters: applications to the prime elbow flexors in normal and hemiparetic subjects, Clinical Biomechanics, Volume 17, Issue 5, June 2002, Pages 390-399, ISSN 0268-0033, 10.1016/S0268-0033(02)00031-1.
[46] Laitila, L. (2005). Datormanikinprogram som verktyg vid arbetsplatsutformning – En kritisk studie av programanvändning. Thesis. Luleå Technical University, Luleå.
[47] Lallement Y. and Alexandre F. "Cognitive Aspects of Neurosymbolic Integration." In: Connectionist-Symbolic Integration, Ed. Sun R. and Alexandre F., London, UK: Lawrence Erlbaum Associates, 1997.
[48] Lamkull, D., Hanson, L., Ortengren, R. (2009). A comparative study of digital human modelling simulation results and their outcomes in reality: A case study within manual assembly of automobiles. International Journal of Industrial Ergonomics 39 (2009), 428-441.
[49] Lee, S.H., Terzopoulos, D. (2006). Heads up! Biomechanical modeling and neuromuscular control of the neck. ACM Transactions on Graphics 25, 3 (July), 1188-1198. Proc. ACM SIGGRAPH 06.
[50] Lind, S., Krassi, B., Johansson, B., Viitaniemi, J., Heilala, J., Stahre, J., Vatanen, S., Fasth, Å., Berlin, C. (2008). SIMTER: A Production Simulation Tool for Joint Assessment of Ergonomics, Level of Automation and Environmental Impacts. The 18th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM 2008), June 30 – July 2, 2008.
[51] Mankoff J., Fait H. and Juang R. Evaluating accessibility through simulating the experiences of users with vision or motor impairments. IBM Systems Journal 44.3 (2005): 505-518.


[52] Marshall, R., Case, K., Porter, J.M., Sims, R.E., Gyi, D.E. (2004). Using HADRIAN for Eliciting Virtual User Feedback in 'Design for All'. Journal of Engineering Manufacture; Proceedings of the Institution of Mechanical Engineers, Part B, 218(9), 1 September 2004, 1203-1210.
[53] McMillan W. W. "Computing for Users with Special Needs and Models of Computer-Human Interaction." ACM/SIGCHI Conference on Human Factors in Computing Systems 1992, 143-148.
[54] Meulen, P. van der, Seidl, A. (2007). RAMSIS – The Leading CAD Tool for Ergonomic Analysis of Vehicles. Digital Human Modeling, HCII 2007, LNCS 4561, pp. 1008-1017.
[55] Moran T.P. "Command Language Grammar: A Representation for the User Interface of Interactive Computer Systems." International Journal of Man-Machine Studies 15.1 (1981): 3-50.
[56] Newell A. "Unified Theories of Cognition." Cambridge, MA, USA: Harvard University Press, 1990.
[57] Nichols, J. & Myers, B. A. (2009). Creating a lightweight user interface description language: An overview and analysis of the personal universal controller project. ACM Trans. Comput.-Hum. Interact. 16, 4, Article 17 (November 2009), 37 pages.
[58] Nichols, J., Myers, B. A. & Rothrock, B. (2006). UNIFORM: automatically generating consistent remote control user interfaces. In Proceedings CHI '06. New York: ACM, pp. 611-620.
[59] Nichols, J., Rothrock, B., Chau, D. H. & Myers, B. A. (2006). Huddle: automatically generating interfaces for systems of multiple connected appliances. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology (UIST '06). New York: ACM, pp. 279-288.
[60] Oka N. "Hybrid cognitive model of conscious level processing and unconscious level processing." IEEE International Joint Conference on Neural Networks 1991, 485-490.
[61] Oppermann, R. (1994). Adaptively supported adaptability. International Journal of Human Computer Studies, 40(3), pp. 455-472.
[62] Ouerfelli, M., Kumar, V., Harwin, W.S. Kinematic modeling of head-neck movements. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 29, no. 6, pp. 604-615, Nov 1999, doi: 10.1109/3468.798064.
[63] Pai, Y.C. and Patton, J.L. Center of mass velocity-position predictions for balance control, Journal of Biomechanics, vol. 11, pp. 341-349, 1997.
[64] Pai, Y.C. and Patton, J.L. Erratum: Center of mass velocity-position predictions for balance control, Journal of Biomechanics, vol. 31, p. 199, 1998.
[65] Pai, Y.C., Rogers, M.W., Patton, J.L., Cain, T.D., and Hanke, T. Static versus dynamic predictions of protective stepping following waist-pull perturbations in young and older adults, Journal of Biomechanics, vol. 31, pp. 1111-8, 1998.
[66] Paterno, F., Santoro, C., Mäntyjärvi, J., Mori, G. & Sansone, S. (2008). Authoring pervasive multimodal user interfaces. Int. J. Web Eng. Technol. 4, 2 (May 2008), pp. 235-261.
[67] Paterno, F., Santoro, C. & Spano, L. D. (2009). MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact. 16, 4, Article 19 (November 2009), 30 pages.
[68] Patton, J.L., Pai, Y.C., and Lee, W.A. A Simple Model of the Feasible Limits to Postural Stability, presented at the IEEE Engineering in Medicine and Biology Society Meeting, Chicago, 1997.
[69] Patton, J.L., Lee, W.A., and Pai, Y.C. Relative stability improves with experience in a dynamic standing task, Experimental Brain Research, vol. 135, pp. 117-126, 2000.
[70] Peissner, M., Häbe, D., Janssen, D. & Sellner, T. (2012). MyUI: generating accessible user interfaces from multimodal design patterns. In Proceedings of the 4th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '12). New York: ACM, pp. 81-90.
[71] Pennestrì, E., Stefanelli, R., Valentini, P.P., Vita, L. Virtual musculo-skeletal model for the biomechanical analysis of the upper limb, Journal of Biomechanics, Volume 40, Issue 6, 2007, Pages 1350-1361, ISSN 0021-9290, 10.1016/j.jbiomech.2006.05.013.


[72] Phillips, C. B. & Badler, N. I. (1988). Jack: A toolkit for manipulating articulated figures. In: Proceedings of the 1st Annual ACM SIGGRAPH Symposium on User Interface Software (pp. 221-229). New York: ACM.
[73] Kirisci, P. T., Klein, P., Modzelewski, M., Lawo, M., Mohamad, Y., Fiddian, T., Bowden, C., Fennell, A., O'Connor, J.: Supporting Inclusive Design of User Interfaces with a Virtual User Model. HCI (6) 2011, pp. 69-78. The four-volume set LNCS 6765-6768.
[74] Porter, J., Case, K., Freer, M.T., Bonney, M.C. (1993). Automotive Ergonomics, Chapter: Computer-aided ergonomics design of automobiles. London: Taylor and Francis.
[75] Porter, J.M., Marshall, R., Freer, M. and Case, K. (2004). SAMMIE: a computer aided ergonomics design tool. In: N.J. Delleman, C.M. Haslegrave and D.B. Chaffin (eds.), Working Postures and Movements – Tools for Evaluation and Engineering. Boca Raton: CRC Press LLC, 454-462.
[76] Prince, F., Corriveau, H., Hebert, R., Winter, D.A. Gait in the elderly. Gait and Posture, Volume 5, Number 2, April 1997, pp. 128-135.
[77] Rao, S.S., Bontrager, E.L., Gronley, J.K., Newsam, C.J., Perry, J. Three-dimensional kinematics of wheelchair propulsion. IEEE Trans Rehabil Eng 1996;4:152-60.
[78] Ringbauer, B., Peissner, M. & Gemou, M. (2007). From "design for all" towards "design for one" – A modular user interface approach. In: C. Stephanidis (ed.): Universal Access in HCI, Part I, HCII 2007, LNCS 4554, Berlin: Springer-Verlag, pp. 517-526.
[79] Sapin, E., Goujon, H., de Almeida, F., Fodé, P. and Lavaste, F. (2008) Functional gait analysis of trans-femoral amputees using two different single-axis prosthetic knees with hydraulic swing-phase control: Kinematic and kinetic comparison of two prosthetic knees. Prosthetics and Orthotics International, 32(2), 201-218.
[80] Savidis, A. & Stephanidis, C. (2004). Unified user interface design: designing universally accessible interactions. Interacting with Computers 16(2): 243-270.
[81] Serna A., Pigot H. and Rialle V. (2007) Modeling the progression of Alzheimer's disease for cognitive assistance in smart homes. User Modeling and User-Adapted Interaction 17, 415-438.
[82] Shapiro, A., Faloutsos, P., Ng-Thow-Hing, V. (2005). Dynamic animation and control environment. In Proceedings of Graphics Interface 2005, pp. 61-70.
[83] SimTk. OpenSim (2008). URL https://simtk.org/home/opensim.
[84] Sottet, J.-S., Ganneau, V., Calvary, G., Coutaz, J., Demeure, A., Favre, J.-M. & Demumieux, R. (2007). Model-driven adaptation for plastic user interfaces. In C. Baranauskas, P. Palanque, J. Abascal & S. Junqueira Barbosa (eds.), Proceedings of the 11th IFIP TC 13 International Conference on Human-Computer Interaction (INTERACT '07), Berlin, Heidelberg: Springer-Verlag, pp. 397-410.
[85] Stephanidis C. and Constantinou P. "Designing Human Computer Interfaces for Quadriplegic People." ACM Transactions on Computer-Human Interaction 10.2 (2003): 87-118.
[86] Stephanidis C., Paramythis A., Sfyrakis M., Stergiou A., Maou N., Leventis A., Paparoulis G. and Karagiannidis C. "Adaptable and Adaptive User Interfaces for Disabled Users in the AVANTI Project." Intelligence in Services and Networks, LNCS-1430, Springer-Verlag 1998, 153-166.
[87] Tollinger I., Lewis R. L., McCurdy M., Tollinger P., Vera A., Howes A. and Pelton L. "Supporting Efficient Development of Cognitive Models at Multiple Skill Levels: Exploring Recent Advances in Constraint-Based Modeling." ACM/SIGCHI Conference on Human Factors in Computing Systems 2005, 411-420.
[88] Van Nierop, O.A., Van der Helm, A., Overbeeke, K.J., Djajadiningrat, T.J. (2008). A natural human hand model. The Visual Computer 24, 1 (Jan.), 31-44.
[89] VSR Research Group (2004). Technical report for project virtual soldier research. Tech. rep., Center for Computer-Aided Design, The University of Iowa.
[90] Weibelzahl, S. (2002). Evaluation of Adaptive Systems. Dissertation. Trier: University of Trier.


Appendix 1 – VUMS User Profile example

A sample user profile in the VUMS Exchange Format. Most variables in the sample carry the default placeholders -1, -1.0 or Undefined (denoting values not measured for this user); among the recorded values are an age of 60 and the measurements 120.0 and 135.0.

Appendix 2 – VUMS Exchange Format – UML Class diagrams

Figure 5 Tasks affected by the disabilities and general information about the user – UML class diagram

Figure 6 Anthropometric variables – UML class diagram


Figure 7 Visual variables – UML class diagram

Figure 8 Auditory variables – UML class diagram

Figure 9 Speech variables – UML class diagram


Figure 10 Cognitive variables – UML class diagram

Figure 11 Mobility variables – UML class diagram


Appendix 3 – Common Informed Consent

Title of the project:
Coordinator:
Local Principal Researcher:
Institution:
Financed by:
Project duration:
Participant's name:

The study described in this document is a part of the project called "Title of the project", financed by the European Commission under the 7th Framework Programme (Consortium Agreement: Number of Consortium Agreement). This consent sheet may contain words you do not understand. Please ask either the contact researcher or any professional in the study to explain any word or to give any further information. You may take a copy of this consent form to think about it or to talk to your family before taking a decision. At all times, we try to ensure compliance with the current legislation.

Introduction

You have been invited to participate in a research study. Before deciding whether you want to participate, we kindly request that you read this consent form carefully. Please ask any questions that may come to your mind in order to make sure you understand all the procedures of the study, including its risks and benefits.

Purpose Of The Study

The main aim of the project is ... (describe here the aim of the project). In the document entitled "Information Page" you will find more information about the purpose of the study.

Type of Research Intervention

This research will consist of focus groups, completion of questionnaires, etc. Your participation would consist of .....

Participant Selection

Explain why a person was selected to participate in the research.

Participants In The Study And Possible Participation In It

We kindly request your voluntary participation in this research study. This informed consent includes information about the study. We would like to ensure that you are fully informed about the purpose of our study and about what your participation in it implies. Please ask us to clarify any section of this information document as necessary. Please do not sign if you are not sure that you have understood all the aspects of the study and its objectives. In this part of the study we would like to know (complete with the concrete aim of the study for which the participation of the participant is required).

Voluntary Participation

Your participation in this research is entirely voluntary. It is your choice whether or not to participate, and you can withdraw at any moment without being penalized or losing benefits. The participants will be elderly people older than 60 years. The travel costs from your home to the lab will be covered by us.


Duration

The research takes place over ___ (number of) days or ___ (number of) months in total. During that time, we will contact you ____ times for ____ at ____ intervals, and each interview will last about ____ hour(s).

Risks Or Inconveniences

No risk or damage is foreseen during the test application.

Benefits

It is probable that you will not receive any personal benefit from your participation in this study. In any case, the data collected in this study might result in better knowledge of, and later better interventions for, elderly people.

Reimbursements

E.g.: You will not be provided any incentive to take part in the research. However, we will give you [provide a figure, if money is involved] for your time and travel expenses (if applicable).

Privacy And Confidentiality

We will record your answers in our notes, which will not hold any identification of you, nor will it be possible to identify you later on. In other words, when someone agrees to participate in the research, they receive a code number, and from that moment on all personal data are stored under that code, so that no one can know to whom the data belong. The information will be processed during the analysis of the data obtained and will appear in the project deliverables, but again only in such a way that it will not be possible to identify from whom we received the information, assuring at every moment compliance with (include the national laws that will be guaranteed). The results of this research may be published in scientific journals or presented in gerontological sessions, always guaranteeing complete anonymity. The authorization for the use of and access to the information for research purposes is totally voluntary. This authorization applies until the end of the study unless you cancel it before then; in that case we will stop using your data. All the data will be destroyed five years after the end of the project. If you decide to withdraw your consent later on, we ask you to contact the principal researcher of this study and let him or her know that you are withdrawing from the study.

Sharing results

Nothing that you tell us today will be shared with anybody outside the research team, and nothing will be attributed to you by name. The knowledge that we get from this research will be shared with you and your community before it is made widely available to the public. Each participant will receive a summary of the results. There will also be small meetings in the community, and these will be announced. Following the meetings, we will publish the results so that other interested people may learn from the research.

Right to Refuse or Withdraw

You do not have to take part in this research if you do not wish to do so, and choosing not to participate will not affect your rights in any way. You may stop participating in the study at any time you wish. We will give you an opportunity at the end of the study to review your remarks, and you can ask to modify or remove portions of them if you do not agree with our notes or if we did not understand you correctly. From the moment of your withdrawal, your data will not be processed in any further phases of the research project. However, it will not be possible to alter already published documents or completed project deliverables.


Who to Contact

The principal researcher can be contacted at the following address:

Name of the Contact Person
Organisation name
Street
City
Telephone

For further information about your rights as a research participant, or if you are not satisfied with the manner in which this study is being conducted, or if you have any questions, sustain any injury during the course of the research, or experience any adverse reaction to a study procedure, please contact the principal researcher.

Consent Certificate

Your participation in the study is possible only if you sign a stand-alone consent form that authorizes us to use your personal information and the information about your health status. If you do not wish to do so, please do not take part in this study.

• I confirm that I have read and understood the information sheet dated ......................... for the above study.

• I have had the opportunity to consider the information, ask questions and have had these answered satisfactorily. YES / NO

• I understand that my participation is voluntary and that I am free to withdraw at any time, without giving any reason, without my medical care or legal rights being affected. YES / NO

• I understand that relevant sections of any of my data collected during the study may be looked at by responsible individuals from [company/organisation name], where it is relevant to my taking part in this research. I give permission for these individuals to have access to my anonymised records. YES / NO

• I consent voluntarily to be a participant in this study. YES / NO

Name of Participant:        Date:        Signature:

Name of Person taking consent (if different from researcher):        Date:        Signature:

Researcher:        Date:        Signature:

When completed, one copy will be kept by the participant and one copy will be kept in the researcher site file.

If illiterate6

6 A literate witness must sign (if possible, this person should be selected by the participant and should have no connection to the research team). Participants who are illiterate should include their thumb print as well.

I have witnessed the accurate reading of the consent form to the potential participant, and the individual has had the opportunity to ask questions. I confirm that the individual has given consent freely.

Print name of witness ____________
Signature of witness _____________
Date ________________________ (day/month/year)

Thumb print of participant

Statement by the researcher/person taking consent

I have accurately read out the information sheet to the potential participant, and to the best of my ability made sure that the participant understands that the following will be done:
1.
2.
3.
4.
…

I confirm that the participant was given an opportunity to ask questions about the study, and all the questions asked by the participant have been answered correctly and to the best of my ability. I confirm that the individual has not been coerced into giving consent, and that the consent has been given freely and voluntarily. A copy of this informed consent form has been provided to the participant.

Print Name of Researcher/person taking the consent ________________________
Signature of Researcher/person taking the consent __________________________
Date ___________________________ (day/month/year)

