User Model to Design Adaptable Interfaces for Motor-Impaired Users

Pradipta Biswas

Samit Bhattacharya

Debasis Samanta

School of Information Technology, Indian Institute of Technology, Kharagpur, WB-721302, India, pbiswas@sit.iitkgp.ernet.in

Department of Computer Sc. & Engg., Indian Institute of Technology, Kharagpur, WB-721302, India, samit@cse.iitkgp.ernet.in

School of Information Technology, Indian Institute of Technology, Kharagpur, WB-721302, India, debasis.samanta@ieee.org

Abstract- User modeling is an important strategy for designing effective user interfaces. A user model for able-bodied users is not always suitable for people with physical or mental disabilities. There are some special characteristics of the users, and also of the interfaces, which have to be considered when building a user model for disabled users. This paper concerns modeling motor-impaired users in order to develop personalized interfaces for them. In the proposed user model, the main emphasis is on making the user model application independent, on clustering users according to their physical disability and cognitive level, and on adapting the model with respect to the individual user as well as the cluster of users.

I. INTRODUCTION

The user is the most important component in human computer interaction (HCI) design. Users pose a great challenge to the HCI designer because of the large variety of user profiles based on task, situation and the characteristics of the users themselves. So HCI centers on understanding the users, which includes both knowing the users and tracking their behavior. In general, understanding the user's mental model is a crucial part of any software design. The concept of the mental model has been further elaborated in HCI and gave birth to the user model. A user model is an explicit assumption about the knowledge and mentality of a user.

A lot of work has been done on user modeling for a variety of applications. In [1], a user model, namely the Generative User Model, for information retrieval is discussed. In this model, given a user's query, it relates the user's mental state and the retrieved objects using latent probabilistic variables. In [2], fuzzy logic is used to classify users of an intelligent tutoring system. The fuzzy groups are used to derive certain characteristics of the users and thereby new rules for each class of users. In [3], an artificial neural network is used for the same purpose as in [1]. The user's characteristics are stored as a user image, and neural networks are used as pattern associators or pattern classifiers to obtain the user's knowledge, detect the user's goal etc. The Lumiere project [4] of the ASC group of Microsoft Research pioneered another probabilistic model, viz. the influence diagram, in modeling users. The Lumiere project is the background theory of the Office Assistant shipped with the Microsoft Office application. The influence diagram shows the relationships among the user's acute needs, goals, background etc.

When the users are not able-bodied, the design of the user model becomes more difficult. Some implicit assumptions made in the case of able-bodied users have to be made explicit for disabled users. For example, the intellectual level of able-bodied users is assumed to be in accordance with their age, but this does not hold for users with intellectual disabilities. The AVANTI project [5] aims to address the interaction requirements of disabled individuals for a web-based multimedia application. The distinctive characteristic of the user interface in AVANTI is its capability to dynamically tailor itself to the abilities, skills, requirements and preferences of the users, to the different contexts of use, as well as to the changing characteristics of the users while interacting with the system. The categories of disabled users supported in the current version of the system are people with light or severe motor disabilities, and blind people. The user model is based on static and dynamic characteristics of the users. The user interface of the AVANTI project is first initialized according to some static characteristics of the user. After getting started, the interface keeps changing its behavior depending on some dynamic characteristics of the user. A decision mechanism triggers certain actions to modify the interface depending on the dynamic behavior of the user. Some other software is also available commercially, such as EZ-Keys [6] and Clicker [7].

The user models used for query prediction or information retrieval [1] [3] are very much application dependent. Only a few works have been reported on modeling disabled users. The AVANTI project models disabled users, but its application is the development of a web browser. Further, there is no significant work reported on clustering disabled users based on cognitive level. In [8], Card, Moran and Newell's Model Human Processor (MHP) is used as a user model. Different parameters of this model, like perceptual response time and cognitive response time, are measured using special techniques for motor-impaired users, and the metrics are compared with those of able-bodied users. The largest difference is observed in the motor function time. The MHP model is found to give good insight into the interaction of motor-impaired users with computers. However, this model is very simple, and no discussion is provided about how the model can be used for making inferences useful for an actual interface design. In [9], an informative discussion is presented on a data-logging tool (BASE) for disabled users, and the technique for analyzing the logged data is also discussed. However, the discussion is limited to usage logging only, and there is no discussion of a user model.

In this paper, a user model is proposed for motor-impaired users. The model does not center on a particular application. It builds on user clustering and usage logging for deriving adaptive actions during interactions. The model is dynamic in nature and gets more personalized with increased usage. A case study of the application of the proposed model in the design of an interface for augmentative and alternative communication (AAC) is also presented.

The rest of the paper is organized as follows. In Sec. 2, a brief discussion of user models and adaptable interfaces is presented. The proposed user model is discussed in Sec. 3. Based on the user model, an architectural view of an interface design for motor-impaired users is explained in Sec. 4. Section 5 is the concluding section.

II. PRELIMINARIES

A. User Model

Most users are satisfied with the technical functionalities offered by software but not with its interface. They want a friendlier interface that lets them make full use of those functionalities. At the same time, they want a comprehensible and predictable user interface which masks the underlying computational complexity. However, achieving all of this is not an easy task. The typical aim of an interface designer is to empower users to use a system efficiently and effectively. The design should consider the overall social and technical environment by examining human psychological and behavioral needs. The designer has to deal with the diversity in user behavior due to different tasks, environments and situations. Considering all these issues, user modeling becomes an important part of software design.

A user model is a representation of the knowledge and preferences which the system 'believes' that the user possesses. However, information stored about the user or the usage pattern (like an event log or user log) is not a user model unless it can be used to derive some explicit assumption about the user. The user profile of the user model is developed from the user characteristics. User characteristics can be categorized as application dependent and application independent. The application dependent part of the user characteristics includes prior experience with computers, knowledge of systems and applications, goals, intentions, expectations etc. The application independent part considers neuro-motor skills, preferences, capabilities, cognitive level, learning abilities etc. Some of the user characteristics are taken explicitly from users, while others are learned in the course of interactions.


While representing user characteristics in a user model, the model can be broadly classified into three categories: the student model, the profile model and the cognitive model [10]. The student model holds data about the domain knowledge of the user. The profile model holds data about the background, interests and general knowledge of the user. The cognitive model holds data related to user cognition, personality etc.

B. Adaptable Interface

An adaptable interface is an interface which can follow the user and get personalized in order to serve him or her better. It actually shares with the user the labor of accomplishing a task. The following are the main goals of any adaptable interface.

* Making the interaction efficient
* Increasing the speed of use
* Solving some complex tasks which the user cannot solve alone due to some disability

An adaptable interface should have the capability of following the user's actions and mental state and of offering assistance by predicting future actions. For designing an adaptive interface, the type of assistance the user wants has to be identified first. The other concerns include learning when (and if) to interrupt the user and discovering how the user wants to be assisted in different contexts. The assistance may be of three kinds: warnings, suggestions and actions on behalf of the user.

For an adaptable interface, the major set of actions must include the following.
1. Addition / deletion of task details
2. Addition / deletion of help or feedback windows
3. Changing the formatting or organization of information etc.

The general architecture of an adaptable interface is shown in Figure 1.

Fig. 1 Architecture of an adaptable interface

* The sensor captures dynamic user characteristics like user idle time, repetitions in the interaction pattern, mental state etc. Part of the sensor can be incorporated within the software, and hardware such as a web camera can be used to recognize facial expressions or for eye-gaze tracking.
* The domain expert has domain knowledge for a particular application and initializes the interface using static characteristics of the user like age, gender, language, expertise, experience etc.
* The knowledge base stores the knowledge either in the form of formal languages (propositional logic, first order logic) or in the form of networks (like neural networks).
* The knowledge interpreter represents the inference mechanism for deciding adaptation actions.
* The action pool stores a set of actions, which depends on the particular application. Taking the output of the knowledge interpreter, the actuator selects an action from the action pool and executes it.
* The learning module does the job of adaptation. It stores the knowledge newly generated by the execution of an action in the knowledge base, so the system gradually becomes more accustomed to the user.
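To make the division of labor among these components concrete, the following is a minimal sketch in Python of how the loop in Figure 1 might be wired together. The class names, the idle-time threshold and the adaptation policy are all illustrative assumptions; the paper does not prescribe any particular API.

```python
# A minimal sketch of the adaptable-interface loop of Figure 1.
# All names and thresholds here are illustrative assumptions.

class Sensor:
    """Captures dynamic characteristics (idle time, repetitions, ...)."""
    def read(self, raw_events):
        idle_time = max((e["gap"] for e in raw_events), default=0.0)
        return {"idle_time": idle_time, "event_count": len(raw_events)}

class DomainExpert:
    """Initializes the interface from static characteristics."""
    def initial_interface(self, static_profile):
        return {"font_size": 18 if static_profile.get("age", 30) > 40 else 12,
                "language": static_profile.get("language", "English")}

class KnowledgeInterpreter:
    """Infers an adaptation action from observed characteristics."""
    def decide(self, knowledge_base, dynamic):
        if dynamic["idle_time"] > knowledge_base.get("idle_threshold", 10.0):
            return "provide_suggestion"
        return None

class Actuator:
    """Executes an action chosen from the action pool."""
    def execute(self, action, interface):
        if action == "provide_suggestion":
            interface["suggestion_visible"] = True
        return interface

class LearningModule:
    """Feeds the outcome of an action back into the knowledge base."""
    def update(self, knowledge_base, action, accepted):
        if action and not accepted:  # raise the threshold if the user ignored it
            knowledge_base["idle_threshold"] = knowledge_base.get("idle_threshold", 10.0) * 1.2
        return knowledge_base

if __name__ == "__main__":
    kb = {"idle_threshold": 10.0}
    interface = DomainExpert().initial_interface({"age": 55, "language": "English"})
    dynamic = Sensor().read([{"gap": 14.0}, {"gap": 2.0}])
    action = KnowledgeInterpreter().decide(kb, dynamic)
    interface = Actuator().execute(action, interface)
    kb = LearningModule().update(kb, action, accepted=True)
    print(interface, kb)
```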


Adaptability can be offered before the start of the interaction, during an interaction session and between sessions of interaction. Adaptability before the first session of interaction mainly depends on static user characteristics like the user's familiarity with the interface, general background etc. The run-time adaptation (during a session) is governed by the usage pattern, i.e. error rate, task history, user idle time etc. [5]. Between sessions, the user profile is updated based on the experience gained in the latest interaction, to provide better adaptation at the next interaction.

III. PROPOSED USER MODEL

Table 1. User characteristics

High level characteristic    | Low level characteristic       | Values
Experience                   | With the Software              | 1. Novice  2. Intermittent  3. Expert
                             | With Similar Type of Software  | 1. Novice  2. Intermittent  3. Expert
Age                          | Actual Age                     | 1. Below 5  2. 6-15  3. 16-25  4. 25-40  5. 40-65  6. Above 65
Oculomotor Characteristics   | Vision                         | 1. Superior  2. Medium  3. Inferior
Sex                          | Sex                            | 1. Male  2. Female
Language Level               | Language Medium                | 1. Only Picture  2. One Word  3. Two Words  4. Three Words  5. Phrases  6. Sentence  7. Normal
                             | Interaction Language           | 1. English  2. French  3. Hindi etc.
Education Level              | Education Level                | 1. Below Primary  2. Primary  3. Secondary  4. Higher Secondary  5. Above Higher Secondary
Personality                  | Motivation                     | 1. High  2. Normal  3. Low

The proposed user model is discussed in this section. The exact requirements of an application from the user model are identified first. Following the requirements, the steps for designing the user model are discussed.

A. Requirements from the Model

Users want a comprehensible and predictable interface which masks the underlying computations. This can be provided by incorporating consistency in the interface and also by predicting the outcomes of the user's actions. For a disabled user, whose actions are limited, personalization of the interface is an additional requirement to provide a locus of control. For better human computer interaction, a user model should correctly assess the type of the user before the interaction starts and should personalize to the user during the interaction. The high level requirements from a user model for disabled users include the following.
* Proper categorization of the users
* Providing an appropriate interface to the user
* Tracking the user's interaction with the interface
* Updating the user's profile

B. Design of the User Model

Following the requirements for a user model stated above, the proposed user model is constructed through the following steps:

1) Selecting user characteristics: In an organization dealing with disabled persons, when a new patient arrives, s/he is assessed by a group of experts including a language therapist, a social worker, a physician etc. They collect a number of user characteristics before taking a treatment decision. The data thus collected are important ingredients for designing a computer interface for these users. Some of the user characteristics selected for this purpose are listed in Table 1.


2) Collecting data: A consolidated user assessment sheet is then prepared for collecting data about the users. The data collected constitute the initial user profile.
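For illustration only, the initial profile gathered from such an assessment sheet could be stored as a simple record whose fields mirror Table 1. The field names and example values below are assumptions; the paper does not define a storage format.

```python
# Hypothetical encoding of an initial user profile based on Table 1.
from dataclasses import dataclass, asdict

@dataclass
class UserProfile:
    experience_with_software: str      # Novice / Intermittent / Expert
    experience_with_similar: str       # Novice / Intermittent / Expert
    actual_age: str                    # e.g. "16-25"
    vision: str                        # Superior / Medium / Inferior
    sex: str                           # Male / Female
    language_medium: str               # Only Picture ... Normal
    interaction_language: str          # English / French / Hindi ...
    education_level: str               # Below Primary ... Above Higher Secondary
    motivation: str                    # High / Normal / Low

profile = UserProfile(
    experience_with_software="Novice",
    experience_with_similar="Intermittent",
    actual_age="16-25",
    vision="Medium",
    sex="Female",
    language_medium="Two Words",
    interaction_language="English",
    education_level="Primary",
    motivation="Normal",
)
print(asdict(profile))
```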

3) Reducing attributes: The attribute list is quite large, and some attributes may be redundant. To avoid redundancy among the attributes, a Rough Set Attribute Reduction algorithm can be used. This also increases the efficiency of the clustering algorithm.

4) Clustering users: Most motor-impaired users are undergoing treatment, so some of their characteristics may not remain static. Moreover, many users partially belong to more than one cluster. Therefore, a fuzzy clustering technique is chosen rather than a crisp clustering technique. We propose to use the fuzzy c-means clustering algorithm with c = 6. Classification entropy is used as the metric for cluster validation, as sketched below.
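The following is a minimal NumPy sketch of fuzzy c-means with classification entropy as the validity metric. It assumes the reduced attributes have already been encoded numerically and uses the common fuzzifier m = 2; neither choice is specified in the paper.

```python
import numpy as np

def fuzzy_c_means(X, c=6, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means; X is (n_samples, n_features), already numeric."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-10
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=0, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

def classification_entropy(U):
    """Lower values indicate crisper (better separated) clusters."""
    n = U.shape[1]
    return -np.sum(U * np.log(U + 1e-12)) / n

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc=k, scale=0.3, size=(20, 4)) for k in range(6)])
    centers, U = fuzzy_c_means(X, c=6)
    print("classification entropy:", round(classification_entropy(U), 3))
```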

5) Finding relationships among the attributes: It has been found that not all the attributes are independent of one another. For example, mental age depends on education level and language level, and the scanning properties depend on the oculomotor characteristics of a user. Some of these dependencies among attributes are shown in Figure 2. The relationships among the attributes are derived from the cluster centers in the form of (P1, v1) -> (P2, v2) and (P2, v2) -> (P1, v1), where P1, P2 are attribute names and v1, v2 are their corresponding values. These relationships can be used to predict some attribute values when appropriate data are not available and to cross-check collected attribute values.
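One possible (assumed) way to extract such (attribute, value) -> (attribute, value) relationships from the cluster centers, and to use them to fill in a missing attribute value, is sketched below. Treating every pair of values in a cluster center as a rule is only one interpretation of the scheme described above.

```python
# Hypothetical derivation of (P1, v1) -> (P2, v2) rules from cluster centers.
# Each center is treated as a prototype profile of categorical values.

def derive_rules(cluster_centers):
    """cluster_centers: list of dicts mapping attribute -> dominant value."""
    rules = []
    for center in cluster_centers:
        items = list(center.items())
        for (p1, v1) in items:
            for (p2, v2) in items:
                if p1 != p2:
                    rules.append(((p1, v1), (p2, v2)))
    return rules

def predict_missing(profile, rules):
    """Fill attributes that are None using any rule whose premise matches."""
    completed = dict(profile)
    for (p1, v1), (p2, v2) in rules:
        if completed.get(p1) == v1 and completed.get(p2) is None:
            completed[p2] = v2
    return completed

centers = [
    {"education_level": "Primary", "language_level": "Two Words", "mental_age": "6-15"},
    {"education_level": "Secondary", "language_level": "Sentence", "mental_age": "16-25"},
]
rules = derive_rules(centers)
print(predict_missing({"education_level": "Primary",
                       "language_level": None,
                       "mental_age": None}, rules))
```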

Fig. 2 Relationship among the attributes
Fig. 3 Interface components
Fig. 4 Interface components and user characteristics

6) Deriving action rules: The attribute values thus derived are used to support a user in two ways: selecting the appropriate interface and providing run-time adaptation during the interaction. To select an appropriate interface, the interface components have to be identified first. These components then have to be related to the attributes. For a simple interface (like the interface of Clicker [7]), the interface components and their properties can be identified as shown in Figure 3. From Figure 3, some relationships follow directly, such as the relationship between the scanning preference of the user and the scanning properties of the interface components. The menu names and button label texts are selected from a vocabulary, which depends upon the language level and education level of the user. Button pictures depend on the user's mental age. If there is a provision for text-to-speech conversion, the actual age and sex determine the pitch and tone of the voice. In the same way, other attributes can be related to interface components; an example of the dependencies of different interface components on user characteristics is shown in Figure 4. Finally, each cluster of users is associated with a particular instance of the interface.
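A sketch of how cluster-level attribute values might be translated into an interface instance is given below. The specific mappings, thresholds and vocabulary names are illustrative assumptions rather than values taken from the paper.

```python
# Hypothetical mapping from (cluster-level) user attributes to an interface instance.

def select_interface(attrs):
    """attrs: dict of attribute values for a user or a cluster center."""
    interface = {}
    # Scanning properties follow the oculomotor characteristics.
    interface["scan_speed"] = {"Superior": "fast", "Medium": "normal",
                               "Inferior": "slow"}[attrs["vision"]]
    # Menu names and button labels are drawn from a vocabulary matched to
    # the language level and education level.
    if attrs["language_level"] in ("Only Picture", "One Word"):
        interface["vocabulary"] = "picture_based"
    elif attrs["education_level"] in ("Below Primary", "Primary"):
        interface["vocabulary"] = "basic_words"
    else:
        interface["vocabulary"] = "full_sentences"
    # Button pictures depend on mental age; voice pitch on actual age and sex.
    interface["button_pictures"] = attrs["mental_age"] in ("Below 5", "6-15")
    interface["voice"] = {"Male": "low_pitch", "Female": "high_pitch"}[attrs["sex"]]
    return interface

print(select_interface({"vision": "Medium", "language_level": "Two Words",
                        "education_level": "Primary", "mental_age": "6-15",
                        "sex": "Female"}))
```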

An influence diagram for incorporating the run-time adaptation is shown in Fig. 5. The actual adaptation action will depend on the user's personality, mental age, experience and the task history. The actual description of the components of the action pool is very much application dependent. The following are some example actions for run-time adaptation:
* Predicting words or sentences
* Changing the scan speed
* Changing the text font
* Providing suggestions
* Run-time addition of shortcuts for expert users

After establishing the relationships among interface components, proposed actions and user characteristics, it is easy to develop the action rules for personalization. The following rules illustrate this:

If Motivation = 'Low' and Experience_with_the_Software = 'Low' and Idle_Time > 'Threshold' Then Action = 'Provide Suggestion'

If Motivation = 'High' and Error_Rate > 'Threshold' Then Action = 'Change Button Layout'
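Rules of this form could be encoded and evaluated as in the sketch below; the rule representation, the threshold values and the attribute names are assumptions made for illustration.

```python
# Hypothetical encoding of the run-time action rules quoted above.

RULES = [
    {"when": {"Motivation": "Low", "Experience_with_the_Software": "Low"},
     "dynamic": lambda d: d["Idle_Time"] > 10.0,          # assumed threshold
     "action": "Provide Suggestion"},
    {"when": {"Motivation": "High"},
     "dynamic": lambda d: d["Error_Rate"] > 0.25,          # assumed threshold
     "action": "Change Button Layout"},
]

def choose_action(profile, dynamic):
    """Return the first action whose static and dynamic conditions both hold."""
    for rule in RULES:
        static_ok = all(profile.get(k) == v for k, v in rule["when"].items())
        if static_ok and rule["dynamic"](dynamic):
            return rule["action"]
    return None

profile = {"Motivation": "Low", "Experience_with_the_Software": "Low"}
dynamic = {"Idle_Time": 14.2, "Error_Rate": 0.1}
print(choose_action(profile, dynamic))   # -> 'Provide Suggestion'
```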



Fig. 5 Influence diagram for run time adaptation

7) Usage logging: Usage logging is used to store all the events during an interaction. The basic structure of a log file is more or less fixed across applications. An optional header defines the type of information that will be provided in the log file. The header (if present) is followed by an arbitrary number of log file entries, each of which corresponds to a distinct event in the interaction. Each entry is subdivided into different fields (typically separated by spaces) that provide details on different aspects of the event. Each event has to carry a unique event identification number, and there should be a mandatory field for the time of occurrence of the event. The time field is used to calculate the user idle time, response time etc. The definition of an event depends on the application, the interface, the type of information being collected and many other factors. An event may be defined as a single key press or mouse (switch) click event, or it may be a high-level event such as a change in the output text of an augmentative and alternative communication application. The user model proposed in this paper does not depend on a particular log file format. It only requires that the log file be expressive enough
* to recognize the intention of user actions for updating the user profile, and
* to identify similar patterns of user actions that can be helpful in predicting user actions.
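As a sketch only (the model mandates no particular format), the following shows one plausible space-separated entry layout with an event identification number and a timestamp, and how the user idle time could be derived from consecutive timestamps.

```python
# Hypothetical space-separated log format:
# <event_id> <timestamp_seconds> <event_type> <detail...>
SAMPLE_LOG = """\
1 0.00 switch_click button=yes
2 2.40 switch_click button=no
3 14.90 text_changed output=hello
4 16.10 switch_click button=more
"""

def parse_log(text):
    entries = []
    for line in text.strip().splitlines():
        event_id, timestamp, event_type, *detail = line.split()
        entries.append({"id": int(event_id), "time": float(timestamp),
                        "type": event_type, "detail": detail})
    return entries

def idle_times(entries):
    """Gaps between consecutive events; long gaps indicate user idle time."""
    times = [e["time"] for e in entries]
    return [round(b - a, 2) for a, b in zip(times, times[1:])]

entries = parse_log(SAMPLE_LOG)
print(idle_times(entries))        # -> [2.4, 12.5, 1.2]
```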

8) Modifying knowledge base: Two ways have been proposed to modify the knowledge base:
* taking explicit input from the users or their instructors, and
* analyzing the log file after each interaction.

User characteristics like actual age, education level and language level are periodically updated by taking explicit user input. The log file entries reveal different dynamic user characteristics, like the user idle time and error rate in a task situation, the user's acceptance of run-time adaptations etc. Based on these dynamic characteristics, the preferred run-time actions (like providing a suggestion, changing the font size etc.) for a particular user as well as for a cluster of users are identified. The user model memorizes these preferred actions for use in the next interaction.
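A minimal sketch of the second route, i.e. analyzing the log after a session to reinforce the run-time actions the user actually accepted, both for the individual user and for the user's cluster, is given below. The 'accepted' field and the counting scheme are assumptions.

```python
# Hypothetical post-session update of preferred run-time actions.
from collections import Counter

def update_preferred_actions(knowledge_base, session_log, user_id, cluster_id):
    """session_log: list of dicts like {'action': 'Provide Suggestion', 'accepted': True}."""
    accepted = Counter(e["action"] for e in session_log if e.get("accepted"))
    for scope in (("user", user_id), ("cluster", cluster_id)):
        prefs = knowledge_base.setdefault(scope, Counter())
        prefs.update(accepted)
    return knowledge_base

kb = {}
log = [{"action": "Provide Suggestion", "accepted": True},
       {"action": "Change Font Size", "accepted": False},
       {"action": "Provide Suggestion", "accepted": True}]
kb = update_preferred_actions(kb, log, user_id="U12", cluster_id=3)
# The most common accepted action per user/cluster is offered first next session.
print(kb[("user", "U12")].most_common(1))   # -> [('Provide Suggestion', 2)]
```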


C. Salient Features of the Proposed Model

The proposed user model has some unique features, which are listed below.
* The model is application independent. The application dependent parts (like the action pool for run-time adaptation and the event definition in usage logging) are clearly identified, and no assumption is made about them, keeping the model flexible for any application.
* The model is not static; it is continually updated as described in the previous section, so better personalization is provided with more usage.
* Inconsistency or lack of data is not a problem for initial user profile creation, since the relationships among the user attributes can be used for predicting unknown or inconsistent attribute values.
* Users are clustered according to their characteristics, and the interface component characteristics are stored for each cluster rather than for every user. The number of clusters can be controlled, so the proposed model scales easily to a large number of users.

IV. CASE STUDY: AN APPLICATION OF THE USER MODEL

The implementation of the proposed user model is currently in progress. However, a partial implementation of the user model has been used in AAC software for motor-impaired people. The way the whole interaction has been modeled is shown in Figure 6. The interaction starts with configuring the interface. The configuration can be done manually, by adjusting the button size, font size etc., or it can be done automatically. After getting an interface, the user can fine-tune it by adjusting its components. Then the user starts using the interface. The interface agent captures each user action and logs it. This log is used to identify the actual events, and an event identification number is attached to each event. Based on the pattern of events, the task history of usage is determined, and prediction is provided in accordance with the particular user's model and task history. The predicted action, which may be a warning, a suggestion or an action on the user's behalf, actually constitutes the run-time adaptation. When the user exits the application, the fine-tuned form of the interface and the user logs are stored for modification of the existing knowledge base. A screenshot of the manual configuration window is shown in Figure 7.

Fig. 6 Flow of interaction in the adaptable interface

A user can manually attach a picture, text and a message to each button of the interface. The interface continually logs each user action with a time stamp. At the end of the current interaction, the particular user profile is updated with the newly gathered data. For the same user, the next session begins with this updated profile.

Fig. 7 Manual configuration screen

Instead of manual configuration, the interface can also be configured automatically. For automatic configuration, the system provides a facility to add a new user or edit existing users' details. As mentioned earlier, the interface can be modified according to the user's preferences at any time after the initial configuration has taken place. A sample screenshot of the interface for a particular context is shown in Figure 8.

Fig. 8 Screenshot of the interface

V. CONCLUSION

The present paper discusses a user model for motor-impaired people. The main emphasis is on making the model application independent, on clustering users and on adapting the interface with reference to the individual user as well as the cluster of users. The clusters obtained in the user model can give valuable information for categorizing a newcomer and for decision-making about a treatment policy for a particular class of users. The aim of the user model is to develop personalized applications for motor-impaired persons. The implementation of the model is in progress. The partial implementation has been used in developing an adaptable interface for augmentative and alternative communication for motor-impaired people.

ACKNOWLEDGEMENT

The authors are grateful to the Indian Institute of Cerebral Palsy, Kolkata, India for providing information and case studies of several students of the institute.

REFERENCES


[1] Y. Motomura, K. Yoshida, K. Fujimoto, "Generative User Models for Adaptive Information Retrieval," Proc. IEEE Conf. on Systems, 2000.
[2] A. F. Norcio, "Adaptive Interfaces: Modelling Tasks and Users," IEEE Trans. Systems, Man, and Cybernetics, 19(2), pp. 399-408, 1989.
[3] A. F. Norcio, Q. Chen, "Modeling Users with Neural Architecture," Proc. Intl. Joint Conf. on Neural Networks, pp. 547-552, 1992.
[4] E. Horvitz et al., "The Lumiere Project: Bayesian User Modeling for Inferring the Goals and Needs of Software Users," Microsoft Research, available at: http://research.microsoft.com/adapt//horvitz/lumiere.htm
[5] C. Stephanidis et al., "Adaptable and Adaptive User Interfaces for Disabled Users in the AVANTI Project," Intelligence in Services and Networks, LNCS 1430, pp. 153-166, Springer-Verlag, 1998.
[6] EZ-Keys, available at: http://www.wordsplus.com/website/products/soft/ezkeys.htm
[7] Clicker, available at: http://www.cricksoft.com/us/products/clicker/default.asp
[8] S. Keates, J. Clarkson, P. Robinson, "Investigating the Applicability of User Models for Motion Impaired Users," Proc. ASSETS 2000, November 13-15, 2000.
[9] P. O'Neill, C. Roast, M. Hawley, "Evaluation of Scanning User Interfaces Using Real Time Data Usage Log," Proc. ASSETS 2000, November 13-15, 2000.
[10] D. Benyon, D. Murray, "Applying User Modeling to Human Computer Interaction Design," Artificial Intelligence Review, 7(3-4), pp. 199-225, August 1993.