Virtual Reality (2006) 9: 108–117 DOI 10.1007/s10055-005-0011-5

ORIGINAL ARTICLE

Rami Saarinen · Janne Järvi · Roope Raisamo · Eva Tuominen · Marjatta Kangassalo · Kari Peltola · Jouni Salo

Supporting visually impaired children with software agents in a multimodal learning environment

Received: 18 July 2005 / Accepted: 7 October 2005 / Published online: 11 January 2006
© Springer-Verlag London Limited 2006

R. Saarinen · J. Järvi (&) · R. Raisamo · J. Salo
Tampere Unit for Computer-Human Interaction (TAUCHI), Department of Computer Sciences, University of Tampere, 33014 Tampere, Finland
E-mail: [email protected]
Tel.: +358-3-35518554

E. Tuominen · M. Kangassalo · K. Peltola
Department of Teacher Education, Early Childhood Education, University of Tampere, 33014 Tampere, Finland

Abstract Visually impaired children are at a great disadvantage in modern society, since their ability to use modern computer technology is limited by inappropriate user interfaces. The aim of the work presented in this paper was to develop a multimodal software architecture and applications to support the learning of visually impaired children. The software architecture is based on software agents and has specific support for visual, auditory and haptic interaction. It has been used successfully with different groups of 7–8 year-old and 12 year-old visually impaired children. In this paper we discuss the enabling software technology and the interaction techniques aimed at realizing our goal, as well as our experiences in the actual use of the system.

Keywords Multimodal interaction · Software agent architecture · Visually impaired children · Learning environments · Haptics · Auditory feedback

1 Introduction

In modern society, computers are used to teach many subjects in schools. This has created a new problem for visually impaired children, since they often cannot take advantage of teaching materials created to be used with a graphical user interface. Even if they had up-to-date hardware at home, it is of no real use without appropriate software that supports non-visual use. There has been some development in teaching materials for the blind and visually impaired, but these are

not widely available or have limited use. An exception is audio books, which have become popular, but they lack the interactive quality of teaching programs.

Children are a special group of computer users who require their own software and user interfaces that are consistent with their developmental level. It is challenging to design user interfaces for young children, but this challenge is much greater when the children are blind or visually impaired. Since it is necessary to support their learning and use of computers with other modalities, the complexity of the user interface design and supporting architecture is greater than in interfaces for typical computer users. In studies by Patomäki et al. [1], games and learning environments were built for visually impaired children 3.5–7.5 years of age. It is evident that young children's use of the PHANTOM haptic device [2] (see Fig. 1) is greatly affected by their motor abilities and developmental level. When planning the present study we decided to direct our efforts at pre-school and elementary school children, who should have more developed fine-motor skills and who are more capable of expressing themselves.

The goal of our research was to produce a proactive and multimodal agent-based learning environment that would support visually impaired children's learning and cognitive development. The pedagogical approach of this system is based on exploratory learning. This means that a child can explore the phenomena independently, guided by his/her own interests and questions. No direct drills or tasks were included in the system. A single application in the system is called a micro world.

We used a Reachin Display System [3] with a SensAble PHANTOM Desktop haptic device [2] and stereo audio through speakers (Fig. 1). There was also a 2D projected view of the virtual environment in case the child had residual sight and could make use of it. The buttons of the Magellan SpaceMouse [4] were used for input.


Fig. 1 A child using the PHANTOM device

The term software agent is used in the literature for a variety of purposes. In this paper it is used at two levels of abstraction. At the lower level, agent-based models describe a system as a set of interconnected agents. An agent in this sense is an autonomous software component that has a state and that communicates with other agents by using messages. At the higher level, our approach is to use software agents to support learning and navigation. These agents can be personified, having individual voices, and they interact directly with the user.

The phenomena chosen for computer simulation were our solar system, the interrelations between the Earth and the Sun, the Earth, the atmosphere, and the interior layers of the Earth. The preliminary results of the user studies with two groups of visually impaired children support our design solutions and the usability of the interaction techniques developed in the system.

2 Related work

Exploratory learning and the chosen phenomena were previously used in the PICCO project [5]. The software had 2D graphics and the focus group was 5–8 year-old children with normal eyesight. The experiences gained from the PICCO project were used as a starting point for the project presented in this paper.

The pictorial computer simulation PICCO concentrates on the variations of the sunlight and heat of the Sun as experienced on the Earth, related to the positions of the Earth and the Sun in space. In the simulation it is possible to explore the variations of the sunlight and heat of the Sun and their effects on the Earth in a natural environment. It is also possible to examine the origin of these phenomena on the basis of the interconnections and positions of the Earth and the Sun in space. On the Earth level the simulation concentrates on phenomena that are close to the everyday experiences of children, such as day and night, the seasons, and changes in the life of plants and birds. The simulation program has been implemented in such a way that the knowledge structure

and theory of the phenomenon are based on events appearing together with the phenomenon in question, and these events are illustrated. The simulation tool has been developed for children's spontaneous exploratory activity, with the goal of supporting children's conceptual learning while interacting with the environment (see [6, 7]).

In recent years there has been some research concerning the use of the PHANTOM device in developing software for visually impaired persons. For instance, Jansson and Billberger [8] reported that blind persons can identify 3D objects faster with their hands than with a PHANTOM device. They argued that with practice the performance can improve somewhat. According to Magnusson et al. [9], blind users can recognize quite complex objects, and they are also able to navigate in virtual environments as long as the environment is realistic. Sjöström [10, 11] has studied non-visual haptic interaction using the PHANTOM device. In his informal experiments with visually impaired users he came up with a list of design guidelines for one-point haptics, which include guidelines for navigation, finding objects and understanding objects. Patomäki et al. [1] suggest that for young children the objects in the virtual environment should be very simple. They used real mock-up models of the environment to help the children get familiar with the simulation, which proved to be useful in their study. The results of Patomäki et al. [1] also supported Sjöström's findings.

The GRAB project [12] has developed an architecture that enables visually impaired and blind people to explore three-dimensional virtual worlds using the senses of touch and hearing. The architecture is based on three tools: a 3D force-feedback haptic interface used with two fingers, an audio interface for audio messages and verbal commands, and a haptic modeler for designing the objects in the environment. The GRAB system was successfully used in a computer game for the blind [13]. The GRAB system uses two-handed interaction, which makes identifying objects and orientation easier for the blind. In one-point interaction, especially when used with only one hand as with the PHANTOM device, 3D objects must be designed carefully. We also believe that the use of such a system should be supported with other senses.

Software agents are a natural way of providing this support. Agents allow autonomous execution of multiple tasks, making the monitoring of user actions easier. For instance, the Open Agent Architecture (OAA) [14] provides a distributed agent architecture especially targeted at multimodal user interfaces. The main emphasis is on speech recognition, gestures and pen input. The system is built around a central facilitator that handles and forwards tasks that the agents want to have completed. The system allows dividing a task into subtasks and parallel execution of each subtask. The facilitator also provides global data storage and blackboard-like functionality for it. Several applications have been built using OAA. For example, InfoWiz [15], Command-


Talk [16] and Multimodal Maps [17] take advantage of the multimodal capabilities of OAA. Using a central dispatcher is very common in agent-based systems.

Coutaz [18] has suggested an agent-based approach to dialogue control. Her PAC model (Presentation, Abstraction, Control) describes an interactive system as a hierarchical collection of PAC agents. The Presentation facet of a PAC agent handles input and output behavior, and the Abstraction facet contains its functional core. The Control facet controls communication between agents and also between an agent's facets. The PAC model and its descendant PAC-Amodeus [19] have been used in the fusion of multimodal input modalities.

In the agent-based learning environment EduAgents [20], Hietala and Niemirepo introduced teacher and companion agents that have their own personalities and abilities. The teachers have different ways of teaching, and the companions may try to help the user with mathematical exercises. The companions were designed to be human-like, and thus they could also make mistakes. A part of the learning process was to work with the companion agent in various exercises and, for example, study the solutions it offered.

3 Software architecture for multimodal applications

3.1 Structure of the agent architecture

We used a distributed agent architecture consisting of a set of concurrently running agents. The basic agent architecture can be divided into three separate functional components (see Fig. 2). The system can be seen as a realization of a typical message dispatcher architecture, where the Message Channel is the central dispatcher and plays the most vital part in the system by providing a centralized way of passing messages

Fig. 2 An overview of the agent system

between agents around the network. The Agent Containers (and thus the agents) are connected to the agent system via the Message Channel. The third component is the actual application, which is handled by the Controller. The functionality and structure of the Message Channel, the Agent Containers and the agents are based on the FIPA agent specifications [21].

The Controller works as a bridge between the user and the rest of the agent system. It provides the means to navigate in a scene as well as between scenes, and allows the agents to manipulate the 3D environment and to interact with the user. It also provides information about the state of the world for the rest of the agent system. There are two special agents situated in the Agent Container of the Controller. The mediator agent acts as the representative of the Controller in the agent system and forwards messages sent to it to the Controller for further processing. A simple coordinate agent sends the user's coordinates to the logging agent once every second.

The system can initiate interaction with the user by playing sounds, by moving the user to another micro world, or by using tactile or force feedback to inform the user of various things. Currently, the notification is done by playing a sound and shaking the stylus. The user either accepts or declines to hear the new information. The information may be a question, in which case the system waits for the user's answer for a certain amount of time.

3.2 Classes of agents

Agents have different roles in our system. For example, all interaction between the system and the user happens through a pedagogic agent (see Sect. 4.3), so if another agent wants to interact with the user, it will send a corresponding message to the pedagogic agent asking it to do so. Three other vital agents are present in the current system.

First, there is a database agent that collects and maintains all information in the system. It contains facts that the agents can query, modify and observe. Any agent can inform the database agent that it is interested in getting a notification when a fact changes, and the database agent will send a message to that agent whenever the change happens.

Second, a filter agent inspects the information going through the system, trying to find meaningful information. Filter agents may update their findings through the database agent or send the new information to other agents. A filter agent is a special type of agent that can contain one or more filters. These filters are usually used to observe very specific events or data and to provide new information based on them. The filters can and should be chained to provide a chain of information, refining the data starting from atomic facts and ending up with high-level information, assumptions and conclusions. Each micro world has a specific filter agent that monitors events specific to that micro world and updates the database accordingly.


Third, perhaps the most important agent is the rule engine agent, which handles most of the interactive functionality. The agent has certain patterns that it tries to find in the user's actions, and it reacts to those patterns. The rule engine agent is built on top of the CLIPS rule engine [22] and the SQLite database library [23]. A Prolog engine has also been integrated into the system. The agent is tightly connected to the database agent so that it is possible to efficiently pass information between these two components. That is, the rule engine agent observes all the facts in the database and updates the corresponding facts in its working memory, and vice versa. As a rule engine, the agent has certain prerequisites (time being one) that, once fulfilled, cause some event to happen: another new piece of information being created, or perhaps a request for interaction with the user.

The following example (Fig. 3) illustrates the activity of the agent architecture and individual agents. In this scenario, the user is studying the Solar System and has been listening to the detailed information on Mercury and Venus. The user finds the Earth by following its orbit and presses the planet to hear more detailed information about it. This causes the following sequence of events (a minimal code sketch of this chain is given at the end of this section):

1. The Controller sends a message that the Earth was pressed to all logging agents.
2. One of the filters is listening to the logging messages and decides that a fact in the shared database has to be updated. The filter sends an update message to all databases.
3. The database agent updates the information and sends a message about the change to all the agents that currently observe the changed fact.
4. The rule engine agent listens to every fact in the database and receives the update message. The agent updates its working memory, and this causes two rules to be fired.
5. Detailed information should be played.

6. More information about rock planets should be played.
7. These two interaction requests are sent to the pedagogical agent, which forwards them to the Controller.

Fig. 3 A chain of filters: an example of handling an event in the agent architecture

All of the agents in this example use the default way of communicating through the Agent Container and the Message Channel. However, this can easily become a bottleneck, so agents that constantly send a lot of data should negotiate a custom connection to another agent using the conventional means, and then use the custom connection to send the data to the receiving parties. The architecture allows agents to open new TCP/IP and UDP network connections and to create new threads.
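The observe-and-notify mechanism of the database agent drives this chain. As a rough illustration, the following minimal sketch condenses the steps above into a single C++ program. It is our own illustration, not the actual implementation: the real system runs the agents as concurrent components exchanging FIPA-style messages and expresses the reactions as CLIPS rules, and every name here (FactStore, pedagogicalAgent, the fact keys) is an assumption.

```cpp
// A condensed, single-process sketch of the event chain above.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Observer = std::function<void(const std::string&, const std::string&)>;

// The database agent: holds facts and notifies interested agents on change.
class FactStore {
public:
    void observe(const std::string& fact, Observer cb) {
        observers_[fact].push_back(std::move(cb));
    }
    void update(const std::string& fact, const std::string& value) {  // step 3
        facts_[fact] = value;
        for (auto& cb : observers_[fact]) cb(fact, value);
    }
private:
    std::map<std::string, std::string> facts_;
    std::map<std::string, std::vector<Observer>> observers_;
};

// Step 7: the pedagogical agent forwards interaction requests to the
// Controller, which would play the corresponding speech.
void pedagogicalAgent(const std::string& request) {
    std::cout << "play: " << request << "\n";
}

int main() {
    FactStore db;

    // Steps 4-6: the rule engine agent observes the fact and fires two
    // rules when the Earth has been pressed.
    db.observe("planet.pressed", [](const std::string&, const std::string& v) {
        if (v == "Earth") {
            pedagogicalAgent("detailed information about the Earth");
            pedagogicalAgent("more information about rock planets");
        }
    });

    // Steps 1-2: the Controller logs that the Earth was pressed and a
    // filter turns the log message into a database update.
    db.update("planet.pressed", "Earth");
}
```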

4 A learning environment for visually impaired children

We have constructed a simulation application that makes use of the software architecture presented above. The simulation is specifically aimed at visually impaired children. The children can study natural phenomena related to the Earth, the Sun, and the Solar System. The aim was to produce agents that support the concept learning process of visually impaired and normally seeing children. In this section, we concentrate on presenting the applications and the navigation interface.

4.1 Selected natural phenomena

When selecting the natural phenomena for the simulation application, it was essential that each phenomenon was important and significant in everyday life. The simulated phenomena have to awaken sufficient interest in the children and to efficiently utilize the possibilities offered by multimodal interface technology. The phenomena chosen are ones that cannot easily and illustratively be presented in any other way, such as phenomena linked with space and elementary astronomy. An important selection criterion was that the selected natural


phenomenon, with its lawful regularities, forms a clear, well-organized knowledge structure and theory, and these aspects lay a strong and well-defined foundation for modeling the phenomena for the simulation application. The selected phenomena include multilevel interrelationships and central concepts such as space and time. The whole phenomenon is rather complex and abstract, but the phenomena are clearly integrated with each other and they form a coherent theory (see also [5]).

4.2 Cognitive requirements for modeling and designing

The cognitive requirements for modeling and designing the natural phenomena for a computer-based learning environment are based on theories and concepts of cognitive psychology, cognitive science, the socio-cognitive approach and science learning. The main aim is that the constructed learning environment should support children in forming integrated abstract conceptual structures and models of the selected natural phenomena, and support them in a continuous knowledge construction process concerning the phenomena in question. Thus, the cognitive requirements have to be taken into account when selecting the natural phenomena, writing the manuscript, modeling the phenomena and simulating them on the computer, as well as when displaying the simulation on the screen and using the computer simulation (see also [5, 24]).

The phenomena have to be modeled for the simulation application according to the theory and existing knowledge of the phenomena. This means, for example, that the information and knowledge on the screen and in the agents' descriptions, explanations and guidance have been designed and implemented according to present scientific knowledge. This is important for children's knowledge construction process and the formation of information and conceptual structures. Integrated and organized information and knowledge structures in human memory, at a general level of the phenomenon in question, are important for effective and demanding thinking, continuous knowledge construction and theory formation. In this application these requirements have been taken into account (see e.g. [5, 25]).

It is important that the use of the application is based on the user's own activity. Children can proceed according to their own interests and ideas. In the application, there are no set paths or rules on how to explore and go forward. Children can use as much time as they like each time. All this provides the children with the possibility to explore the phenomenon whenever and for as long as they want, and in the order they wish. When the program is under the user's control, it is possible for the user to concentrate on the phenomenon in question. A child's own activity, attention and interest support the development and construction of conceptual structures of the phenomenon. The more complicated the phenomenon

is, the more important are the child's own activity and interest in analyzing and organizing information and storing it in memory (see e.g. [5]).

4.3 Proactive pedagogical agents

The computer simulation provides the children with an exploratory learning environment where they can explore the selected phenomena according to their own interests and questions. Children's own questions are considered a starting point for explorations. A child's learning is viewed as an active process guided by his/her own questions and previous knowledge (see e.g. [26]). To achieve progress and deeper understanding, children need guidance and support for their exploration. In our system, proactive pedagogical agents are used to scaffold each child's inquiries in the simulation by asking questions and encouraging the child's own questioning and hypothesis formation, with the aim of guiding the child's exploration process towards scientific inquiry. The agents' operations in this system are based mainly on auditory and haptic feedback, since the system is developed especially for visually impaired children.

The proactive pedagogic agents support children's explorations by encouraging a child's own questioning, directing a child's attention to objects and their relationships in the phenomena, asking questions and making suggestions, starting from familiar everyday phenomena and progressing gradually to more complicated topics and to the causes and explanations of the phenomena. The agents scaffold each child with respect to his/her capabilities and exploration paths. The agents do not make any decisions for the child or force him or her onto any particular exploration path. At any moment a child can choose either to listen to what the agents ask or suggest, or to ignore them, by pressing the yes or no button. As a child's explorations proceed, the agents' support may decrease step by step. The agents' support is based on the child's explorations and user profiles. The right timing and form of the support and questions are very important in the agents' actions. The construction of the rules of the proactive pedagogic agents has been one of the subjects of design and testing.

The agents in each micro world have different imaginary characters and different names and voices. The entire learning environment has been constructed so that narration and play are essential parts of children's explorations. These elements form an important pedagogical support system for children's exploration, science learning and thinking. This is because children's thinking takes place in the form of continuing events, and fairy tales and stories help in recognizing and keeping wholes in mind. In addition, the purpose of the different imaginary characters is to help children in analyzing and recognizing each individual application (so-called micro world), and this in turn helps children in navigation and in forming the interrelationships of the different phenomena (see also [27]).
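The accept-or-decline interaction described above (and in Sect. 3.1) is simple enough to sketch. The snippet below is a hypothetical illustration, not the system's code: the stylus shake, the notification sound and the button polling are stubs standing in for the real device layers, and the timeout policy is an assumption.

```cpp
// A minimal sketch of offering an agent's message to the child.
#include <chrono>
#include <optional>

enum class Button { Yes, No };

// Stubs standing in for the real device, audio and input layers.
void shakeStylus() { /* device-specific force pattern would go here */ }
void playNotificationSound() { /* audio playback would go here */ }
std::optional<Button> pollButtons() { return std::nullopt; }  // stub

// Shake the stylus, play a sound, then wait a bounded time for a
// yes/no answer. Returns true if the child accepted the message.
bool offerMessage(std::chrono::seconds timeout) {
    shakeStylus();
    playNotificationSound();
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (std::chrono::steady_clock::now() < deadline) {
        if (auto pressed = pollButtons())
            return *pressed == Button::Yes;  // accept or decline
    }
    return false;  // no answer within the timeout: treat as declined
}
```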


4.4 Micro worlds

The simulation consists of six micro worlds: the Earth, the Solar System, the Earth Orbit, the Atmosphere, the Earth Internal Layers, and the Study Room (Fig. 4). The user starts from the Central Station (Fig. 4, center). From there he/she can open doors to the other micro worlds. A virtual door can be opened by pushing it with the PHANTOM stylus. The system then selects the corresponding micro world and guides the stylus to a suitable starting point. When navigating from one micro world to another, the user must travel through the Central Station. This lessens the likelihood of getting lost in the environment, because the user is always only one step away from the Central Station.

Every micro world has its own representative with a different voice (such as Captain Planet, Andy Astronaut and the Earth Giant). When the user enters a micro world, the representative gives an introduction and tells what the user can do in it. The main task of the agents in the system is to follow the user's steps and paths of exploration and to provide adequate support for the exploration, with questions and additional information given through the representative.

Fig. 4 The navigation structure and the six different applications developed on top of the architecture

In the Solar System (Fig. 4, top) the children can explore the Sun, the different planets and the orbital system of the planets. The orbits of the planets are implemented as grooves on a black plane that represents the void of space. The plane helps the child to find the planets by restricting the depth of the scene to the level of the planets, so that the PHANTOM stylus cannot fall under the planets. Whenever the user enters an orbit, the system tells which planet is in question. The planets are stationary and implemented with a light magnetic pulling force. When the user finds a planet, the system tells its name and gives a brief description of it. If the user pushes the planet with the stylus, the system tells some more information about it.

In the Earth (Fig. 4, top-right) the children can explore the surface of the globe. The spherical Earth can be felt three-dimensionally with the stylus. The surface of the globe is implemented with a bump map that gives it a noticeable texture, which feels different in the oceans, on plain land and in the mountains. The Earth has a gravity field that can be felt with the stylus as a slight pull towards the Earth. The gravity pull helps the child to locate the planet in empty space. After the child has found the Earth, the gravity helps in exploring the globe and prevents the stylus from falling into the empty space around the


globe. When exploring the surface of the Earth the child can hear the names of the continents spoken. There is also ambient background noise that contains people's voices and different vehicle and animal sounds. The sounds of the sea can be heard when exploring the oceans. The user can rotate the Earth by moving the stylus to the far right side of the world. When doing so, the Earth starts to spin slowly and the user can hear the sound of a clock ticking. As visual feedback it is possible to see a view of the spherical Earth and the virtual representation of the PHANTOM stylus.

In the Earth Orbit (Fig. 4, top-left) the children explore the Earth's revolution around the Sun. They learn the relative position of the Earth to the Sun during the different seasons. When the user follows the orbit, the stylus is shown as a small planet Earth, and when the user is outside of the Earth's orbit, the stylus is shown as a small space rocket. The system tells the season and plays sounds that are characteristic of that season (e.g. rain for autumn). The system also gives some general information, such as the distance of the Earth from the Sun.

The Study Room (Fig. 4, bottom-right) contains a room with doors along its walls. When the user presses a door with the stylus, the system presents him/her with a question about the contents of the micro worlds. These questions have simple yes/no answers, and the user answers by pressing the yes or no button on the SpaceMouse. The room has grooves on the floor, and the users can find the doors by following these aids.

In the Earth Internal Layers (Fig. 4, bottom-left) users are able to explore the internal layers of the planet Earth. The layers are represented as a cross section of the northern half of the Earth. The layers can be freely explored with the PHANTOM stylus. The topmost layer is the hardest and 'rockiest' of all the layers. When the user moves towards the bottom, the feel gets smoother and smoother. When reaching the Earth's core, the haptic feedback simulates the feel of the Earth's liquid center. As visual feedback the user can see the cross section with its layers and a virtual representation of the PHANTOM stylus.

In the Atmosphere (Fig. 4, bottom) children can explore the layers of the atmosphere of the Earth. The different layers are presented as different ambient sounds. The lowest layer contains human sounds, birds and the humming of trees. The next layers contain airplane sounds and different kinds of windy air current sounds. When the user approaches the top border of the application, the sounds get quieter and disappear as the user leaves the atmosphere for space. The haptic feel is very subtle and light, but there is still some damping when moving the stylus, so that the child gets a concrete 'feel of the air'. Visual feedback is presented as a simple background picture that shows the atmosphere fading into black space.
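Several of the haptic effects described above reduce to simple force models evaluated in the haptic rendering loop. As one illustration, the following sketch (our own, not ReachinAPI code; the Vec3 type and the constants are assumptions) shows a spring-like gravity pull of the kind used in the Earth micro world: the force is zero on the globe's surface and grows with the stylus's distance from it.

```cpp
// A minimal sketch of the Earth micro world's gravity pull.
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Force applied to the stylus at position p, given the globe's center c
// and radius r. On or inside the surface no pull is needed; outside,
// a spring-like force draws the stylus back toward the globe.
Vec3 gravityPull(const Vec3& p, const Vec3& c, double r, double stiffness) {
    Vec3 toCenter = c - p;
    double d = toCenter.length();
    if (d <= r) return {0.0, 0.0, 0.0};       // touching the globe: no pull
    Vec3 dir = toCenter * (1.0 / d);          // unit direction toward center
    return dir * (stiffness * (d - r));       // force grows with distance
}
```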

4.5 Navigation support

There can be three types of transitions between the micro worlds. First, a pie-shaped navigation menu provides the means to move back one level. Second, a micro world may also have a set of objects that can be used as push buttons that trigger a direct transition to another micro world. This kind of transition is called a 'route'. Each such object must have a pushable haptic surface, which the ReachinAPI [3] provides. Finally, the agents may ask the pedagogic agent to ask the Controller to switch to some specific micro world. This enables a flexible application structure that could, for example, implement navigation only through agents.

We ended up using a door metaphor similar to the one used by Patomäki et al. [1], and a single center point, the Central Station, from which the user is only one step away (Fig. 5). Furthermore, we decided that the user should always travel through the central point when traveling from one micro world to another. That way, getting lost between the micro worlds is very unlikely. This was accomplished by disabling the navigation menu in every micro world and defining appropriate routes between the micro worlds. The user can get back to the center point by pushing a Magellan SpaceMouse button.

When the user moves from one micro world to another, the stylus is moved to the central position and held there until the new micro world is loaded and displayed. This gives the user the possibility to start exploring from the same point every time he/she enters the micro world, allowing memorization of the scene and strengthening the sense of location. The other reason to guide the stylus to a certain location when moving from scene to scene is that there may be accidental force spikes if the stylus is located within the same space as some solid object in the new micro world. Moving the stylus away from the potential interference area makes the use of the application pleasant and smooth. In the early stages of the project we used a separate magnet that appeared at the same coordinates as the stylus and started to move towards the target location, dragging the stylus along. The early

Fig. 5 The central station


user studies showed this technique to be flawed, as the probability of the user losing the guiding magnet was relatively high. The magnet was soon replaced with a global force vector that provides a much smoother and more reliable way to move the stylus to a given location.

Other haptic elements are also used to make navigation in the micro worlds easier. The shape of the Central Station (Figs. 4, 5, center) is hexagonal, and the 'doors' to the other micro worlds are in the corners of the shape, as they are easier to find that way. The Study Room and the Solar System have grooves that the user can follow to locate something interesting; in the Solar System the grooves are actually the orbits of the planets. Some of the interesting objects are magnetic, to give the user a hint of their existence and to guide the user to find them. In some cases, mainly in the Earth micro world (Fig. 4, top-right), we use spring forces to restrict the user's movement and to guide him or her to the interesting parts of the micro world. The spring in the Earth micro world also represents the idea of gravity.

For every micro world we also had a corresponding real plastic model. By touching it with the fingers, the child can get a general picture of the world he/she is exploring. Each micro world also has a distinct ambient sound or music to support the sense of location.
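The global force vector that replaced the magnet can be pictured as follows. This is our own sketch under assumed names and constants: on each haptic frame the force points from the stylus toward the target starting point, and its magnitude is clamped so the pull stays gentle and, unlike the moving magnet, cannot be lost.

```cpp
// A minimal sketch of the global guiding force.
#include <algorithm>
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Computed once per haptic frame while a new micro world is loading.
Vec3 guidingForce(const Vec3& stylus, const Vec3& target,
                  double gain, double maxForce) {
    Vec3 toTarget = target - stylus;
    double d = toTarget.length();
    if (d < 1e-6) return {0.0, 0.0, 0.0};            // already at the target
    double magnitude = std::min(gain * d, maxForce); // clamp to keep it gentle
    return toTarget * (magnitude / d);               // unit direction * magnitude
}
```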

5 Experiences with the children

From the beginning of this study we have tried to involve children in the planning and testing process. The development and testing of the learning system started in January 2003 and has continued throughout the project. At first we tested the manuscript we had written for the agents' operations. The testing included interviews with ten 5–9 year-old children, conducted mainly at a day care center. In the interviews the children were asked about their interests and questions with regard to the selected phenomena. We also read some short narrations of the agents' operations to them, and after that the children were asked how they liked the manuscript, how the narration could be improved, and whether there were any difficult utterances in the agents' expressions. This testing helped us improve the lines we had designed for the agents, and at the same time it gave us insight into children's interests concerning the selected natural phenomena.

The user tests have been carried out both in a usability laboratory and at the school for visually impaired children. The tests in the laboratory have been organized whenever some new designs have needed testing. Both visually impaired and sighted children have participated in these tests at different phases of the system development. Each test was carefully planned, and to a large extent the testing procedure developed by Patomäki et al. [1], which is specifically adapted for visually impaired young children, was used in the laboratory tests. All the tests were recorded on

video. These small tests have helped us observe what kinds of solutions support children's independent explorations, and many of the solutions mentioned in the previous section on navigation support were developed on the basis of these tests.

A larger test was conducted in spring 2004 at the school for visually impaired children. As we wanted to get as much feedback as possible on the usability and accessibility of our system, we wished to have slightly older children than our actual target group participate in this test. Thus, seven 12 year-old visually impaired children took part in this test during their one-week teaching period at the school. The testing procedure at the school differed from the laboratory tests in that the children's task was to use the system and act as 'child experts': to give comments and feedback about the system and to evaluate how well they thought smaller children would be able to use it. The micro worlds tested were the Solar System, the Earth, and the Study Room. Four of the children were totally blind and three were partially sighted. For the children who could benefit from visual feedback, this was provided through a 2D projected view of the virtual environment. The children were encouraged to give feedback during the use of the simulation, and they were also briefly interviewed directly after the use.

It surprised us how versatile and diverse the feedback we received from the children was. The children commented, for example, on the usability of the system; especially the Earth micro world was experienced as difficult. One child said that "The Earth was still quite too disorganized and confusing to explore", and another commented on the Earth's revolving: "I did not know how much the Earth revolved with a sound of 'click'". The children also gave us new ideas for the agents' operations with regard to the selected natural phenomena. For example, in the Solar System micro world the children suggested: "Maybe more information about the Sun and its structure—a small child might think that the Sun is solid, although it indeed is not" and "You could say, for example, how cold it is on Pluto...if possible." With regard to the Earth micro world, one child proposed that "When exploring the Earth, the Earth Giant could tell something about the people who live on the continents you are currently exploring". Each micro world is explored using some imaginative vehicle: for example, the Solar System is explored using a space shuttle. One of the children also commented on the Central Station, where one chooses which micro world to explore: "It would be nice if the vehicles would sometimes function and sometimes not. It would be more realistic, too. But they shouldn't be out of order for too long, otherwise one could get bored."

All in all, the children took the evaluation of the system very seriously, and we received very valuable feedback from them. The children's feedback, as well as the video recordings of the sessions, has been used as we have developed the system further.


The next evaluation was a research experiment targeted at the actual target group of our learning system, namely 7–8 year-old visually impaired children. The research experiment was carried out in autumn 2004 at the school for visually impaired children. Two 7–8 year-old blind children participated in this test. Afterwards, another child from the same age group participated; this test was carried out in the usability laboratory a few months later. The micro worlds tested were the Solar System, the Earth, and the Earth Orbit. Tangible plastic models (Fig. 6) of the micro worlds were used in this research experiment for the first time. They proved to be a good solution, and especially familiarizing the children first with the plastic model of the Central Station seemed to help the children's navigation in the system. The pedagogic agents also had more operations than in the previous tests.

The children were very excited and interested in using the system, and they reacted keenly to the questions and thoughts raised by the agents. Some of the children even commented on and thought aloud about the agents' questions and explanations. Allowing the user to choose the information he or she wants to hear made the new information more interesting for the children. Stylus shaking seemed to be a very natural way to inform the blind user of the agents' message, and the children learned quickly how to receive a message. They also actively selected whether they wanted to hear what the agent had to say, and some users were eagerly waiting to hear more information.

From the children's comments it became evident that we had succeeded in modeling, for example, the Earth and its surface realistically enough that the children were able to recognize the areas of sea and land on the basis of auditory and haptic feedback. In the Earth Orbit micro world it was also easy to follow the orbit after it had been examined once. The sounds of the different seasons were also recognized. Especially good results were obtained from the use of the planet orbits in the Solar System: the children could easily follow them and acquire information about the planets. The test showed that 7–8 year-old blind children are able to use the system quite independently. The researcher helped the child in situations in which the hold

of the stylus became too difficult or the device overheated. The researcher also encouraged the children's explorations, asked questions, made suggestions and offered some explanations, but it was the child who decided how to proceed in the simulation and what was explored.

6 Discussion

The design and implementation of the system has been a huge learning experience for us. Working with the special target group of visually impaired children, and making use of specific hardware such as the PHANTOM device, poses its own questions, possibilities and limitations.

Our system has rather simple unimodal input, as we use only manual (haptic) input. The filter agents are used in a way similar to PAC-Amodeus [19] to process information from low-level input, such as touching a surface, into high-level abstractions concerning exploratory learning. As Patomäki et al. [1] suggested, we kept the objects in the scene as simple as possible. In addition, we learned that navigation inside the environment should be restricted, or at least guided in some way. For example, we used a spring force in the Earth micro world (Fig. 4) to guide the user to the area where there was something to study. Our user studies showed that older children can use the PHANTOM and 3D applications adequately when the use is guided and supported by the system.

Sjöström [10, 11] states that a virtual haptic environment for the blind should have reference points that provide navigation help and a sense of location for the user. In our system this is realized within the micro worlds by moving the stylus to the same location every time the user enters a micro world. In the simulation as a whole, the Central Station acts as a reference point when navigating from one micro world to another.

Stylus shaking and tapping is parameterized, but further study would be required to determine which tapping patterns can be distinguished as unique and which ones could be used to give a hint of what kind of information the incoming message contains. We could also create more specialized objects, such as spheres that react to the proximity of the stylus by sending the distance of the stylus from the center of the sphere. Such proximity objects could be used to monitor the user's actions, or to create 'interest points' in a micro world. The system could monitor the user's distance from the interest points, and if the user were far away from them all, the system could help the user find the areas of interest by using sound and force feedback.
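The proposed interest points could be monitored with a simple distance test. The sketch below is a hypothetical illustration of the idea, not an implemented feature; all names and the threshold are assumptions.

```cpp
// A minimal sketch of the proposed interest-point monitoring.
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

double distance(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns true when the stylus is far from every interest point, in
// which case the caller could play a sound or apply a force hint.
bool farFromAllInterestPoints(const Vec3& stylus,
                              const std::vector<Vec3>& points,
                              double threshold) {
    for (const auto& p : points)
        if (distance(stylus, p) < threshold) return false;
    return true;  // no interest point nearby: trigger an audio/force hint
}
```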

Fig. 6 A tangible model

7 Conclusions

In this paper we presented an agent-based multimodal software architecture aimed at supporting visually impaired children. Several teaching applications were


implemented on top of the architecture. The initial user studies support the usefulness and applicability of the architecture, as well as our choice of the technology used to build it. We were especially pleased to find that the PHANTOM technology produced haptic feedback that is applicable for visually impaired children of this age group. As the basic functionality of the system and the micro worlds is now complete, the system will be refined and used in further pedagogical studies.

Acknowledgments This research was funded by the Academy of Finland, Proactive Computing Research Program (grants 202179 and 202180).

References

1. Patomäki S, Raisamo R, Salo J, Pasto V, Hippula A (2004) Experiences on haptic interfaces for visually impaired young children. In: Proceedings of the sixth international conference on multimodal interfaces (ICMI'04). ACM Press, pp 281–288
2. SensAble Technologies Inc. http://www.sensable.com
3. Reachin Technologies AB. http://www.reachin.se
4. 3Dconnexion. http://www.3dconnexion.com/
5. Kangassalo M (1997) The formation of children's conceptual models concerning a particular natural phenomenon using PICCO, a pictorial computer simulation. Doctoral dissertation. Acta Universitatis Tamperensis 559. University of Tampere, Tampere, p 188
6. Kangassalo M (1991/1999) PICCO - kuvallinen tietokonesimulaatio [PICCO, a pictorial computer simulation]. CD-ROM 951-98035-0-5
7. Kangassalo M (1992) The pictorial computer-based simulation in natural sciences for children's use. In: Ohsuga S, Kangassalo H, Jaakkola H, Hori K, Yonezaki N (eds) Information modelling and knowledge bases III: foundations, theory and applications. IOS Press, Amsterdam, pp 511–524
8. Jansson G, Billberger K (1999) The PHANToM used without visual guidance. In: The first PHANToM users research symposium (PURS 99). http://mbi.dkfz-heidelberg.de/purs99
9. Magnusson C, Rassmus-Gröhn K, Sjöström C, Danielsson H (2002) Navigation and recognition in complex haptic virtual environments - reports from an extensive study with blind users. In: Proceedings of Eurohaptics 2002
10. Sjöström C (2001) Designing haptic computer interfaces for blind people. In: Proceedings of the sixth international symposium on signal processing and its applications (ISSPA 2001). IEEE, pp 68–71
11. Sjöström C (2002) Non-visual haptic interaction design: guidelines and applications. Doctoral dissertation, Certec, Lund Institute of Technology

12. Computer graphics access for blind people through a haptic virtual environment. http://www.grab-eu.com
13. Wood J, Magennis M, Arias E, Gutierrez T, Graupp H, Bergamasco M (2003) The design and evaluation of a computer game for the blind in the GRAB haptic audio virtual environment. In: Proceedings of Eurohaptics 2003
14. Moran DB, Cheyer AJ, Julia LE, Martin DL, Park S (1997) Multimodal user interfaces in the open agent architecture. In: Proceedings of the 2nd international conference on intelligent user interfaces. ACM Press, pp 61–68
15. Cheyer A, Julia L (1999) InfoWiz: an animated voice interactive information system. In: Third international conference on autonomous agents (Agents'99), communicative agents workshop
16. Moore R, Dowding J, Bratt H, Gawron JM, Gorfu Y, Cheyer A (1997) CommandTalk: a spoken-language interface for battlefield simulations. In: Proceedings of the fifth conference on applied natural language processing. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pp 1–7
17. Cheyer A, Julia L (1998) Multimodal maps: an agent-based approach. In: Multimodal human-computer communication. Lecture notes in artificial intelligence 1374. Springer, pp 111–121
18. Coutaz J (1987) PAC, an object oriented model for dialog design. In: Proceedings of INTERACT '87. North-Holland, pp 431–436
19. Nigay L, Coutaz J (1995) A generic platform for addressing the multimodal challenge. In: Proceedings of ACM CHI'95. ACM Press, pp 98–105
20. Hietala P, Niemirepo T (1995) A framework for building agent-based learning environments. In: Proceedings of AI-ED 95: artificial intelligence in education. AACE, p 578
21. Foundation for Intelligent Physical Agents. http://www.fipa.org/
22. CLIPS: a tool for building expert systems. http://www.ghg.net/clips/CLIPS.html
23. SQLite. http://www.sqlite.org/
24. Kangassalo M (1998) Modelling a natural phenomenon for a pictorial computer-based simulation. In: Kangassalo H, Charrel PJ, Jaakkola H (eds) Information modeling and knowledge bases IX. IOS Press, Amsterdam, pp 239–254
25. Kangassalo M (1996) PICCO as a cognitive tool. In: Tanaka Y, Kangassalo H, Jaakkola H, Yamamoto A (eds) Information modelling and knowledge bases VII. IOS Press, Amsterdam, pp 344–357
26. Lonka K, Hakkarainen K, Sintonen M (2000) Progressive inquiry learning for children - experiences, possibilities, limitations. Eur Early Childhood Educ Res J 8:7–23
27. Kangassalo M, Raisamo R, Hietala P, Järvi J, Peltola K, Saarinen R, Tuominen E, Hippula A (2005) Proactive agents that support children's exploratory learning. In: Kiyoki Y, Wangler B, Jaakkola H, Kangassalo H (eds) Information modelling and knowledge bases XVI. IOS Press, Amsterdam, pp 123–133