
VisPad: A Novel Device for Vibrotactile Force Feedback

Doanna Weissgerber
University of California, Computer Science Department
Santa Cruz, CA 95064 USA
[email protected]

Bruce Bridgeman
University of California, Psychology Department
Santa Cruz, CA 95064 USA
[email protected]

Abstract

VisPad is a new haptics design for visualizing data, constructed from commodity massage chair pads with custom controllers and interfaces to a computer. It is an output device for information transmitted to a user who sits on the pad. The interface is unusual in that it offers a large feedback area and is passive in nature: unlike most current haptics devices, it leaves the user's hands free to work on other things. VisPad can serve as the sole haptics device or be used in conjunction with other haptics devices. We tested its usefulness for visualizing data by adding a VisPad interface to our protein structure-alignment program (ProtAlign) and performing usability studies. The data demonstrate that more information can be perceived at one time with our multi-modal presentation than with graphics-based visualization alone.

1. Introduction

A data set may have only a few variables. For this type of data, a visual display is often adequate. As the number of variables increases, however, it becomes increasingly difficult to display images in an easily comprehensible form. Once we exhaust all of the easily distinguishable colors, textures, and shapes available to visual perception, graphical visualization tools become less effective. Users can spend more time trying to interpret the visualization than understanding the information. New modalities have been developed to enhance understanding of multi-dimensional data. Sonification has been shown to enhance the user's ability to understand data quickly [7]. But sound lacks a high-acuity spatial dimension, and when the easily discerned sound cues such as instruments, notes, and tempo also become confusing, where does the scientist turn? We turn to the field of haptics for additional sensory input routes [28].

Alex Pang
University of California, Computer Science Department
Santa Cruz, CA 95064 USA
[email protected]

There are several disadvantages to current haptic technology. Desktop interfaces, such as the PHANToM by SensAble Technologies, take up valuable desk space. Haptic gloves can weigh almost 400 grams and are tiring during prolonged use [4]. Presently, most haptic feedback devices occupy at least one hand or cover the hand with a glove (e.g., the CyberGlove with CyberGrasp or CyberTouch by Immersion Inc., the PHANToM, or haptic mice by Logitech). Because most applications use keyboards, it is not always appropriate to tie up the user's hand. Feedback from devices such as haptic mice is lost if the user is typing, and wearing gloves while typing can be uncomfortable. Hand-oriented devices such as the PHANToM also make it difficult to just sit back and take in the visualization of multi-modal data. Haptic visualization research is exploring motors, air jets, and skin stretch to generate effects. But again, most research is focused on the hand as the only channel for haptic output. Hayward et al. developed a version of the Pantograph, a force feedback device intended for direct manipulation with the hand [9]; it was originally developed to allow visually handicapped users to access computer applications. Ogi et al. developed a multi-sensory data environment; however, although they experimented with a wind sensation, they mounted small fans around the user's hands [18], again restricting hand use. To address these problems, we have developed VisPad. Grigore Burdea notes that in haptic technology there is a lack of force feedback to the body [4]. The VisPad prototype is a massage chair pad with eight individually computer-controlled motors. The prototype is able to control any variable-voltage device. It is non-intrusive: VisPad simply attaches to the user's normal office chair. The user simply sits on our haptic device, which can be used with any software. We demonstrated a prototype at the IEEE Information Visualization 2001 conference with the USGS earthquake dataset. Since then, VisPad has proven very effective in our recent usability study. To test its effectiveness, we attached VisPad to our program ProtAlign [29], a multi-variate tool useful in determining the


structure of an unknown protein. We were able to determine that it significantly decreased the time required to assess positions along a protein structure-sequence alignment.

2. Background

The goal of scientific visualization is to improve the user's comprehension of information. When we find better methods of information display, we can decrease the amount of time it takes to understand information [6]. Multi-modal visualization techniques, along with enabling more channels for information, provide scientists with more appropriate models for information output. The hypothesis of modality appropriateness proposes that the sensory modalities are designed to process different types of information effectively. In situations involving discrepant multi-modal inputs, the modality that is most reliable for the task is weighted most heavily [13]. Multi-sensory information visualization gives the scientist more possible sensory input channels. With more input channels, natural mappings become easier to realize, and more information can be presented at once. But this multi-sensory input mapping presents challenges. Not all input senses have the same input bandwidth. Vision has by far the highest information bandwidth. Sonification also has a fairly high information bandwidth. Our capacity for comprehension of haptic stimuli is much lower [6]. Haptic rendering, a relatively new modality for human-computer interaction, translates computer information through force feedback devices that convey movements or sensations felt by the user. Most of the research in this emerging field has focused on the use of haptics as an alternative to visual displays for people who are visually impaired or as an enhancement to virtual reality (VR) systems. We do not consider haptics a replacement for visual displays; rather, it is an additional mode for helping a user understand information more quickly. When we consider haptic mappings, we must recognize some of the limitations of mapping touch. Consider skin sensitivity, which varies throughout the body. The two-point threshold is the minimum distance at which a subject can determine that two distinct points of contact are being made on the skin; two-point thresholds are directly correlated with skin sensitivity [20]. Different people have dissimilar skin sensitivity levels. Females feel light touches more easily than males, and age can adversely affect skin sensitivity. Even temperature can influence skin sensitivity; cooled skin is less sensitive than warm skin [20]. There are several types of haptic stimulation. Recent research in pneumatic stimulation has used air jets or air pockets that push the lining of a glove against the user's skin. There is a numbing effect on the skin, however, which means

that the ability to sense the air is temporarily disabled. This is a low-bandwidth haptic method: for the user to sense each of the air jets separately, they must be well spaced [5]. Vibrotactile stimulation generates vibrations at different frequencies and amplitudes that are used to stimulate the user. Haptic devices employing vibrotactile stimulation seem to be the best choice because they can be built very small and lightweight, and can achieve a high bandwidth. The "Tacticon 1600" is an electrotactile system used to replace sound for the hearing impaired. Small electrodes attached to the user's fingers provide electrical pulses that represent the frequency of sound waves. However, because the intensity window between the sensory threshold and the pain threshold is rather narrow, the range of available amplitudes is limited [5]. Another haptic stimulation approach is functional neuromuscular stimulation (FNS), which provides stimulation directly to the neuromuscular system. Electrodes are placed directly in muscles and in the nervous system. Although very interesting from a theoretical standpoint, this method is definitely not appropriate for the standard user: it is invasive and can be painful [5]. Much haptic research is applied in virtual reality systems. Haptic output displays can be broadly grouped into two main categories: those used in immersive and in non-immersive environments. The immersive systems tend to serve very specialized applications, such as surgical trainers, pilot trainers, etc. Medical VR is concerned with the visualization of and interaction with anatomical data. It is currently used to train surgeons and to plan and perform surgeries and therapy; other uses include evaluating MRIs, ultrasound scans, physiological images, etc. Increasingly, realizations in modalities other than vision have started to surface: sound, force, touch, and smell. While this is very encouraging, most of the equipment is still quite bulky, specialized, and expensive. Though immersive displays are useful for virtual reality, they are not cost effective, nor really necessary, for the scientific interpretation of varied data sets. We are interested in non-immersive haptic displays that are non-obtrusive. Computer users still expect to have a keyboard, a mouse, a monitor, and a comfortable chair to sit in as they evaluate scientific data. Our sensory tactile display is currently an off-the-shelf massage chair pad that can be put on any existing chair. However, this display method is not limited to vibrating motors; other possibilities include connecting the controllers to fans, lights, heating elements, etc.

3. VisPad

The VisPad prototype was built using a common massage chair pad by Homedics. The control wires in the




Homedics pad were cut and attached to an RDAG12-8(H) by ACCES I/O Products, Inc. [1]. The RDAG12-8(H) uses serial communication to set the power levels of eight digital-to-analog converters (DACs). Each of these DACs can vary the voltage level on a wire from 0 to 10 volts; the higher the voltage, the more intense the vibration of the motor in the massage chair pad. We can thus independently control the level of vibration for each of the eight motors. The DACs could also be used to control the speed of a fan, the amount of heat in a heating element, etc. VisPad can be controlled by any computer with the ability to write to the serial port. Currently the system is being used with an 833 MHz Dell Inspiron 8000 (see Figure 1) running Windows 2000.

Figure 1. The VisPad prototype and location of the eight motors.

3.1. Use of the VisPad

Our research has focused on the investigation of natural visualization techniques. Natural visualization can be a direct analogy or an easily understood mapping that requires little training. When integrating a haptic interface, the designer must determine (a) what task(s) the user will perform and (b) how the task can be represented. Ultimately, the user's ability to form a psychological representation constrains the tasks which can be performed [12]. VisPad can be used for natural visualization techniques or for more abstract mappings. It can be used alone or in conjunction with other haptic devices. There are two main goals in haptics: motor control and perception [11]. Motor control allows the user to modify information. VisPad has been designed for perceptual tasks, allowing the user to learn the properties of presented information.

Figure 2. Screenshot of ProtAlign.

3.2. VisPad with ProtAlign

If all of the tasks were extremely easy to perform visually, there would be no need to pay attention to haptic information. We wanted to evaluate VisPad with a real-world multi-dimensional problem. Therefore we attached the haptic device to the program ProtAlign [29, 8], a protein structure-sequence alignment viewer.

A protein is made up of a chain of amino acids; 20 different amino acids are found in nature. Graphically, ProtAlign represents the overall three-dimensional backbone structure of the protein with a ribbon-like structure (see Figure 2). ProtAlign uses glyphs to visualize the size and type of each amino acid. Each glyph is comprised of one or more pegs. The shape of the pegs tells the user the amino acid type. There are four types of amino acid: hydrophobic (cube peg), hydrophilic (cylinder peg), charged basic (pyramid peg), and charged acidic (cone peg). The number and arrangement of the pegs indicate the size of the amino acid. ProtAlign visualizes the alignment of two proteins. Each position along the structure visualizes a pair of amino acids represented by a glyph pair. Cartoon shapes along the backbone show whether the amino acid positions are in a helix, a sheet, or a loop region (the secondary structure of the protein). Color is used to indicate the probability of an amino acid substitution occurring in nature as determined by the Blosum62 matrix [10]. Color also indicates the Environment and Exposure scores as determined by the Environments program [2]. Environments examines each position along the alignment and its neighboring amino acids, determines the secondary structure, and calculates the exposure to substrate at each of the positions along the protein structure-sequence


alignment. The Environment metric assesses how much each amino acid "likes" the environment at its current location. The Exposure metric reveals the exposure to outside substrates at an amino acid position. ProtAlign uses a rainbow color analogy. For the Blosum62 scoring, each position along the protein alignment is evaluated: the pair of amino acids is compared and colored to indicate the score. Blue indicates a high probability of substitution, green a good probability, yellow an average probability, and red that the substitution is very unlikely to occur. For the Environmental coloring, the comfort level of each amino acid is evaluated and each glyph in the pair is individually colored: blue indicates very comfortable, green comfortable, yellow barely comfortable, and red uncomfortable in the current environment. Given these four colors, ProtAlign can distinguish five levels when comparing environmental preference: much better, a little better, the same, a little worse, or much worse. The program Environments defines three Exposure levels: exposed, partially buried, and buried. Exposed positions are relatively free to move around; because things that move tend to generate heat, we used red to indicate exposed positions. Buried positions have little freedom of movement; things that do not move tend to grow cold, so blue represents buried positions. We needed an intermediate color for partially buried positions and chose green, keeping the rainbow analogy that pervades the color mapping in ProtAlign.
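To make the mapping concrete, here is a minimal sketch of the rainbow analogy for the three Exposure levels, written in C++ for illustration; the enum and RGB values are our own labels, not ProtAlign's actual code.

    #include <array>

    enum class Exposure { Buried, PartiallyBuried, Exposed };

    // Rainbow analogy: red for "hot" exposed positions, blue for "cold"
    // buried positions, green in between.
    std::array<float, 3> exposureColor(Exposure e) {
        switch (e) {
            case Exposure::Exposed:         return {1.0f, 0.0f, 0.0f}; // red
            case Exposure::PartiallyBuried: return {0.0f, 1.0f, 0.0f}; // green
            case Exposure::Buried:          return {0.0f, 0.0f, 1.0f}; // blue
        }
        return {0.0f, 0.0f, 0.0f}; // unreachable
    }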

The Exposure metric was also mapped to VisPad. When an amino acid position is selected on the screen, the position is mapped back to the motors of VisPad. The vibration level represents the exposure level, with high vibration indicating a highly exposed amino acid position. Figure 3 shows the ProtAlign screen with a numbered grid superimposed on the image to demonstrate the motor mapping. As the molecule is manipulated, the location of the vibration may move. For example, if a user selects the red amino acid position in quadrant 2 of Figure 3, motor 2 in the upper left back would indicate the exposure level. If the user then rotates the molecule, moving the amino acid position to quadrant 7, motor 7 on the right thigh would vibrate. Because there are only eight motors in the prototype, we allow only two amino acid positions to be haptically visualized at once. This greatly decreases the probability that the same motor will be needed to visualize both amino acid positions at the same time. We also experimented with mapping the ProtAlign exposure metric to VisPad without the positional information. In this case, only one amino acid pair could be haptically visualized. The exposure metric was mapped to the number of motors and the level of vibration: the higher the exposure score, the more an amino acid position can move around, so we added more motors. When an amino acid position was buried, we applied 5 volts to motor 3, causing a mild vibration. When a position was partially buried, we applied 6 volts to motors 3, 4, and 6. For exposed positions, where we expect the amino acid to have the most freedom of movement, motors 1, 2, and 3 were vibrated with 10 volts, while 6 volts were applied to motors 7 and 8.
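The position-free mapping just described can be summarized in a short sketch. It reuses the Exposure enum from the sketch above and assumes the RunMotor utility described in Section 3.5; the voltage and motor assignments follow the text.

    // Forward declaration of the utility from Section 3.5.
    void RunMotor(int motorNumber, int level);

    // Drive the motors from the Exposure level alone.
    void hapticizeExposure(Exposure e) {
        for (int m = 1; m <= 8; ++m) RunMotor(m, 0);   // silence all motors
        switch (e) {
            case Exposure::Buried:                      // mild vibration
                RunMotor(3, 5);
                break;
            case Exposure::PartiallyBuried:
                RunMotor(3, 6); RunMotor(4, 6); RunMotor(6, 6);
                break;
            case Exposure::Exposed:                     // most freedom of movement
                RunMotor(1, 10); RunMotor(2, 10); RunMotor(3, 10);
                RunMotor(7, 6);  RunMotor(8, 6);
                break;
        }
    }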

3.3. VisPad with Earthquake Data

Figure 3. Mapping the screen coordinates to VisPad.

During the IEEE Visualization 2001 conference, we demonstrated the capabilities of VisPad with a direct analogy to earthquake data. We used multi-modal techniques to visualize the USGS earthquake data set [17]. Earthquake date, time, location, magnitude, and depth were visualized. The coordinates of the states of California and Nevada were used to draw a map. A GUI slider visualized the dates and times of the quakes. The depth of each earthquake was represented by a colored dot at the location of the quake, while the size of the dot indicated the magnitude on the Richter scale. The magnitude of the earthquake and the location were mapped to the motors in the haptic device. Location was indicated by mapping the screen coordinates directly back to VisPad (see Figure 4), while the magnitude of the earthquake was mapped to the motor intensity. Like the magnitude of a quake, the intensity was based on an exponential function.
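A sketch of the screen-to-motor mapping follows. The 2-column by 4-row grid layout is an assumption based on Figures 3 and 4; the exact cell-to-motor numbering follows Figure 3 in the actual system.

    void RunMotor(int motorNumber, int level);  // from Section 3.5

    // Map a screen coordinate (0 <= x < width, 0 <= y < height) onto one
    // of eight grid cells, numbered 1..8. The quake's location picks the
    // motor; its magnitude, scaled to a 0-10 volt level, picks the intensity.
    int motorForScreenPosition(int x, int y, int width, int height) {
        int col = (x * 2) / width;    // 0 = left half, 1 = right half
        int row = (y * 4) / height;   // four rows, top to bottom
        return row * 2 + col + 1;     // cell number, 1..8
    }

    void hapticizeQuake(int x, int y, int width, int height, int volts) {
        RunMotor(motorForScreenPosition(x, y, width, height), volts);
    }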



Figure 4. Demonstration of VisPad at InfoVis 2001.

Our earthquake data set had magnitude values between 3.2 and 6.2, with the notable exception of the October 17, 1989 earthquake, which measured 7.1. We wanted an exponential function that would map these values to the voltages we had available, the highest being 10 volts. We experimented with a few methods of mapping the voltages and found that the function v = e^x / 50 gave a feeling of earthquake magnitude that could be mapped to motor vibration. An earthquake of magnitude 4.0 causes the motors to vibrate with 1.09 volts, while an earthquake of magnitude 4.1 causes the motors to vibrate with 1.21 volts, a difference of 0.12 volts. This compares with a 0.85-volt difference between a magnitude 6.0 and a magnitude 6.1 earthquake. We capped the magnitude 7.1 quake at the full 10 volts.
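A minimal sketch of this magnitude-to-voltage mapping; the 1/50 scale constant is inferred from the voltages quoted above.

    #include <algorithm>
    #include <cmath>

    // v = e^m / 50, capped at the 10-volt maximum. Magnitude 4.0 yields
    // about 1.09 V, 4.1 about 1.21 V, and 7.1 is clipped to 10 V.
    double magnitudeToVolts(double magnitude) {
        return std::min(10.0, std::exp(magnitude) / 50.0);
    }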

3.4. Connecting VisPad to other programs

The VisPad prototype has also been connected to other simple programs for demonstration purposes. The simplest of these indicated the position of the mouse pointer on the screen. VisPad has been used to haptically visualize the bank angle of the wings of an airplane. In this demonstration, we graphically visualized a "turn and bank coordinator". As the angle of the wings increased, the intensity of the vibrotactile feedback increased. For example, a steep 60-degree left bank was visualized as 10 volts applied to motor 8, while a moderate 20-degree right bank was visualized with a gentle vibration of motor 7. VisPad has also been used to represent the speed of moving objects: the faster the object moved, the more intense the haptic vibration.

3.5. Generating haptic effects with VisPad

Several utility functions permit efficient communication with the RDAG12-8(H), allowing VisPad to be easily attached to any software and supporting several interesting effects for each of the motors. At the layer closest to the hardware, five functions provide access to the VisPad: RunMotor, RunMotorAtLevel, Wave, WaveRow, and Pulse. The function RunMotor( int motorNumber, int level ) allows the user to turn on any motor at any level from 0 to 10 volts. The motor will continue to run at the assigned level until a new function call changes the voltage. The RDAG12-8(H) is able to step up the voltage in 65535 increments (0x0000 to 0xFFFF), which allows a lot of flexibility in values. Realistically, however, with the VisPad prototype it is not possible to truly feel the differences between all 65536 levels. The threshold intensity difference is the just noticeable difference, or JND, of psychophysics, and is proportional to the intensity level. As the intensity rises, the JND rises too, as a logarithmic function (Weber's law). Over a wide range, the JND increase is better modeled as a power law (Stevens' power law). Over the range of our vibratory stimuli, however, the applicable segment of the psychophysical function is nearly linear. As a result, we have written linear functions that allow access to the motors in a more intuitive fashion. The function RunMotorAtLevel( int motorNumber, int level, int numberLevels ) scales the 65535 increments to numberLevels and sets the voltage to level, which should fall between 0 and numberLevels. For example, if the user chose to allow 10 levels, level 1 would trigger the motor vibration with 1 volt of power, level 2 would activate the motor with 2 volts, and so on up to level 10, which would vibrate the motor with 10 volts. As in RunMotor, the motor will remain running until a call is made to set the voltage level to 0. Because of the rather narrow range of the vibrotactile sense, it would not be useful to increase the number of levels much further. The function Pulse( int motor, int level, int numberLevels ) behaves like RunMotorAtLevel, except that it turns the motor off after a small delay. Wave( int motor, int numberSteps, int level, int numberLevels ) allows the user to produce a triangular "hat" wave pattern on a specific motor. This function causes the motor to step up to the desired level through a specified number of intermediate steps and then step down through the same number of steps until the motor is turned off. For example, Wave(6, 1, 10, 10) will cause motor 6 to first vibrate at 5 volts, then 10 volts, then 5 volts, and then turn off. Wave(7, 2, 3, 5) will cause motor 7 to vibrate with 2 volts, followed by 4 volts, arriving at 6 volts, then step back through 4 and 2 volts before turning off.
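The following sketch shows how this utility layer might look. The low-level setDacCode(), which writes a 16-bit code (0x0000 to 0xFFFF) to one of the RDAG12-8(H)'s DACs over the serial port, is assumed rather than shown, as is the 250 ms step delay; the linear scaling and the triangular stepping reproduce the behavior described above.

    #include <chrono>
    #include <thread>

    void setDacCode(int motorNumber, unsigned long code);  // hardware layer (assumed)

    static void stepDelay() {   // pause between wave steps (duration assumed)
        std::this_thread::sleep_for(std::chrono::milliseconds(250));
    }

    // Run a motor at an integer voltage from 0 to 10 (full scale = 0xFFFF).
    void RunMotor(int motorNumber, int level) {
        setDacCode(motorNumber, level * 0xFFFFul / 10);
    }

    // Scale a coarser range onto the full DAC span, e.g.
    // RunMotorAtLevel(3, 2, 5) drives motor 3 at 4 volts.
    void RunMotorAtLevel(int motorNumber, int level, int numberLevels) {
        setDacCode(motorNumber, level * 0xFFFFul / numberLevels);
    }

    // Like RunMotorAtLevel, but turn the motor off after a small delay.
    void Pulse(int motor, int level, int numberLevels) {
        RunMotorAtLevel(motor, level, numberLevels);
        stepDelay();
        RunMotorAtLevel(motor, 0, numberLevels);
    }

    // Triangular "hat" pattern: ramp up to `level` through numberSteps
    // intermediate levels, then back down to off. Wave(6, 1, 10, 10)
    // yields 5 V, 10 V, 5 V, off, as in the example above.
    void Wave(int motor, int numberSteps, int level, int numberLevels) {
        for (int i = 1; i <= numberSteps + 1; ++i) {          // ascend
            RunMotorAtLevel(motor, level * i / (numberSteps + 1), numberLevels);
            stepDelay();
        }
        for (int i = numberSteps; i >= 0; --i) {              // descend
            RunMotorAtLevel(motor, level * i / (numberSteps + 1), numberLevels);
            stepDelay();
        }
    }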


Figure 5. Wave effect of WaveRow across the back.

The function WaveRow( int row, int level, int numberLevels, char startSide='L' ) causes a wave pattern to be drawn across the user's back or legs. The parameters level and numberLevels have the same meaning as in RunMotorAtLevel, Wave, and Pulse. WaveRow builds a wave pattern across all of the motors in a single row (see Figure 5). The default direction is to start on the left and move across to the right, but the user can stipulate that a wave moves from right to left by setting startSide equal to 'R'. It is possible to combine the functions to get interesting patterns such as apparent motion across multiple motors. For example, calling WaveRow(0, 5, 5, 'L'), WaveRow(1, 4, 5, 'R'), WaveRow(2, 3, 5, 'L'), and Pulse(3, 2, 5) gives the effect of a wave of motion snaking across the user's back and legs and disappearing in the middle of the back.
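A sketch of WaveRow and of the snaking pattern above; the assumption that row r holds two motors, 2r+1 on the left and 2r+2 on the right, is ours, and the real numbering follows Figure 1.

    void Wave(int motor, int numberSteps, int level, int numberLevels);
    void Pulse(int motor, int level, int numberLevels);

    // Sweep a wave across the two motors of one row, leading from the
    // side named by startSide.
    void WaveRow(int row, int level, int numberLevels, char startSide = 'L') {
        int left = row * 2 + 1, right = row * 2 + 2;
        int first  = (startSide == 'L') ? left : right;
        int second = (startSide == 'L') ? right : left;
        Wave(first, 1, level, numberLevels);   // leading motor rises and falls,
        Wave(second, 1, level, numberLevels);  // then the trailing motor follows
    }

    // The snaking pattern described in the text.
    void snake() {
        WaveRow(0, 5, 5, 'L');
        WaveRow(1, 4, 5, 'R');
        WaveRow(2, 3, 5, 'L');
        Pulse(3, 2, 5);
    }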

3.6. Other Haptic Pads and Chairs

The Haptic Interface Research Laboratory (HIRL) and the Human Design research group at the MIT Media Lab have been developing perceptual user interfaces (PUIs). PUIs enable an intelligent environment to sense the computer user and use machine perception and reasoning to react [27]. They have done extensive research with "smart" technologies, which use sensors that allow a computer to see, hear, and interpret a user's actions; the computer tries to anticipate a user's needs so that it can react intelligently [19]. Their labs have implemented smart rooms, smart clothes, and the Smart Chair [16]. One of the most interesting PUI devices is the Sensing Chair [21, 22]. This device senses the body posture of the occupant. The main purpose of their research is to transform ordinary chairs into perceptual and multimodal human-computer interfaces; the ultimate goal is a haptic device which senses changes in body position and orientation to drive real-time applications [24]. Though the Sensing Chair is primarily an input device, it has recently incorporated feedback capabilities. This feedback focuses on the transmission of directional and simple geometric information on a user's back. This direct mapping is intuitive and can be presented in the local coordinates of the user's body [23]. Recently an office chair with a 3x3 array of vibrotactile units has been developed at George Washington University. That work is similar to the Sensing Chair of Tan et al. Initial experiments indicate that users were able to distinguish between different vibrational intensities. The users were also able to determine the location of the vibration, though it is interesting to note that they had difficulty determining which of the three vibrotactile units in the top row was activated [15, 14]. VisPad, in contrast, was developed exclusively as a perceptual output device. Our goal is to convey multi-dimensional information to the user. When VisPad is used in conjunction with graphical visualization and/or sonification, it is possible to interpret more information at the same time.

4. VisPad Usability Test

We tested VisPad with a total of 49 volunteers: 48 students and 1 UCSC professor. The 48 student volunteers were all given extra credit in their class for participating in the study. Volunteers had no previous experience using VisPad or ProtAlign. Both the control and experimental groups were asked to evaluate an amino acid position as viewed in ProtAlign. There were 21 separate evaluation tests, each with several tasks, given in random order.


4.1. Experimental method


When each test started, ProtAlign visualized the protein backbone and glyph pairs. The protein backbone was colored under each amino acid position according to the Blosum62 score, and the individual glyphs in each amino acid pair were colored according to the Environment score. The user had to choose to visualize the Exposure score: when the user selected the metric from a pull-down menu, the pair of amino acids was colored and VisPad vibrated to indicate the Exposure score. Clicking on the amino acid pair also caused VisPad to indicate the Exposure score. The experimental group knew the meaning of the haptic vibration; the control group thought they were getting a massage.

4.2. Experimental results

If all of the tasks were correctly performed, we recorded the time it took to perform the analysis. For each task, using a two-sample t-test, we compared the average times of the experimental and control groups. The overall average time to perform all tasks in a test was 17.30 seconds for the experimental group versus 31.09 seconds for the control


group. In all of the 21 tests, the group with haptic information performed at least 1.5 times faster than the group without haptic information. The difference between the mean times of the experimental and control groups was statistically significant for each of the tests. We also compared the error rates of the two groups: the experimental group had a slightly lower error rate (11.03% versus 11.31% for the control group), but analysis showed that the difference was not statistically significant.

4.3. Haptics as a possible duplicate information channel

We were able to determine whether a user in the experimental group decided to combine haptic and graphical information to assess the exposure metric of an amino acid position in the protein alignment. It was possible to assess exposure using only haptic visualization. During training, the users were taught all of the possible methods of assessing a position in the alignment; however, they were not told to use a specific method. In fact, we told the experimental group that it was possible to get all of the information visually. We were interested in whether people would choose to use haptic feedback when given the option of determining the information visually. Most of the experimental group chose not to visualize the exposure information graphically, but for some of the experiments a few of them preferred bi-modal visualization: they chose to both feel and see the exposure information to assess a position. Presumably, haptics was used to reinforce the visual information. The majority of the experimental group may have chosen to visualize exposure using only haptics because exposure is based on the material properties of the amino acid position. The haptic channel was "most appropriate", as it has been proposed that vision is better suited for spatial tasks and that touch is better suited for tasks that relate to the material properties of objects [13]. The fact that most of the experimental group chose not to use some visual information that would have been available to them shows the appeal of haptic input for some kinds of information. Using the t-test, we compared the times of the control group to those of the experimental group users who preferred bi-modal visualization. Though the results look promising, on average only a few people per experiment chose to both see and feel the information. This haptic device may therefore be useful both as its own information channel and as a duplicate information channel; further experimentation would be required to verify this.

5. Conclusion

We have presented a prototype for a large-area passive haptic device using off-the-shelf commodity massage pads. The initial rewiring is a custom job, but the serial interface to the RDAG12-8(H) as well as the hardware-level libraries are application independent. Using this prototype, we modified an existing application, ProtAlign, in order to run usability studies. These experiments demonstrated that the use of our VisPad haptics unit during the visualization of complex data greatly decreased the time it took to understand all of the information, with no increase in error rate. Whether the user chose to treat the haptic information as its own information channel or as a duplicate information channel, VisPad improved their performance. Why should use of an additional modality with a very low bandwidth improve performance on a principally visual task? One answer comes from a comparison of the amounts of time necessary for switching attention between modalities. Psychophysical work shows that switching attention from one modality to another requires about 50 msec, while moving visual gaze requires 200 msec for saccade preparation and execution [3, 25, 26]. Though small, this latency difference would occur frequently, leading to a significant advantage for cross-modal information sources.

6. Future Work

Ideally, our VisPad haptics prototype could be turned into a more robust output device. Most of the components are either inexpensive or could easily be made so by mass production. Because of the limitations of the RDAG12-8(H), only 8 motors could be controlled. Also, the chair pad itself has motors designed for massage, which means the motors have very large, overlapping vibration patterns. It would be interesting to see VisPad evolve to use a larger array of smaller motors, and it would be nice to see them radio controlled rather than directly connected with wires.

7. Acknowledgements

We would like to thank Craig Abbott for his help in developing the VisPad prototype, particularly with the RDAG12-8(H), and for his help with the experimental design; his expertise in clinical psychology helped minimize user confusion during the usability study. Thanks to Marc Hansen for his initial efforts with ProtAlign. Initial funding for this work came from NSF NPACI ACI-9619020.


References

[1] ACCES. Data acquisition from ACCES I/O Products. http://www.acces-usa.com, 2003.
[2] J. U. Bowie, R. Lüthy, and D. Eisenberg. A method to identify protein sequences that fold into a known three-dimensional structure. Science, 253:164–170, 1991.
[3] B. Bridgeman and M. Mayer. Failure to integrate visual information from successive fixations. Bulletin of the Psychonomic Society, 21:285–286, 1983.
[4] G. C. Burdea. Haptics issues in virtual environments. In Proceedings of the International Conference on Computer Graphics, pages 295–302. IEEE Computer Society, 2000.
[5] G. W. Edwards. Performance and usability of force feedback and auditory substitutions in a virtual environment manipulation task. Master's thesis, Virginia Polytechnic Institute and State University, November 2000.
[6] J. Fritz. Haptic Representation of Scientific Data for Visually Impaired or Blind Persons. PhD thesis, University of Delaware, 1996.
[7] M. Hansen, E. Charp, S. Lodha, D. Weissgerber-Meads, and A. Pang. PROMUSE: A prototype system for multi-media visualization of protein structural alignments. In R. B. Altman, A. K. Dunker, L. Hunter, T. E. Klein, and K. Lauderdale, editors, Pacific Symposium on Biocomputing, pages 367–379, 1999.
[8] M. Hansen, D. Weissgerber-Meads, and A. Pang. Comparative visualization of protein structure–sequence alignments. In IEEE Information Visualization, pages 106–110, October 1998.
[9] V. Hayward and J. M. Cruz-Hernandez. Tactile display device using distributed lateral skin stretch. In Proceedings of the Haptic Interfaces for Virtual Environment and Teleoperator Systems Symposium, ASME IMECE2000, volume DSC-69-2, pages 1309–1314. ASME, 2000.
[10] S. Henikoff and J. G. Henikoff. Amino acid substitution matrices from protein blocks. Proceedings of the National Academy of Sciences, 89:10915–10919, Nov. 1992.
[11] A. E. Kirkpatrick and S. A. Douglas. Application-based evaluation of haptic interfaces. In Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages 32–39. IEEE Computer Society, 2002.
[12] R. L. Klatzky and S. J. Lederman. How well can we encode spatial layout from sparse kinesthetic contact? In Proceedings of the 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS'03), pages 179–186. IEEE Computer Society, 2003.
[13] S. J. Lederman, A. Martin, C. Tong, and R. L. Klatzky. Relative performance using haptic and/or touch-produced auditory cues in a remote absolute texture identification task. In Proceedings of the 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS'03), pages 151–158. IEEE Computer Society, 2003.
[14] R. W. Lindeman and J. R. Cutler. Controller design for a wearable, near-field haptic display. In Proceedings of the 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS'03), pages 397–403. IEEE Computer Society, 2003.
[15] R. W. Lindeman and Y. Yanagida. Empirical studies for effective near-field haptics in virtual environments. In Proceedings of IEEE Virtual Reality 2003, pages 287–288. IEEE Computer Society, 2003.
[16] I. Lu. Augmented smart chair. http://www-white.media.mit.edu/vismod/demos/smartchair, 1996.
[17] NCEDC. Northern California Earthquake Data Center. http://quake.geo.berkeley.edu, 2001.
[18] T. Ogi and M. Hirose. Multisensory data sensualization based on human perception. In Proceedings of the IEEE Virtual Reality Annual International Symposium, pages 66–70, 266, Santa Clara, CA, 1996. IEEE Computer Society Press.
[19] A. Pentland. Perceptual user interfaces: perceptual intelligence. Communications of the ACM, 43(3):35–44, 2000.
[20] R. Sekuler and R. Blake. Perception. McGraw-Hill, 3rd edition, 1994. ISBN 0-07-056085.
[21] H. Z. Tan. Perceptual user interfaces: haptic interfaces. Communications of the ACM, 43(3):40–41, 2000.
[22] H. Z. Tan, I. Lu, and A. Pentland. The chair as a novel haptic user interface. In Proceedings of the Workshop on Perceptual User Interfaces, pages 56–57, October 1997.
[23] H. Z. Tan and A. Pentland. Tactual displays for wearable computing. Personal Technologies, 1:225–230, 2001.
[24] H. Z. Tan, L. A. Slivovsky, and A. Pentland. A sensing chair using pressure distribution sensors. IEEE/ASME Transactions on Mechatronics, 6(3):261–268, September 2001.
[25] M. Turatto, F. Benso, G. Galfano, and C. Umiltà. Nonspatial attentional shifts between audition and vision. Journal of Experimental Psychology: Human Perception and Performance, 28:628–639, 2002.
[26] M. Turatto, G. Galfano, B. Bridgeman, and C. Umiltà. Space-independent modality-driven attentional capture in auditory, tactile and visual systems. Visual Cognition, in press, 2003.
[27] M. Turk and G. Robertson. Perceptual user interfaces (introduction). Communications of the ACM, 43(3):32–34, 2000.
[28] H. A. van Veen and J. B. van Erp. Tactile Information Presentation in the Cockpit, pages 174–181. Springer, 2000. LNCS 2058.
[29] D. Weissgerber-Meads, M. Hansen, and A. Pang. ProtAlign: A 3-dimensional protein alignment assessment tool. In R. B. Altman, A. K. Dunker, L. Hunter, T. E. Klein, and K. Lauderdale, editors, Pacific Symposium on Biocomputing, pages 354–367, 1999.
