Semi-transparent Video Interfaces to Assist Deaf Persons in Meetings
Dorian Miller, Karl Gyllstrom, David Stotts, James Culp
Department of Computer Science, University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-3175
1 (919) 962-1700

{dorianm, karl, stotts, culp}@cs.unc.edu

ABSTRACT


Meetings are a vital part of participation in social activities. For a deaf person who does not understand spoken language, following a discourse at meetings can become confusing if there are too many simultaneous sources of information. When the person focuses on one source of information, he misses information from another source; for example, while looking at a presenter’s slides, the person misses information from the signing interpreter. Using semi-transparent video technology we have developed two applications to assist the deaf in local group meetings and remote personal meetings. The features of the applications were iteratively designed, as we incorporated feedback from the Deaf community. This research is an extension of our Facetop research project, which applies semi-transparent video for people without sensory disabilities.

Our research develops two applications of semi-transparent video to assist deaf meeting participants in following the flow of a presentation or meeting. By semi-transparent video, we mean video that is semi-transparently overlaid on a workspace, such as a computer desktop or a shared computer application. The user of the semi-transparent video can clearly distinguish the video from the shared workspace. We focus on two meeting scenarios, which we refer to as the group meeting and the personal meeting. In the group meeting scenario, one or more deaf persons are attending a meeting or presentation. The majority of the participants could be hearing people conversing verbally or deaf people using sign language.

Categories and Subject Descriptors H.5.1 [Multimedia Information Systems]: Video; K.4.2 [Computers and Society]: Social Issues - Assistive technologies for persons with disabilities.

General Terms Design.

Keywords Assistive Technology for Deaf, Meeting Accommodations, Videoconferencing, Semi-transparent Video.

1. INTRODUCTION Participating in one-on-one and group meetings is a vital activity for someone to be involved in their social surroundings. The meetings may be leisure activities, classroom lectures, public hall gatherings, or work-related team meetings. For the deaf who cannot understand spoken language, following a discourse at a meeting might become confusing if there are too many simultaneous sources of information.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ACMSE 2007, March 23-24, 2007, Winston-Salem, N. Carolina, USA. ©Copyright 2007 ACM 978-1-59593-629-5/07/0003...$5.00

In the personal meeting scenario, two remotely located people use videoconferencing while collaborating on a shared computer application. The two people could both be deaf, or a deaf person could be working with a hearing person. Our semi-transparent video interfaces are used differently in each scenario; however, both address the challenge a deaf person faces in watching multiple sources of information simultaneously. If the person watches one source of information, he misses information from the other sources. The research reported here grew out of the original Facetop [1] project and our study of using it for distributed pair programming between fully hearing users. Our personal meeting application for the deaf is an outgrowth of the Facetop project. Our group meeting application is a novel application suggested after deaf students tried Facetop.

1.1 Group meeting scenario In a group meeting it is especially difficult for a deaf person to take notes and follow a discussion or presentation. In such a meeting, the sources of information might include a presenter or speaker, presented material such as slides, and a signing interpreter if the conversation is predominantly verbal. For the deaf it is still manageable to follow if the sources of information are in the person's field of view. The person can focus on one source while using peripheral vision to be aware of other events. For example, a person might mainly watch the interpreter but take a moment to view a new slide when the presenter advances his slides. Glancing between the sources of information, the person comfortably follows the discussion while missing minimal information. Taking notes is difficult for the deaf because while looking at the paper or computer where the notes are taken, the person is blind to the other sources of information. The person is cut off from the conversation, that is, from the interpreter or other meeting member who is signing. The person also misses events, such as when the presenter changes slides, begins or finishes speaking, or requests something of the audience, for instance, a show of hands.

Our feedback from the Deaf community indicates that taking notes at group meetings is particularly challenging. It is difficult for students because they have to take continuous notes throughout a meeting. Students need complete notes from a lecture to review it, complete their assignments, and study for exams. This applies whether the presenter and other students are hearing people conversing verbally or deaf people conversing in sign language.

1.2 Personal meeting scenario In the personal meeting scenario two remote people collaborate to complete a task using videoconferencing applications, such as NetMeeting or WebEx. Typically, a collaborator sees on his computer a video of the other person and has access to a shared application. Two deaf people use the video to communicate through sign language. Two hearing people use the video to communicate with facial gestures in addition to communicating verbally. Almost any single-user application can serve as the shared application with the videoconferencing software, such as a word processor, spreadsheet editor, slideshow editor, or a simple game. Collaborators use their keyboard and mouse to control the shared application, although depending on the videoconferencing application, different techniques determine how the collaborators take turns controlling it.

An artifact of a typical videoconferencing application is that the collaborators interact with the video and the shared application separately. This is no problem for two hearing people: they can converse while focusing on the video, and continue to converse while focusing on and controlling the shared application. It is more difficult for two deaf people. Conversing is easy using the video; however, communication is limited when the collaborators focus on the shared application and control it. One person has to observe the other person's actions, such as typing or mouse movement, to determine that person's intentions.

Using semi-transparent video, we developed an application enabling the collaborators to integrate their actions in the video with the shared application. In short, when one collaborator can see the other, he can point and gesture at the shared application much as if the collaborators were face-to-face. As in the group meeting application, a user sees the computer applications through the semi-transparent live video.
In this case, however, one person sees a video of himself and his collaborator. In section 4 we describe the interface in more detail. Although similar interfaces with semi-transparent video have already been researched, our contribution is an innovative implementation and the application of the interface to assist deaf persons in communicating. The semi-transparent video helps two deaf people communicate more smoothly. As before, they can use the video to communicate with sign language. Furthermore, a person can easily interrupt signing and make a gesture at the shared application, such as pointing at the item he wants to discuss in more detail. Of course, the collaborators still have to use a mouse to control the shared application. The semi-transparent video will also enrich the possibilities for a deaf person to collaborate with a hearing person who does not know sign language. Through the video the collaborators can communicate with common gestures, such as head nods, pointing with a finger, and other hand-waving gestures. For more detailed and abstract conversations the collaborators can use a chat session.

One aspect of semi-transparent video that we will revisit throughout the discussion is that visual details important to sign language are washed out in the blended image. This issue can be addressed in the video system implementation and in the techniques by which the video is applied.

2. BACKGROUND This background has two parts. The first covers alternative meeting accommodations that assist the deaf in participating in meetings with hearing persons. The second describes research related to the semi-transparent video technology we use in our applications.

2.1 Meeting accommodations The general concept of meeting accommodations for the deaf is to translate from spoken language to sign language and vice versa. The most basic and widely used form is a signing interpreter. For the deaf, signing is the most natural way to communicate, and therefore the fastest and easiest. Sometimes, however, it can be difficult to get an interpreter when needed. The interpreter needs to be scheduled ahead of time, and interpreters might be delayed by complications in traveling to the meeting location. For longer meetings two interpreters are required, so that one interpreter can take a break from the demanding translation while the other fills in. Videoconferencing technology can make interpreters more available, because the interpreter can work from a remote location and avoid travel complications. Phone companies provide Video Relay Services for a "phone" conversation between a hearing and a deaf person. Both people connect to the interpreter, who has an audio connection to the hearing person and a videoconferencing connection to the deaf person. Through the audio and video channels, the interpreter can make the necessary translations.

An alternative to having an interpreter sign to the deaf is closed captioning, in which an interpreter types a transcript of the spoken language. An advantage of the transcript over watching an interpreter is having a record of the meeting. The transcript can be used to review the meeting afterwards, and during the meeting parts of it can be reread for clarification. Examples of closed captioning technologies intended for meeting accommodations are Sprint's® "Captioning Relay Service" and IBM's® "viaScribe".

Teletype (TTY) and chat messaging are other techniques for people to communicate when using sign language is not possible [2]. The basic concept is that each participant in a discussion types messages to be communicated.
In TTY, participants use teletype machines connected through the phone network, whereas in chat messaging participants use computer applications connected through the Internet. When an interpreter is not available these techniques are especially useful for a deaf person collaborating with a remote hearing person, because the two people share knowledge of written language.

2.2 Semi-transparent video systems Our applications for assisting the deaf in group and personal meetings are based on semi-transparent video technology. The semi-transparent video as used in our personal meeting application is an innovation from Computer Supported Cooperative Work (CSCW) research into interfaces for remote computer-mediated collaboration, where two or more remote people work together through networked computers. The objective of this research is to integrate the interpersonal space and the shared workspace [3]. In other words, it allows remote collaborators to work more as if they were face-to-face, communicating with gestures such as pointing a finger at an item a person wants to indicate. In an interface with semi-transparent video, one person can see another person's gesture (such as pointing) and recognize the corresponding area or item of the shared workspace. Although the video image and the workspace image are blended together, evaluations of the interfaces conclude that users can easily understand the interface and use it effectively. In contrast, in a typical videoconferencing system, as mentioned, the interaction with the video and the shared workspace are separate.
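The blending of video and workspace described above is, at its core, per-pixel alpha compositing. The following sketch (our own illustration, not the system's actual code; the function names and the 40% opacity default are assumptions) shows how a video pixel and a workspace pixel combine so that both remain visible:

```python
# Hypothetical sketch of the per-pixel alpha blending that produces a
# semi-transparent video overlay. Names and the alpha value are our own
# illustration, not the implementation described in this paper.

def blend_pixel(video_rgb, workspace_rgb, alpha):
    """Blend one video pixel over one workspace pixel.

    alpha = 1.0 shows only the video; alpha = 0.0 shows only the
    workspace; intermediate values let both remain visible.
    """
    return tuple(
        round(alpha * v + (1.0 - alpha) * w)
        for v, w in zip(video_rgb, workspace_rgb)
    )

def blend_frame(video, workspace, alpha=0.4):
    """Blend a full frame (a 2-D grid of RGB tuples) over the workspace."""
    return [
        [blend_pixel(v, w, alpha) for v, w in zip(vrow, wrow)]
        for vrow, wrow in zip(video, workspace)
    ]
```

At an assumed alpha of 0.4, a white video pixel over a black desktop pixel blends to a mid-gray, so workspace content under the window stays legible while the video remains visible.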

Several research prototypes of semi-transparent video interfaces have been developed [3-10]. The interfaces take different approaches to the kind of shared workspace and the perspective of the video used. The workspace varies from a shared computer application to physical objects at one person's location. In some interfaces, the video includes only the user's hands. In other interfaces, the video includes more of the user's body, such as hands, head and torso, or the whole body. The blended image of the video and workspace is displayed on computer monitors, projected, or shown in a head-mounted display. Of these research prototypes, ClearBoard [6] is the most closely related to our application for personal meetings.

Our application for group meetings is an innovation on the semi-transparent video interfaces used in CSCW research. Our application, however, uses the semi-transparent video in a fundamentally different way. The semi-transparent video is not used for gesturing. Instead, a deaf person uses the video to observe events he cannot see while focusing on the computer screen. The video is semi-transparent to simplify screen real estate issues, as will be explained in the next section. Our research with semi-transparent video contributes to the related CSCW research referenced above: we apply the technology to new applications involving the deaf that were previously not considered.

3. GROUP MEETING INTERFACE FEATURES In this section, we describe the features of the group meeting application we have developed. As described, the purpose is to enable a deaf person to observe the meeting room while focusing on a laptop to take notes. The features include controlling the video window and recording the video. These features were designed with feedback from the Deaf community. We also describe a requested feature we have not implemented yet.

Figure 1 is a screenshot demonstrating how a deaf notetaker uses the group meeting application. In this case, the notetaker is attending a lecture with a speaking presenter. Our group meeting application displays a live video image of the signing interpreter translating the spoken language of the presenter. The video image is captured by an inexpensive camera (i.e., a webcam) connected to the laptop. A notetaker would mainly observe the interpreter in the video image; however, for some activities it might be useful for the notetaker to point the camera at the lecturer or the lecture audience, such as during small group activities. The lighting in the room must be sufficient for the camera to capture a reasonable image.

Figure 1: Demonstration of the laptop screen that a deaf notetaker uses.

The notetaker can use any combination of applications to take notes, watch slides accompanying the presentation, etc. The application windows can be arranged side-by-side so that all applications are viewed at the same time. If the user has a regular laptop, notes can be typed. If the laptop is a Tablet PC®, then in addition to typing, the user has the option of writing electronic notes on the screen with a designated pen. In Figure 1 the notetaker is using the Tablet Sticky Notes application to take notes. Observing the laptop screen, the deaf person can take notes while still being able to observe the interpreter. Similarly, having the slides on the laptop spares the notetaker the time and overhead of switching attention between the laptop screen and the projected slide. Of course, the notetaker does not have to observe the laptop screen all the time, and instead can watch the interpreter and meeting room to simply follow the discussion of the speaker or audience.

In addition to the video window, our group meeting application has a control panel for the user to set the parameters of the video image, such as its size, position, and transparency level. The interaction with the video image differs depending on whether the window is opaque. To illustrate the advantages of the semi-transparent window, we first describe the situation with an opaque video window. With an opaque video window there is a trade-off between the size of the window and managing screen real estate. The larger the video window, the easier it is to recognize details, such as details in an interpreter's facial and hand gestures. Enlarging the video image becomes more necessary as the interpreter is further away from the camera; increased distance reduces the size and detail of the interpreter's image in the video. However, an enlarged video image might hide parts of other applications the notetaker is using. Alternatively, the notetaker could shrink the other applications, which would require more window interaction to see the same information that was viewable before shrinking.

Making the video window semi-transparent simplifies the screen real estate issue. With the semi-transparent window, a user easily distinguishes the video from the computer application in the same area of the window (as in Figure 1). A user controls the computer application covered by the video window as if the video were not there. This is possible because the semi-transparent video window ignores mouse and keyboard events and passes them through to other applications; this functionality is supported by the operating system. It is not necessary to move the video window, because the user can view and control all applications, and the video window can be as large as desired.
Although a user can distinguish the video from the overlapping application, the transparency of the window diffuses the details of the video. The contrast of the video image varies as the background (the part of the computer screen covered by the video) ranges from light to dark. Members of the Deaf community have commented that understanding a person signing in the video is more difficult because details in hand gestures and facial expressions are harder to recognize. One solution is to move the video window to a more appropriate position using the control panel. In future work, we will experiment to determine a transparency level at which the details in the video and the covered parts of applications remain recognizable. Regardless of the details, a deaf person can still use the video to observe the meeting room for events that do not require high levels of detail; for example, identifying when a speaker or interpreter begins or finishes, or when an audience member asks a question.

Our group meeting application has a feature commonly requested by members of the Deaf community: recording the video of the presentation for later review. In a classroom setting, recorded video of an interpreter signing will be helpful for reviewing lectures, completing assignments, and studying for exams. As we discuss in the implementation, recording the video is computationally intensive and requires large disk space, depending on the amount of compression. A side effect for the user might be that the computer's responsiveness suffers.

An enhancement to our prototype would be better support for a predominantly deaf audience. To use the current prototype, a deaf user must place the camera connected to the laptop close to the interpreter or the part of the room that should be observed. When several people are using our application in one meeting, not everybody will be able to have a satisfactory video image. A possible solution is to develop a system in which one camera captures an ideal video image and streams it to the laptops of all deaf audience members. The system that captures and streams the video could be a service provided by the meeting room, such as a lecture hall.

4. PERSONAL MEETING INTERFACE FEATURES In this section, we describe the features of the personal meeting interface. The personal meeting we address is one in which two remote people communicate through computers and collaborate on a shared computer application. The personal meeting interface lets the collaborators use the video to gesture visually, such as by pointing a finger at the shared application, much as if the collaborators were face-to-face. As explained, this is beneficial to the deaf because it enables them to communicate visually while looking at the shared application.

To use our personal meeting application, each person points the camera connected to his computer at himself, for example by placing the camera on top of the computer monitor. Figure 2 is a screenshot of what each person sees: a video of himself and his collaborator semi-transparently overlaid on the computer desktop. The computer desktop shows a shared application, in this case a checkers game. The collaborators are playing checkers against the computer and together are plotting a game strategy. The collaborators control the checkers game with a mouse cursor as if the video were not present.


Figure 2: Personal meeting application. The collaborators are playing checkers against the computer.

The main reason a user sees a video of himself on the computer screen is to be able to make gestures relative to the shared application, such as pointing an index finger at an item in the shared application. The person making the gesture has to verify that his intended gesture is properly reproduced in the video image that the observer of the gesture sees. For the person making the pointing gesture, it is as natural as pointing an index finger at part of a physical mirror: the mirrored image of the index finger appears to point at the same location. Likewise, when pointing at the computer monitor, the video image on the screen behaves as if the screen were a mirror. As in a mirror, it appears as if the person is on the other side of the screen. Unlike with a mirror, which a person can touch while seeing the reflection, the person gesturing at the screen does not touch the screen; the gesture must be made some distance from the screen so that it is within the camera's field of view. The observer of the pointing gesture sees the pointing person's index finger overlap the item pointed at. In Figure 2, one person is pointing at a checkers piece.
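The mirror behavior described above amounts to flipping every video row horizontally before display. A small sketch (our own illustration, not the authors' code) of the flip and the column mapping it implies:

```python
# Illustrative sketch, not the paper's implementation: mirroring a video
# frame so a gesture made toward the screen appears where the gesturer
# expects it, as in a physical mirror.

def mirror_frame(frame):
    """Flip a frame (a list of pixel rows) left-to-right."""
    return [list(reversed(row)) for row in frame]

def mirrored_column(col, width):
    """Where column `col` of the camera image lands after mirroring."""
    return width - 1 - col
```

For example, a fingertip captured at column 0 of a 640-pixel-wide camera image is drawn at column 639 on screen, which is where the gesturer sees it "in the mirror"; applying the mapping twice returns the original column.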

There are three additional advantages to a person seeing himself in the video. The first is that the registration of the camera, user, and computer is arbitrary: regardless of the positioning, the user can adapt his gestures so that the gesture appears correctly in the video image. The second is that participants in related videoconferencing systems have expressed a preference for seeing their own video so that they know how others see them [11]. The third advantage is most applicable to deaf users. Because the images of the video and the shared application overlap, the user can observe both at the same time, unlike in typical videoconferencing systems, where the user has to look at different parts of the screen. Also, as the video image is larger, it is easier to see smaller details, such as when one person waves to get the other person's attention. Although this feature could be useful to hearing users, they do not rely on it as much because the information conveyed in the video can be conveyed verbally.

The drawback of the semi-transparent video is the same as in our group meeting application. Visual details in the semi-transparent video can be washed out, making it difficult to recognize sign language. We will try to mitigate this issue in future work. The interface as is, however, could still be useful for collaboration between a deaf person and a hearing person. These collaborators would use coarser gestures, such as pointing, "thumbs up", or hand waving, which do not require as much detail to be understood.

There are a few requirements of the shared application and the collaborators' monitors. The application has to have the same relative position and size on both monitors, so that when one person makes a pointing gesture, the element pointed at is the same on both persons' monitors.

5. IMPLEMENTATION We have prototyped our applications on Mac OS X® and Windows® (2000/XP/Tablet PC) computers. The basic hardware setup is a camera connected to the user's computer. For cameras, we have been using the relatively inexpensive (approx. $100) Sony® iBot and Logitech® QuickCam, with FireWire and USB connections respectively. The cameras capture a 640x480 image at a frame rate of 20-30 fps, which is sufficient for recognizing sign language. The implementation has several parts. The main part is creating the semi-transparent video window. Other aspects include recording video and streaming video.

5.1 Semi-transparent video window The main component of the implementation is creating the semi-transparent video window, which is supported by Mac OS X and Windows (2000/XP/Tablet PC). The video window must have certain properties. It must be semi-transparent and always be the topmost window, so that the computer applications and the video are always visible. The window must pass mouse and keyboard events through it, to enable users to continue to control the computer applications underneath the video. Finally, the video in the window must be mirrored to enable the desired effect of the user's gesturing at the shared application.

These multimedia requirements are supported with varying degrees of performance by the Mac and Windows operating systems. Mac OS X provides the necessary functionality to create the semi-transparent video window with excellent performance. Windows supports semi-transparent windows; however, performance is sluggish when displaying a large video window. We cannot use the preferred method (video overlay) for displaying video, because the overlay cannot be made semi-transparent. The sluggish performance of semi-transparent video windows on Windows affects our prototypes differently. Our group meeting application has sufficient performance when the video window is less than a quarter of the screen size. Although the personal meeting application is still being developed, its performance is too sluggish for reasonable use; we are investigating alternative techniques.
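On Windows, the three window properties listed above (semi-transparency, click-through event handling, topmost ordering) correspond to extended window styles in the Win32 API. The sketch below is our own hedged illustration, not the authors' code: the style constants are documented Win32 values, but the surrounding function, the window handle `hwnd`, and the 40% alpha are assumptions for the example.

```python
# Sketch (not the paper's implementation) of requesting a click-through,
# always-on-top, semi-transparent window from the Win32 API.
import sys

WS_EX_LAYERED     = 0x00080000  # window supports per-window alpha
WS_EX_TRANSPARENT = 0x00000020  # mouse/keyboard events pass through
WS_EX_TOPMOST     = 0x00000008  # window stays above other windows
GWL_EXSTYLE       = -20         # index of the extended-style value
LWA_ALPHA         = 0x00000002  # interpret the alpha argument below

def overlay_ex_style(current_style=0):
    """Combine the extended styles a semi-transparent overlay needs."""
    return current_style | WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST

def make_overlay(hwnd, alpha=102):  # alpha 0-255; 102 is roughly 40% opaque
    """Apply the overlay styles to an existing window handle (Win32 only)."""
    if sys.platform != "win32":
        raise OSError("layered windows are a Win32 feature")
    import ctypes
    user32 = ctypes.windll.user32
    style = user32.GetWindowLongW(hwnd, GWL_EXSTYLE)
    user32.SetWindowLongW(hwnd, GWL_EXSTYLE, overlay_ex_style(style))
    user32.SetLayeredWindowAttributes(hwnd, 0, alpha, LWA_ALPHA)
```

With `WS_EX_TRANSPARENT` set, the operating system delivers mouse and keyboard input to whatever window lies beneath the overlay, which is the pass-through behavior the text describes.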

5.2 Recording and streaming video Besides displaying the video, the system records or streams it to support features of our applications. In our group meeting application, the video is recorded so that the meeting can be reviewed afterwards. In our personal meeting application, the video is streamed over the network between the remote collaborators' computers. For now, the recording feature is implemented on Windows, so that users can take advantage of writing notes naturally on a Tablet PC screen with the designated pen. We use Windows' DirectShow multimedia technology to process the video from the camera. We are implementing streaming video for the Mac and Windows personal meeting applications; the Mac implementation is the more complete. It uses the standard videoconferencing protocol H.263 for streaming video, the same protocol used in the videoconferencing applications that deaf people use to communicate through sign language. In our experience the video quality is high: a person's details are clearly recognizable, a person's motion is smooth, and the delay in transmitting the video is minimal over a LAN connection.

6. EVALUATION Our semi-transparent video applications were iteratively designed with feedback from the Deaf community. Members of the Deaf community helped us identify their needs in meetings, and they suggested features to expand our applications. We received feedback from several members of local Deaf communities. We worked initially with two deaf students and the staff at Disability Services of the University of North Carolina at Chapel Hill (UNC). We also shared the technology at a training session for deaf technologists in Raleigh, NC, sponsored by Telecommunications Access North Carolina and the NC Division of Services for the Deaf and the Hard of Hearing. We have also demonstrated our applications with deaf students and hearing signing-interpreter students at UNC Greensboro.

We have gathered feedback on our group meeting application from four students and four signing interpreters. A typical situation is that one interpreter accompanies one deaf student in an otherwise hearing lecture. The students' difficulty taking notes during a lecture depends on the presented material. Taking notes is less of an issue if the lecturer provides notes (i.e., a copy of presentation slides) or a hearing student takes notes for the student (an accommodation arranged through a university's disabilities office). An example of a difficult situation for taking notes is a mathematics class, where taking notes helps the student understand the material presented in class. The difficulty is to take notes without missing information from the interpreter.

Feedback from the interpreters raises an issue that we have to examine in formal evaluations. Interpreters and students give each other feedback to set the pace of the interpretation; for example, eye contact indicates a starting point. If the student looks away briefly, the interpreter can hold a short segment and resume when appropriate. In other situations, a student might like to have something repeated. Interpreters felt that interpreting might be more difficult if a student's facial expressions are hidden while the student watches the interpreter in the video on the computer. In our future evaluations, we can observe the impact of this issue and the general impact of using the group meeting application on the pace and dynamics of the interpretation.

Members of the Deaf community also suggested how our group meeting application would be useful for a meeting of deaf participants. A deaf meeting scribe has an issue similar to that of a student watching the interpreter. The scribe has to watch all the people conversing in sign language in order to create notes detailed enough for people who did not attend the meeting to follow the proceedings. Our group meeting application can help the scribe watch the conversations. The group meeting application would also help with copy signing in a meeting of deaf participants. In copy signing, one person repeats someone else's signs for people who cannot see the original signer because of their position in the room (e.g., sitting on the same side of the table as the original signer). Instead of observing a copy signer, using our group meeting application, these people can watch video of the original signer.
Six people have used our personal meeting application for longer sessions; four are deaf students and two are hearing students fluent in ASL. In a one-hour session, two users either played checkers against the computer or drew a brainstorming diagram. Insights from these sessions are preliminary results for a user study that will be completed in the future. The participants could comfortably and successfully complete the task. Signing through the semi-transparent video was clear. Some of the users' feedback was related to their experience using conventional videoconferencing systems. They had to finger-spell more slowly so that hand movement was not blurred in the video, a limitation of videoconferencing technology in general. Also, when the transmission was occasionally interrupted, they could resort to a chat session to clarify the unclear signing. Although it was possible to work with the semi-transparent video, it diffuses the collaborative application and strains the eyes. We are researching different techniques to reduce or avoid the diffusion.

7. CONCLUSION We have explained the design of two applications using semi-transparent video technology to assist deaf persons in meetings. The semi-transparent video addresses the problem of a person having to watch multiple information sources. With semi-transparent video, users can interact in new ways with their computer and other people. The implementation of applications with semi-transparent video requires efficient multimedia processing, which can still be improved in some systems. Our future work involves more formal evaluations of our applications and improving the performance of our prototypes.

8. ACKNOWLEDGMENTS We thank the members of the Deaf community who participated in the discussions that helped us understand the situation and design our applications. We appreciate Alex McLin and Gary Bishop's feedback on the initial design of the group meeting application. Our appreciation goes to Swaha Miller for help in proofreading this document and providing valuable feedback. This work was partially supported by grants from the U.S. Environmental Protection Agency (#R82-795901-3) and the National Library of Medicine, as well as an IBM PhD fellowship.

9. REFERENCES
[1] Stotts, D., Smith, J., and Gyllstrom, K. Support for Distributed Pair Programming in the Semi-transparent Video Facetop. XP/Agile Universe, Calgary, 2004, 92-104.
[2] Keating, E. and Mirus, G. American Sign Language in virtual space: Interactions between deaf users of computer-mediated video communication and the impact of technology on language practices. Language in Society, 2003, 693-714.
[3] Ishii, H. TeamWorkStation: towards a seamless shared workspace. ACM Conference on Computer-Supported Cooperative Work, 1990, 13-26.
[4] Chen, W.-C., et al. Toward a Compelling Sensation of Telepresence: Demonstrating a portal to a distant (static) office. Proceedings of IEEE Visualization, 2000, 327-333.
[5] Engelbart, D.C. and English, W.K. A research center for augmenting human intellect. AFIPS Conference Proceedings for Joint Computer Conference, San Francisco, 1968, 395-410.
[6] Ishii, H., Kobayashi, M., and Grudin, J. Integration of interpersonal space and shared workspace: ClearBoard design and experiments. ACM Transactions on Information Systems (TOIS), 1993, 349-375.
[7] Kuzuoka, H. Spatial workspace collaboration: a SharedView video support system for remote collaboration capability. SIGCHI Conference on Human Factors in Computing Systems, 1992, 533-540.
[8] Morikawa, O. and Maesako, T. HyperMirror: toward pleasant-to-use video mediated communication system. ACM Conference on Computer Supported Cooperative Work, 1998, 149-158.
[9] Tang, A., Neustaedter, C., and Greenberg, S. Embodiments and VideoArms in Mixed Presence Groupware. Report 2004-741-06, University of Calgary, 2004.
[10] Tang, J.C. and Minneman, S.L. VideoDraw: a video interface for collaborative drawing. ACM Transactions on Information Systems (TOIS), 1991, 170-184.
[11] Sellen, A.J. Speech patterns in video-mediated conversations. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1992, 49-59.
