DigiTable: an interactive multiuser table for collocated and remote collaboration enabling remote gesture visualization
François Coldefy and Stéphane Louis-dit-Picard
Orange Labs, 2 av. Pierre Marzin, 22300 Lannion
francois.coldefy, [email protected]

Abstract

We present DigiTable, an experimental platform that we hope will lessen the gap between co-present and distant interaction. DigiTable combines a multiuser tactile interactive tabletop, a video-communication system enabling eye contact with real-size visualization of the distant user, and a spatialized sound system for speech transmission. Based on a robust computer vision module, it provides a fluid gesture visualization of each distant participant, whether he or she is moving virtual digital objects or intending to do so. Remote gesture visualization contributes to the efficiency of distant collaboration tasks because it enables coordination among participants' actions and talk. Our main contribution addresses the development and integration of robust, real-time projector-camera processing techniques in Computer Supported Cooperative Work.

1. Introduction

We are interested in distant collaboration between groups and in so-called Mixed Presence Groupware (MPG) [16], which connects two (or more) distant shared interfaces and enables simultaneous interactions of collocated or remote users. Creative domains such as computer animation, co-design, architecture, urbanism, etc. may be particularly concerned by MPG because they often involve remote teams and because their material relies on graphical objects (images, drawings, plans, etc.) for which shared interfaces such as tabletops offer adapted, natural interactions (pen-based interaction, gesture). However, distant collaboration has to preserve as far as possible the fluidity and the mutual awareness provided by co-presence. Following the approach of Tang et al. [16], we focus on remote gesture visualization of distant users, as it conveys major information such as intentionality (who is intending to do what), action identity (who is doing what) and pointing [9][8], all of which facilitates distant communication and collaboration. We propose DigiTable, a platform combining a multiuser tactile interactive tabletop, a video-communication system enabling eye contact and full-size distant user visualization, and a spatialized sound system for speech transmission. A robust computer vision module for distant users' gesture visualization completes the set-up. The paper is organized as follows: Section 2 presents our motivations. Section 3 proposes a state of the art on remote gesture visualization. Section 4 describes DigiTable, whereas Sections 5 and 6 detail respectively the collaborative platform and the computer vision module. Section 7 presents experiments. Conclusion and future work are in Section 8.

2. Motivations

This paper addresses distant groupware collaboration. We strive to design a platform which preserves most of the characteristics of face-to-face interaction. We ground our approach in Gutwin's workspace awareness concept [5]. His analysis focuses on situational awareness, which is knowledge of the users' dynamic environment. Situational awareness is the way people maintain up-to-date mental models of complex and dynamic environments through perceptual information. It is peripheral to the primary group activity and depends on perception, comprehension and prediction of the environment and of others' actions [4]. It relies on non-intentional informational sources, such as consequential communication or artefact manipulation, and on intentional communication. Consequential communication is information transfer that emerges as a consequence of a person's activity [14]. Most of it is conveyed by the visual channel: position, posture, and head, arm and hand movements provide information about what is going on. Artefacts are a second source of information about current actions, because their characteristic sound signature, depending on their manipulation (moving, stacking, dividing, and so on), gives salient feedback about their use. Finally, intentional communication, through conversation and gestures (deictic, manifesting or visual evidence actions), completes the perceptual information gathered about the environment.

Situational awareness helps users stay aware of the state of task objects and of one another's activities. It makes it easier to plan what to say or to do and to coordinate speech and actions. We chose a tabletop as the collaborative device, first because it is a shared interface that is now readily available, and above all because it is probably the most common tool used for group meetings and human interaction. To facilitate intentional communication and the feeling of presence, we use a video-communication system providing real-size visualization of the distant users and eye contact by means of a spy camera (see Section 4 for details). As Tang et al. [16], we add a computer vision module to capture the local gestures of the participants on or above the table. The detected gesture is then transmitted to the distant site to be overlaid on the current desktop image. This remote gesture visualization software, similar to but probably more robust than VideoArms [16], combined with the video-communication system, helps convey most of the visual information needed to feed situational awareness.

Figure 1. DigiTable combines a DiamondTouch, a video and a spatialized audio communication system, and a computer vision module.

Thus DigiTable, the designed platform, should provide users with two major benefits: first, a feeling of presence through eye contact, and second, coordination and efficient collaboration through remote gesture visualization. Remote gesture visualization affords action identity, as participants are able to perceive who is doing what. It also enables action anticipation, as one can predict which digital object distant participants are intending to grasp. Finally, intentional communication is enhanced, as remote gesture visualization enables participants to point to a region of interest on the shared desktop to show something to distant users.

3. Related Work

VideoPlace [10], in 1985, was one of the very first video-based attempts to capture participants' gestures and actions and to fuse them with graphic images. Shared graphic images were viewed at two distant sites on a television and combined with the images of the participants' hands whenever they pointed at the television screen. Later, in 1991, Tang proposed VideoDraw [17], a remote shared drawing application providing remote gesture visualization. Each participant drew directly on the display screen of a video monitor with whiteboard markers. A camera, aimed at the display screen, transmitted the captured image, i.e. the participant's marks and gestures, to the other site. Polarized filters were used to manage video feedback. However, no digital collaboration was afforded, as VideoDraw was based solely on analogue video techniques. In order to enlarge the drastically limited shared drawing surface of VideoDraw, Tang et al. proposed VideoWhiteboard [18], a system using large translucent rear-projection screens, while a camera mounted behind each screen captured both drawings and people's shadows. But VideoWhiteboard prevented distant user identification, as only his or her shadow was transmitted. ClearBoard [7] enabled eye contact between distant participants by using a half-mirror polarizing projection screen whose transparency allows rear video projection while the user's reflected image is captured by a camera. As with VideoDraw, the user drew directly on the display surface, which showed the distant user's drawings fused with that user's face. More recently, Roussel [13] proposed in 2001 a shared telepointer device using chroma-keying to extract the user's hand placed over a blue surface and overlay it on the common view of the shared space. Malik et al. [12] presented Visual Touchpad, a more sophisticated device based on 3D tracking of both hands on a touch-sensitive blue pad for tactile and 3D interactions. The shared desktop image is then augmented by inserting the image of both hands. However, in both methods, the user does not interact on the visualization surface itself, since she moves her hands on a blue pad. In 2003, Takao et al. proposed Tele-Graffiti [15], a remote sketching system for augmented-reality shared drawings on real paper sheets. It extends previous work by Wellner on the Double DigitalDesk [19], which allowed two distant users to contribute to a virtual sheet by fusing the marks each participant had written on his or her actual paper sheet. The system is composed, at each site, of a camera-projector pair aimed at the white paper sheet on which the user draws. The user's actions and marks are captured by the camera and transmitted to the distant site, where the image is projected onto the other user's sheet after a dynamic geometric alignment. As minimal computer vision analysis of the projected and captured images is performed, the system runs at video frame rate (24 Hz), but video feedback, or visual echoing, may occur. Thus delicate camera and projector calibration and lighting adjustment are needed. Liao et al. [11] propose computer vision techniques to remove this visual echo. Unfortunately, the frame rate then drops drastically, below 7 Hz. A very accomplished work in the domain of remote gesture visualization is VideoArms, proposed by Tang et al. [16]. The authors designed a collaborative system with effective distant mutual awareness by coupling a touch-sensitive SmartBoard with a video capture of the participants' gestures overlaid on the remote desktop image. Users can easily predict, understand and interpret distant users' actions or intents, as their arms are visible whenever they want to interact with the tactile surface or show anything on it.

Figure 2. DigiTable architecture: the shared application and remote gesture analysis run at each distant site on a single workstation; video communication runs on a separate network.

4. DigiTable platform

DigiTable is a platform combining a multiuser tactile interactive tabletop, a video-communication system enabling eye contact and full-size visualization of the distant user, a computer vision module for remote gesture visualization of the distant users, and a spatialized sound system (see Fig. 1 and Fig. 2). It may be compared to the Escritoire [1] and ViCAT [2], which both associate a tabletop with a vertical screen for remote collaboration. However, these three systems differ mainly on three points: DigiTable and ViCAT are designed for interaction between remote and collocated groups, whereas the Escritoire focuses on a personal desktop supporting bimanual interaction at each site and providing an enhanced resolution of the projected image for better readability of documents; DigiTable provides situated and literal remote gesture visualization, in contrast to ViCAT and the Escritoire, which use telepointers and traces; finally, DigiTable proposes eye contact and full-size visualization of the remote participants, whereas both other systems use conventional video-communication tools. We use the MERL DiamondTouch [3], which is to date one of the very few available shared tabletops enabling simultaneous multiuser interaction. This device is a passive tactile surface onto which the desktop image is projected from a ceiling-mounted video-projector (video-projector 2 in Fig. 1).

The video-communication system uses a spy camera hidden behind a rigid wooden screen and peeping through an almost unnoticeable 3 mm wide hole. A second video-projector (video-projector 1 in Fig. 1) beams onto the wall screen the video of the distant site captured by the symmetric remote spy camera (see Fig. 2). Eye contact is guaranteed by placing the camera peep-hole at approximately the estimated eye height of a seated person and by beaming the distant video onto the screen such that the peep-hole and the distant user's eyes coincide. Fine tuning of the hole's design and of the video-projector beam's orientation is performed to avoid dazzling the camera. The computer vision module uses a camera placed at the ceiling and pointing at the tabletop. The module consists of a segmentation process detecting any object above the table by comparing, at almost frame rate, the captured camera image and the actual known desktop image projected on the tabletop, up to a geometric and colour distortion. As output, it produces an image mask of the detected objects (hands, arms, or any object above the table) extracted from the camera image. The compressed mask is sent to the distant site, where it is overlaid on the current desktop image. We use semi-transparency to let the user see the desktop "under" the arms of his distant partner. Fig. 3 shows users' gestures and their visualization at the remote site. The bottom left image shows the overlay of the detected hand of site 2 (upper right image); the bottom right image shows the overlay of the detected arms of site 1 (upper left image).

5. Distant collaborative system

5.1. Audio-Video Communication System

At each site, a communication system grabs audio and video data of the participants using a stereo microphone and the hidden peep-hole camera. The audio and video channels are compressed using respectively the TDAC codec (configured at 64 kbps, 32 kbps per channel) and the H.263 codec (configured at 1 Mbps), and are sent to the remote site using RTP (Real-time Transport Protocol) over UDP/IP. Note that the video quality is currently limited by the peep-hole camera resolution (420 lines expanded to 4CIF, 704x576, resolution). At the remote site, the audio and video channels are respectively rendered through the stereo speakers and through the short-focal video-projector (an NEC WT610 video-projector) beaming onto the vertical wall screen (Fig. 1).
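The paper does not give the transport-layer implementation. As a rough, hypothetical Java sketch of how an encoded audio or video frame can be wrapped in a minimal 12-byte RTP header and shipped over UDP/IP (class and method names are ours; fragmentation to MTU size, marker bits and RTCP are omitted):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Minimal RTP-like sender: wraps each encoded frame in a 12-byte RTP header
// and ships it as a single UDP datagram (a real sender fragments to MTU size).
public class RtpLikeSender {
    private final DatagramSocket socket;
    private final InetAddress peer;
    private final int port;
    private int sequence = 0;

    public RtpLikeSender(String host, int port) throws Exception {
        this.socket = new DatagramSocket();
        this.peer = InetAddress.getByName(host);
        this.port = port;
    }

    public void send(byte[] encodedFrame, int payloadType, long timestamp, int ssrc) throws Exception {
        ByteBuffer buf = ByteBuffer.allocate(12 + encodedFrame.length);
        buf.put((byte) 0x80);                       // version 2, no padding, no extension, no CSRC
        buf.put((byte) (payloadType & 0x7F));       // payload type, marker bit cleared
        buf.putShort((short) (sequence++ & 0xFFFF));
        buf.putInt((int) timestamp);                // media timestamp (e.g. 90 kHz clock for video)
        buf.putInt(ssrc);                           // synchronization source identifier
        buf.put(encodedFrame);                      // H.263 or TDAC payload
        socket.send(new DatagramPacket(buf.array(), buf.position(), peer, port));
    }
}

A symmetric receiver would strip the first 12 bytes, reorder on the sequence number if needed, and hand the payload to the decoder.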

5.2. Collaborative Application Server

The DiamondTouch enables up to four users to interact simultaneously. Interaction with window containers (rotation, translation, resize, zoom, etc.) is modelled according to the "sheet of paper" metaphor. A collaborative application server manages the collaboration between the connected sites and provides the replication of events occurring on window containers at both sites, in order to keep the shared desktop consistent.
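The server's implementation is not described in the paper. A rough, hypothetical sketch of such an event relay (the event serialization format and class names are ours; Section 5.3 notes that the actual link is TCP/IP) could look like:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal event relay: every line received from one site is forwarded to all
// connected sites so that each shared desktop can replay it locally.
public class EventRelayServer {
    private final List<PrintWriter> peers = new CopyOnWriteArrayList<>();

    public void serve(int port) throws Exception {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket site = server.accept();
                peers.add(new PrintWriter(site.getOutputStream(), true));
                new Thread(() -> relay(site)).start();
            }
        }
    }

    private void relay(Socket site) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(site.getInputStream()))) {
            String event; // e.g. "DRAG containerId dx dy"; the wire format here is hypothetical
            while ((event = in.readLine()) != null) {
                for (PrintWriter peer : peers) {
                    peer.println(event); // replicate to every connected site
                }
            }
        } catch (Exception ignored) {
            // disconnected peers are not cleaned up in this sketch
        }
    }

    public static void main(String[] args) throws Exception {
        new EventRelayServer().serve(5000); // port number is arbitrary
    }
}

Each shared desktop process would hold a matching socket, write its local events to it, and inject the lines it reads back into its event pump, as described in Section 5.3.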

Figure 3. Remote gesture visualization: view of both distant tables in action (upper row) and of both overlaid desktops (lower row); the bottom left image shows the overlay of the detected hand of site 1 (upper right image); the bottom right image shows the overlay of the detected arms of site 2 (upper left image).

5.3. Shared desktop

We use ad hoc software based on Java and JOGL (Java Binding for OpenGL) to implement the interactive window containers displayed on the tabletop as the shared workspace. Java 2D renders the window containers into off-screen buffers, which we use as OpenGL textures to compose the desktop displayed on the tabletop. As shown in Fig. 4, the desktop compositor lays out textured quads (each quad is a window container representing a Java 2D application). Users interact with the quads (rotation, translation, scaling) or with their contents (Java 2D applications). Off-screen buffers and their associated OpenGL textures are dynamically updated.
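The paper does not list the exact JOGL calls; the following is only a hedged sketch of the idea of turning a Java 2D off-screen buffer into an OpenGL texture drawn as a quad. JOGL 2 package names are assumed (older releases use javax.media.opengl), and the class and method names are ours:

import java.awt.image.BufferedImage;
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.GLProfile;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.awt.AWTTextureIO;

// Sketch: render one window container (already painted by Java 2D into a
// BufferedImage) as a textured quad of the composed desktop.
public class ContainerQuad {
    private Texture texture;

    // Called on the GL thread whenever the container's off-screen buffer changed.
    public void updateTexture(GL2 gl, BufferedImage offscreenBuffer) {
        if (texture != null) {
            texture.destroy(gl);
        }
        texture = AWTTextureIO.newTexture(GLProfile.getDefault(), offscreenBuffer, false);
    }

    // Draws the container as an axis-aligned quad; rotation, translation and
    // scaling of the "sheet of paper" would be applied via the modelview matrix.
    public void draw(GL2 gl, float x, float y, float w, float h) {
        texture.enable(gl);
        texture.bind(gl);
        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0f, 0f); gl.glVertex2f(x, y);
        gl.glTexCoord2f(1f, 0f); gl.glVertex2f(x + w, y);
        gl.glTexCoord2f(1f, 1f); gl.glVertex2f(x + w, y + h);
        gl.glTexCoord2f(0f, 1f); gl.glVertex2f(x, y + h);
        gl.glEnd();
        texture.disable(gl);
    }
}

The actual DigiTable compositor also handles per-quad transforms and the semi-transparent gesture layer described below; those parts are omitted here.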

Events received from the network are injected into the event pump of the shared desktop processes, to be handled like other local events. In the computer vision process, each segmented image is compressed using an RLE (Run-Length Encoding) technique. As remote gesture mask images contain very little information, such a coding algorithm is very efficient in both processing time and compression. The computer vision process is connected to the image server using a UDP/IP-based network link. Each segmented image is split into several fragments wrapped into UDP/IP packets, which are sent to the remote shared desktop processes through the image server. Each shared desktop process receives the image fragments and rebuilds the image in order to integrate it as a semi-transparent 2D layer displayed in front of the shared desktop. The segmented image is overlaid on the shared desktop using OpenGL texturing. As UDP/IP is not fully reliable, a few image fragments may be missing and the segmented image may be only partially rebuilt. However, a potentially corrupted image mask is soon replaced by the next one received from the stream.
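The RLE scheme itself is not detailed in the paper; as a minimal sketch of run-length encoding a binary mask row by row (the wire format below is hypothetical, not DigiTable's actual one):

import java.io.ByteArrayOutputStream;

// Sketch: run-length encode a binary mask (false = background, true = detected object).
// Each row is encoded as alternating run lengths, starting with a background run,
// each stored on two bytes; runs longer than 65535 are split with a zero-length run.
public final class MaskRle {
    public static byte[] encode(boolean[][] mask) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (boolean[] row : mask) {
            boolean current = false;            // each row starts by counting background pixels
            int run = 0;
            for (boolean pixel : row) {
                if (pixel == current) {
                    run++;
                    if (run == 0xFFFF) {        // flush a full run, insert a zero-length opposite run
                        writeRun(out, run);
                        writeRun(out, 0);
                        run = 0;
                    }
                } else {
                    writeRun(out, run);
                    current = pixel;
                    run = 1;
                }
            }
            writeRun(out, run);                 // last run of the row
        }
        return out.toByteArray();
    }

    private static void writeRun(ByteArrayOutputStream out, int run) {
        out.write((run >> 8) & 0xFF);
        out.write(run & 0xFF);
    }
}

A decoder would walk the runs using the known mask width; fragmenting the resulting byte stream into UDP packets is handled separately, as described above.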

Figure 4. Desktop composition using off-screen buffers of Java 2D/Swing applications and OpenGL.

Each shared desktop process is connected to the collaborative application server through a TCP/IP network link. All events (including clicks, dragging operations, etc.) performed by local participants on window containers are sent to the server, which dispatches them to all connected sites.

6. Computer vision

The computer vision method consists of comparing the captured camera image against the known projected desktop in order to detect anything (hands, arms, or any other object) between the projection surface (here the DiamondTouch) and the camera. Hands and arms are identified as the locus of the image (pixels) where the camera image and the projected desktop do not correspond, up to a geometric and colour model transformation. The computer vision module addresses four technical weaknesses of VideoArms [16]: 1) the proposed method detects any object on or over the table without needing any learning stage or a priori knowledge about the object to detect; 2) no restriction is imposed on the projected images (VideoArms needs dark-toned images); 3) the proposed method is robust to external lighting changes (variations in daylight or in artificial lighting); 4) calibration is automatic.

6.1. Geometric and colour models

We consider that the image captured by the camera corresponds to the known desktop image projected onto the table, up to a homographic transformation H from the desktop grid S to the camera image and a colour transfer function on each channel. [...] The minimization of U_{L_θ}(δθ) with respect to δθ can then be performed by an Iteratively Reweighted Least Squares (IRLS) scheme [6]. Let θ_0 be the initialization of θ, computed from H_0, the coarse estimate of H obtained from the correspondence between the positions of the green squares in the camera image and in the desktop image, and from α_0, the coefficients of the identity colour transfer functions. The complete algorithm is:

δθ ← 0
n ← 0
while not convergence do
    θ_n ← θ_{n−1} + δθ
    compute δθ minimizing U_{L_{θ_n}}(δθ), by IRLS
    n ← n + 1
end while

Convergence is assumed to be achieved when (U(θ_{n−1}) − U(θ_n)) / U(θ_n) falls below a predefined threshold.
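Once calibrated, H maps each desktop pixel s to the camera position Hs (in homogeneous coordinates), which is how the camera image is warped and compared with the desktop. A small illustrative sketch of this lookup with bilinear interpolation (array layout and method names are ours):

// Sketch: warp a camera image into the desktop reference frame using the
// estimated 3x3 homography H (row-major), with bilinear interpolation.
public final class HomographyWarp {
    // Returns the camera grey level at H * (x, y, 1), or 0 outside the image.
    public static double sample(double[][] camera, double[] h, double x, double y) {
        double w = h[6] * x + h[7] * y + h[8];
        double u = (h[0] * x + h[1] * y + h[2]) / w;   // camera column
        double v = (h[3] * x + h[4] * y + h[5]) / w;   // camera row
        int u0 = (int) Math.floor(u), v0 = (int) Math.floor(v);
        if (v0 < 0 || u0 < 0 || v0 + 1 >= camera.length || u0 + 1 >= camera[0].length) {
            return 0.0;
        }
        double du = u - u0, dv = v - v0;
        return (1 - dv) * ((1 - du) * camera[v0][u0] + du * camera[v0][u0 + 1])
             + dv * ((1 - du) * camera[v0 + 1][u0] + du * camera[v0 + 1][u0 + 1]);
    }
}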

6.4. Online object detection

After calibration, the geometric transformation H is estimated and assumed to be fixed. In order to adapt automatically to any lighting change (the sun shining or being hidden by clouds outside, lamps being switched on or off inside the room), we re-estimate online, at each time t, the 3-colour transfer function parameters α_t. As relation (3) is linear in α, the computation of its estimate α̂_t is straightforward using the IRLS algorithm applied to D^t and R^t, respectively the desktop and camera images at time t. In order to reduce processing time, one iteration of the IRLS algorithm is performed according to:

α̂_c^t = Q_{c,t}^{-1} ( Σ_{s∈S} G_{s,c}^T ψ_T(ε^t(s, c)) R^t(c, Hs) )    (1)

with

Q_{c,t} = Σ_{s∈S} G_{s,c}^T G_{s,c} ψ_T(ε^t(s, c)),

where α_c^t = (α_{c,k}^t)_{k=1..p} and G_{s,c} = (g_k(D^t(c, s)))_{k=1..p} are row vectors of length p, and (.)^T stands for matrix transposition. However, the matrix Q_{c,t} may be singular, as the desktop image may be uniform (i.e. D^t(c, s) is constant for all s). We thus replace Q_{c,t} in relation (1) by Q̃_{c,t} = Q_{c,t} + λI, where I is the identity matrix and λ a small value chosen so that it does not disturb the estimation of α when Q_{c,t} is invertible.

Any object (a hand, an arm or anything else) placed above the table masks the projected desktop. It is detected in the camera image as the set of pixels s for which model (1) is not valid, i.e. for which the error weight w(c, s) [6] computed in the IRLS iteration is lower than a pre-defined threshold. We set this threshold to 0.25 for each channel. Some classical post-processing, such as morphological filtering, is then performed to remove small detections and to fill holes in the obtained masks.
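As an illustration of one regularized IRLS step of relation (1) for a single channel and patch, followed by the weight thresholding that builds the object mask, here is a sketch; the basis functions g_k, the weight function ψ and the robust scale are simplified placeholders, not the paper's exact choices:

// Sketch: one regularized IRLS step for the colour transfer coefficients of one
// channel and one patch, followed by outlier thresholding to build the mask.
public final class ColourTransferUpdate {
    static final int P = 3;                 // e.g. basis {1, d, d^2}; the paper's choice may differ
    static final double LAMBDA = 1e-3;      // regularization so Q + lambda*I stays invertible
    static final double THRESHOLD = 0.25;   // weight below this value => pixel flagged as object

    static double[] basis(double d) { return new double[] {1.0, d, d * d}; }

    // Tukey-like weight in [0,1]: close to 1 for small residuals, 0 for large ones.
    static double weight(double residual, double scale) {
        double r = residual / scale;
        return r * r >= 1.0 ? 0.0 : (1.0 - r * r) * (1.0 - r * r);
    }

    // desktop[i], cameraWarped[i]: channel values of pixel i of the patch (camera already warped by H);
    // alpha: current coefficients; mask[i] is set to true where the model does not explain the pixel.
    static double[] step(double[] desktop, double[] cameraWarped, double[] alpha,
                         double scale, boolean[] mask) {
        double[][] q = new double[P][P];
        double[] b = new double[P];
        for (int i = 0; i < desktop.length; i++) {
            double[] g = basis(desktop[i]);
            double predicted = 0.0;
            for (int k = 0; k < P; k++) predicted += alpha[k] * g[k];
            double w = weight(cameraWarped[i] - predicted, scale);
            mask[i] = w < THRESHOLD;                       // outlier => object above the table
            for (int k = 0; k < P; k++) {
                b[k] += w * g[k] * cameraWarped[i];
                for (int l = 0; l < P; l++) q[k][l] += w * g[k] * g[l];
            }
        }
        for (int k = 0; k < P; k++) q[k][k] += LAMBDA;     // regularized Q~ = Q + lambda*I
        return solve(q, b);                                // new alpha for the next frame
    }

    // Plain Gauss-Jordan elimination for the small P x P system.
    static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++) if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
            double[] tmpRow = a[col]; a[col] = a[pivot]; a[pivot] = tmpRow;
            double tmp = b[col]; b[col] = b[pivot]; b[pivot] = tmp;
            for (int r = 0; r < n; r++) {
                if (r == col || a[col][col] == 0.0) continue;
                double f = a[r][col] / a[col][col];
                for (int c = col; c < n; c++) a[r][c] -= f * a[col][c];
                b[r] -= f * b[col];
            }
        }
        double[] x = new double[n];
        for (int k = 0; k < n; k++) x[k] = a[k][k] == 0.0 ? 0.0 : b[k] / a[k][k];
        return x;
    }
}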

6.5. Implementation

We use multithreading to take advantage of dual-core processors. One thread drives the camera capture and the shared desktop acquisition. Another thread, triggered by the first one, waits for the desktop and camera images and computes the image masks before transmitting them to the image server.
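A rough sketch of this two-thread organization, using a standard blocking queue to hand frame pairs from the capture thread to the processing thread (the class names and the three collaborator interfaces are ours, not DigiTable's actual components):

import java.awt.image.BufferedImage;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the two-thread pipeline: one thread grabs the camera frame and the
// current desktop image, the other computes the object mask and ships it out.
public class VisionPipeline {
    record FramePair(BufferedImage camera, BufferedImage desktop) {}

    private final BlockingQueue<FramePair> queue = new ArrayBlockingQueue<>(1);

    public void start(Grabber grabber, MaskProcessor processor, MaskSender sender) {
        Thread capture = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // offer(): if the mask thread is still busy, the new frame pair is simply dropped
                queue.offer(new FramePair(grabber.grabCamera(), grabber.grabDesktop()));
            }
        }, "capture");
        Thread compute = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    FramePair pair = queue.take();                 // waits for the capture thread
                    sender.send(processor.computeMask(pair.camera(), pair.desktop()));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "mask");
        capture.start();
        compute.start();
    }

    // Placeholders for DigiTable's actual capture, segmentation and network components.
    public interface Grabber { BufferedImage grabCamera(); BufferedImage grabDesktop(); }
    public interface MaskProcessor { byte[] computeMask(BufferedImage camera, BufferedImage desktop); }
    public interface MaskSender { void send(byte[] rleMask); }
}

With a queue capacity of one, mask images are refreshed at whatever rate the processing thread sustains, which matches the behaviour reported in Section 7.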

The local variations of natural light illumination, especially on larger surfaces such as tables, lead us to estimate the colour transfer functions on local patches. We divide the desktop image into four equal patches, in which we independently estimate the three colour transfer functions for each frame. We use a Sony EVI-D70 camera, whose pan, tilt and zoom functionalities facilitate the set-up. The desktop image is SXGA (1280x1024) and we choose a low camera resolution (192x144). Note that relation (2) is expressed in the desktop grid S domain. We sub-sample the desktop image, which is an SXGA (1280x1024) image, down to 160x128 in order to reduce processing time.

7. Experimental results

7.1. Calibration

The upper row of Fig. 6 shows the green-squares calibration image and the obtained segmentation. The second row shows the original contrasted desktop image (right) used for the fine estimation of θ, and its capture by the camera (left) when projected onto the table. The geometrical deformation is due to the parallax. Notice the colour disparity between the original desktop (right) and the captured image (left). The third row of Fig. 6 shows respectively the captured desktop image warped into the desktop reference frame (left image) and the false-colour weight image after calibration (right). For visualization purposes, we map to [0, 255] the image weights w(c, s), whose values originally lie between 0 and 1: a white colour corresponds to a perfect match between the original desktop image and the warped and colour-transformed camera image, whereas a dark tone stands for a poor match. Note that the middle bottom of the weight image shows a darker zone where the colour transformation model is not well estimated. It is a consequence of a locally brighter illumination of the projector due to defects of the optical device (dust on the projector's lens). This brighter area is clearly visible in the upper left green-squares image in Fig. 6. We deliberately keep this easily removable artefact in order to show how the colour transfer functions may be spatially dependent. As described earlier, we divide the image into four equal patches for colour transfer function estimation, in order to cope with spatial variations of illumination. However, our model is too constrained to fit higher-frequency spatial variations. The bottom row of Fig. 6 shows the Red, Green and Blue transfer functions mapping the colour channels of the original desktop image (right image of the second row) to the geometrically warped camera image (left image of the third row) for the upper quarter of the image. Note how the colour level range is shrunk. The transfer functions map colours under level 60 to a near-constant value because of the weak sensitivity of the camera in low illumination.

Figure 7. Online gesture detection: upper row: the captured image of the table (left) and the original desktop image projected onto the table (right); middle row: outlier detection by thresholding the weight image (left) and morphological post-processing to remove small detections (right); bottom row: image mask of the hands computed from the camera image and sent to the distant site (left) and overlay of the image mask and the current desktop image at the distant site (right).

Figure 6. Automatic calibration steps: a) upper row: camera acquisition of the green-squares image projected onto the table (left) and the corresponding square detection (right); b) second row: contrasted calibration desktop image (right) and its acquisition by the camera (left) when projected onto the table; c) third row: warping of the camera image into the desktop image reference frame (left) and the obtained false-colour weight image after parameter estimation convergence (right); d) bottom row: colour transfer functions for the Red, Green and Blue channels.

Surprisingly, the blue transfer function decreases in the upper level range, whereas one would expect the transfer function to be monotonic. This is mainly due to the absence of data in this level range. The transfer function estimate is thus "undefined" but is still computed, because of the use of the regularized matrix Q̃_{c,t} in relation (1).

7.2. Online detection

Fig. 7 presents the steps of the remote gesture detection process. The upper row shows the captured image while a user is interacting (left) and the original desktop image projected onto the table (right). The middle row shows the result of the outlier detection obtained by thresholding the weight image. A morphological post-processing improves this detection by removing small detections (essentially contour lines) and by partly filling holes (right image). Some holes remain, whereas a large false detection is visible near the user's right wrist. The bottom row shows the image mask of the hands computed from the camera image and sent to the distant site (left) and the overlay of the sent image mask on the current desktop image at the distant site (right).

Figure 8. Remote gesture visualization: upper row: view of both distant tables; bottom row: remote gesture overlaid on the current desktop of the corresponding site.

Figure 8 shows the remote gesture visualization in action at two distant sites (upper row). The bottom row shows how the local shared desktop looks at each site, including the overlay of the distant participant's arms. No visual echo occurs. However, as camera and desktop captures are not synchronized and are acquired successively, some traces may be visible during fast gestures or fast shared-workspace changes, such as quickly moving windows or video cuts during film playback. No constraint is imposed on the desktop image any more (colours, brightness, etc.). Any kind of object can be detected, provided it is thicker than a pen, owing to the coarse analysis resolution (160x128). Sometimes, when an almost empty, uniform desktop abruptly replaces one richer in colours and shapes during user interaction, the colour transfer estimates can become loose, and one or two patches of our desktop subdivision may be detected as a whole set of outliers. The only way for the colour transfer function estimates to recover the correct uniform background colour is to bring back onto the current desktop an application window with different colours. This is currently the main limitation of our technique. The computer vision algorithm provides 32 image masks per second when running alone. When connected, the arm image masks are refreshed at each site at 17 Hz on a dual-core Intel Xeon 3.73 GHz with 2 GB of RAM.

8. Conclusion

We propose an efficient platform for remote groupware collaboration, conveying the essential visual information needed for efficient remote coordination and communication between participants. Users favour remote gesture visualization. Remote and collocated collaboration are both worthwhile experiences: remote collaboration is no longer experienced as a poor ersatz of the collocated situation. We propose a model which improves remote gesture visualization techniques, although we have to use a low analysis resolution to cope with near real-time processing (17 Hz). Further work will focus on code optimization.

Acknowledgments: this work was supported by the French government within the RNTL Project DigiTable and the Media and Networks Cluster.

References

[1] M. Ashdown and P. Robinson. Remote collaboration on desk-sized displays. Journal of Visualization and Computer Animation, 16(1):41–51, 2005.
[2] F. Chen, P. Eades, J. Epps, S. Lichman, B. Close, P. Hutterer, M. Takatsuka, B. Thomas, and M. Wu. ViCAT: Visualisation and Interaction on a Collaborative Access Table. In Proceedings of the 1st IEEE International Workshop on Horizontal Interactive Human-Computer Systems, 2006.


[3] P. H. Dietz and D. Leigh. DiamondTouch: A Multi-User Touch Technology. In ACM Symposium on User Interface Software and Technology (UIST), pages 219–226, November 2001.
[4] M. Endsley. Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1):32–64, 1995.
[5] C. Gutwin and S. Greenberg. A descriptive framework of workspace awareness for real-time groupware. Computer Supported Cooperative Work, Special Issue on Awareness in CSCW, 11:411–446, 2002.
[6] P. Huber. Robust Statistics. Wiley, 1981.
[7] H. Ishii and M. Kobayashi. ClearBoard: A Seamless Medium for Shared Drawing and Conversation with Eye Contact. In Proceedings of ACM SIGCHI, pages 525–532, 1992.
[8] D. Kirk, D. Stanton, and T. Rodden. The effect of remote gesturing on distance instruction. In Proceedings of the Conference on Computer Support for Collaborative Learning (CSCL), 2005.
[9] R. Kraut, S. Fussell, and J. Siegel. Visual information as a conversational resource in collaborative physical tasks. Human-Computer Interaction, 18:13–49, 2003.
[10] M. W. Krueger, T. Gionfriddo, and K. Hinrichsen. Videoplace - an artificial reality. In CHI '85: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 35–40, New York, NY, USA, 1985. ACM Press.
[11] C. Liao, M. Sun, R. Yang, and Z. Zhang. Robust and Accurate Visual Echo Cancelation in a Full-duplex Projector-Camera System. In International Workshop on Projector-Camera Systems (ProCams), 2006.
[12] S. Malik and J. Laszlo. Visual Touchpad: a two-handed gestural input device. In International Conference on Multimodal Interfaces (ICMI), 2004.
[13] N. Roussel. Exploring new uses of video with videoSpace. In Proceedings of the 8th IFIP International Conference on Engineering for Human-Computer Interaction (EHCI), volume 2254, pages 73–90, 2001.
[14] L. Segal. Designing team workstations: The choreography of teamwork. In Local Applications of the Ecological Approach to Human-Machine Systems, 1995.
[15] N. Takao, J. Shi, and S. Baker. Tele-Graffiti: A Camera-Projector Based Remote Sketching System with Hand-Based User Interface and Automatic Session Summarization. International Journal of Computer Vision, 53(2):115–133, 2003.
[16] A. Tang, C. Neustaedter, and S. Greenberg. VideoArms: embodiments for mixed presence groupware. In Proceedings of the BCS-HCI British HCI Group Conference, 2006.
[17] J. C. Tang and S. L. Minneman. VideoDraw: a video interface for collaborative drawing. ACM Transactions on Information Systems, 9(2):170–184, 1991.
[18] J. C. Tang and S. L. Minneman. VideoWhiteboard: video shadows to support remote collaboration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 315–322, 1991.
[19] P. Wellner. Interacting with paper on the DigitalDesk. Doctoral dissertation, 1994.
