Documenting the pen-based interaction

Cesar Teixeira
Departamento de Computação
Universidade Federal de São Carlos
[email protected]

Maria da Graça Pimentel, Cassio Prazeres, Helder Ribas, Daniel Lobato
Departamento de Ciências de Computação
Universidade de São Paulo
mgp,prazeres,helder,[email protected]

ABSTRACT

Pen-based interaction allows users to register information using a variety of devices such as PDAs, Tablet PCs or electronic whiteboards. As a result, users have the opportunity to review the information by means of a document that represents the final, static result. Alternatively, users may be able to play back the digital ink using, for instance, the same application used for capture. However, considering the context in which pen-based ink is usually used – e.g. meetings or lectures – there is an important opportunity for a user to review intermediary views of the pen-based interaction. For instance, it may be relevant for a student to produce a printable version of a particular diagram presented by an instructor so that the several steps used in the construction of the diagram are made explicit. Similarly, it may be imperative for the instructor to be able to identify the steps taken by a student while making notes during a laboratory session, for example. We have defined operations that model the user interaction during digital ink capture and implemented the operations in a Web-based application that allows reviewing documents created automatically as a result of the user interaction with electronic whiteboards or Tablet PCs. The operations may be exploited in many situations where it is important to have a detailed and customized report of a user's writing activity.

Keywords: Pen-based interaction, intermediary documents, ubiquitous computing, natural interfaces, capture and access applications, inkteractor operations.

1. INTRODUCTION

Weiser's [30] vision of ubiquitous computing included the transparent use of many non-traditional computational devices in everyday environments supporting human activities. Small portable devices such as PDAs, large writing surfaces such as electronic whiteboards, and tiny sensors spread out in environments are examples of devices that would be exploited in applications supporting users in their activities without explicit user-computer interaction. For instance, an electronic whiteboard allows a user to write on a large surface as one is used to doing on traditional whiteboards, without explicitly being aware that there is a computational device capturing, processing and presenting the associated electronic ink. Applications have been built that capture the user's natural interaction (e.g. capturing audio, video or pen-based interaction) so as to transparently produce associated multimedia documents to be reviewed later; the more context-aware the application – providing differentiated services depending on the user context – the more ubiquitous the supported services [2]. Similarly, efforts have been geared towards providing information without demanding much attention by means of ambient displays [20].

Pen-based devices such as PDAs, tablet PCs and electronic whiteboards include applications that process the user interaction both for data input and for traditional WIMP (Windows, Icons, Menus and Pointing devices) operations. With respect to WIMP operations, the pen is used for the selection of menu items or the activation of hypertext links during Web browsing, for instance. With respect to data input, the pen is used to draw ink strokes that may be processed for handwriting recognition [13] [14], for instance. More elaborate processing, such as the recognition of text underlying the ink (a PDF, for instance) for querying and linking, has also been reported [26]. Alternatively, the strokes can be processed to generate static documents that resemble post-it notes, in the case of PDAs [9], annotated paper, in the case of tablet PCs (e.g. the Microsoft Journal application), or presentation slides, in the case of electronic whiteboards [8] [11] [27] – to name a few common uses.

In the many applications where the strokes are not processed (for recognition or WIMP operations, for instance), they can be reviewed in different forms, including: as a static document representing the final drawing state, or as a dynamic animation that allows the replay of the strokes. However, these two reviewing alternatives impose important limitations when intermediary representations, or snapshots, are relevant to the user – in particular when a large amount of strokes is captured. On the one hand, the static document simply hides intermediary configurations. On the other hand, the dynamic animation demands that the user review each stroke drawing until one desired snapshot is achieved; if other intermediary configurations are demanded, the playback is resumed until the next snapshot is obtained. (Throughout the paper, we use the terms snapshot documents and intermediary documents interchangeably.)

The problem we tackle in this paper is that of generating documents that allow the visualization of intermediary configurations of stroke drawings created by pen-based interaction. Our objective is to define operations that may be applied to ink strokes so as to allow the generation of documents containing representative snapshots of the whole interaction. Our investigation methodology included a study of representative data from an educational application and the formulation of scenarios resulting from interviews with real users. Our original contribution in this paper is the definition of a set of operations that considers information relative to typical pen-based interaction, such as changes in stroke color and the amount of drawing activity, to generate intermediary documents corresponding to representative snapshots of the overall interaction. We present documents that illustrate the use of the operations in an application executing on an electronic whiteboard and on a tablet PC.

The remainder of this paper is organized as follows. In Section 2 we outline applications supporting pen-based interaction. In Section 3 we briefly describe the iClass infrastructure, which we used to experiment with the operations proposed in this paper. In Section 4 we present scenarios that have inspired us in the definition of the ink-based operations. In Section 5 we present our proposed operations aimed at generating snapshot documents. In Section 6 we illustrate the use of some of our operations. In Section 7 we summarize our contributions and discuss future work.

2. PEN-BASED APPLICATIONS

Supporting raw electronic ink capture and presentation as well as elaborate gesture recognition associated with operations such as copy and paste, Tivoli [23] was a pioneer in exploiting pen-based user interaction with an electronic whiteboard, investigated in the context of meetings. Other systems implemented in the meeting domain include NotePals [11], TeamSpace [27] and LiteMinutes [8]. Related research has also been reported in the educational domain, as represented by systems such as Authoring on the Fly [18], eClass [1, 6, 25], iClass [7, 24], Ubiquitous Presenter [31], E-Chalk [13] and SmartClassroom [28].

Several efforts have tackled the production of documents as a result of the interaction; a case in point is eClass [1]. The system was originally built at Georgia Tech to allow information from a traditional lecture (slides, ink from an electronic whiteboard, audio and video) to be captured so as to automatically produce Web-based multimedia documents. The research focused on exploiting ubiquitous computing platforms to automate the authoring of multimedia documents and on the evaluation of the long-term use of the system [6]. Attention was also given to requirements demanding the extension of the resulting documents after the session had been concluded and to the preparation of sessions in general [25].

Other efforts have continued the investigation of the production of documents that include information relative to pen-based interaction. Chu and Chen [10] capture information relative to several media, including ink, HTML-based navigation and video, and have considered several operations to allow the replay of the lecture as a synchronized multimedia document.

Turning to collaborative work, Ubiquitous Presenter allows students to use tablet PCs to interact with slides presented by instructors; students may also use Web-based forms on traditional desktop computers to add annotations to the corresponding documents after the lecture [31]. As another example, Kam et al. [16] investigated the use of shared whiteboards by student groups in classroom settings in their system Livenotes, which runs on tablet PCs over a wireless network. The system supports pen-based annotation on top of a common background containing the instructor's slides, over which users can discuss and annotate. Also regarding collaborative work, NotePals [11] was a pioneer in exploiting PDAs to allow users to produce and share annotations.

The understanding of specific uses of ink in live sessions has also been investigated. In the educational domain, Anderson et al. have carried out an in-depth study of how instructors make use of electronic ink and speech together while lecturing – their goal is "to inform the development of future tools for supporting classroom presentation, distance education and viewing of archived lectures" [3].

In yet another research perspective, investigations have also been carried out in terms of infrastructures supporting the construction of pen-based software. An important example is SATIN [15], which provides an architecture for handling pen input that includes recognizers and interpreters. Many efforts have been reported with respect to supporting handwriting recognition. As one example, Friedland et al. [13] present E-Chalk, which captures the instructor's interaction with large electronic whiteboards and provides services such as mathematical formula recognition, integration with useful applets and the inclusion of images or even pre-recorded series of strokes. Because it uses an underlying distributed infrastructure, the captured contents can be accessed by remote viewers. E-Chalk has also been integrated with traditional video-conference software.

3. ICLASS CAPTURE AND PUBLISHING

Inspired by the original eClass [1], we built iClass to provide an extensible software infrastructure used to investigate many problems relative to ubiquitous computing applications [7].¹ Figure 1 illustrates iClass in use by an instructor with an electronic whiteboard and by a student with a tablet PC. iClass is able to record several pieces of information produced during a lecture – including strokes and slides from an electronic whiteboard, audio, video and Web pages. As a result, at the end of the lecture, an XML document integrating the different captured media is produced and automatically stored in a document repository.

¹ iClass has been built in the context of an international collaboration project with Gregory Abowd's group at Georgia Tech, funded by the NSF in the U.S. and by CNPq and FAPESP in Brazil.

To allow users to review the captured information, as illustrated in Figure 2, the stored XML documents are processed on demand by XSLT style sheets to produce HTML or SMIL visualizations. Yet another option is a Java applet that plays back slides at the stroke level. As in many pen-based applications, iClass allows users to review the ink writing with both static and dynamic views. A static document may contain a copy of all slides presented during the session, such as the HTML document illustrated in Figure 2 (left). A dynamic view of the slides is made available by means of an applet, iPlayer, that allows the strokes to be played back at the same speed at which they were captured (useful for synchronous playback with audio, for instance). Figure 2 (right) illustrates the dynamic playback of the slide presented at the top right of Figure 2 (left). iClass has been in use for several semesters, and exploits an infrastructure that allows the rapid prototyping of capture and access applications [24].

Figure 2: The captured information is used to generate documents in several formats, including an HTML document where each slide is associated with one static image (left); a dynamic version of the whole session can be played back, causing the ink to be presented at the original drawing speed (possibly synchronously with captured audio) (right).
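As a rough illustration of this on-demand publishing step, the following Python sketch applies an XSLT style sheet to a captured-session XML document using the lxml library; the file names and the element structure they imply are illustrative assumptions, not the actual iClass schema.

    from lxml import etree

    # Hypothetical file names: a captured session and a style sheet that
    # renders one static HTML page per slide (names are illustrative only).
    session = etree.parse("session-capture.xml")
    stylesheet = etree.XSLT(etree.parse("session_to_html.xsl"))

    # The transformation is applied on demand, when a reviewer requests the page.
    html_view = stylesheet(session)
    print(str(html_view))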

Figure 1: Instructor writing down on an electronic whiteboard using iClass (right); a student uses iClass on a tablet PC (left).

4. REAL WORLD SCENARIOS

As a result of interviews with faculty from several areas, including Chemistry, Photography, Architecture and Medicine, with respect to using iClass in their courses, and from the observation of documents generated in classes such as Chemistry Lab, UML Design and Research Seminars, we identified scenarios in which the production of documents containing representative snapshots of the whole interaction is demanded. We present some of these scenarios next – in this section, all course information, students' notes and instructors' demands are real.

4.1 Chemistry Lab

Students in a Chemistry course have used iClass running on tablet PCs to make their notes during an experiment; samples of the students' notes are presented in Figure 3. The slides correspond to the same portion of the experiment, and the figure illustrates how different the students' notes for the same experiment can be.

Examining these images, the instructors were interested in reviewing in detail the steps taken by the students while writing, which can be achieved with the iPlayer applet illustrated in Figure 2 (right). The instructors' main aim is to be able to understand the students' rationale while carrying out the lab experiment.

4.2 UML Design

Groups of students in a UML Design course have used iClass running on tablet PCs to carry out an in-class exercise. A static document containing the final images for one group is presented in Figure 4. In an interview, the instructor reported that it would be important to have intermediate views of the slides, so that she could obtain a broad view of the order in which each element of the UML diagram was produced by the students. With this information for all groups of students, the instructor reported she could make an important evaluation of the students' learning.

4.3 Collaborative work and filters

Collaborative applications have been built to support many users sharing a common writing surface such as large walls – examples include Collaborage [22] and the Interactive Mural [14]. Because iClass users have demanded its use by remote users so as to allow synchronous distance collaboration – such a scenario is quite common in videoconference tools providing a shared whiteboard surface (an example is Microsoft NetMeeting) – a prototype has been developed with the iClass infrastructure [5].

These scenarios illustrate that a single surface can be written on with electronic ink by many users, who may or may not be remote. In this context, an important feature is being able to generate reviewing documents filtered by the ink strokes generated by a specific author. Other relevant filters include generating documents filtered by the type of ink strokes (e.g. circles) and by surface region (e.g. all strokes made on some top portion of a surface).
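A minimal sketch of such filters, assuming each stroke is stored as a dictionary carrying author, shape and bounding-box fields (the field names are illustrative assumptions, not the actual iClass data model):

    def by_author(strokes, author):
        # Keep only strokes drawn by a given user on a shared surface.
        return [s for s in strokes if s["author"] == author]

    def by_shape(strokes, shape):
        # Keep only strokes recognized as a given type, e.g. "circle".
        return [s for s in strokes if s.get("shape") == shape]

    def by_region(strokes, x1, y1, x2, y2):
        # Keep only strokes whose bounding box lies inside a surface region.
        return [s for s in strokes
                if x1 <= s["xmin"] and s["xmax"] <= x2
                and y1 <= s["ymin"] and s["ymax"] <= y2]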

5. INKTERACTORS: INK OPERATORS

We have defined operators that, applied to documents containing ink strokes, produce intermediary documents associated with the overall interaction.

5.1 Definitions

• qs: quantity of strokes produced or deleted on the slide since the last intermediate state of the document was considered.

• qt: amount of time elapsed since the slide was last changed.

• mode: specifies the type of interaction with the slide during the creation process (stroke production, stroke deletion, typing, object insertion, object deletion).

• P: the set of the initial and final points of all totalstrokes strokes on the slide, each stroke being defined by the coordinates of its initial and final points (xi, yi), (xf, yf). Hence P = {(xi, yi)1, (xf, yf)1, (xi, yi)2, (xf, yf)2, ..., (xi, yi)totalstrokes, (xf, yf)totalstrokes}.

• t1, t2: limits of the time period being considered in the construction of the intermediary document.

Figure 3: Students in a Chemistry lab session have written down their notes with iClass running on a Tablet PC: final static images with all ink strokes on top of a prepared background.
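To make these definitions concrete, the following Python sketch models the per-slide state the operators rely on; the class and field names are illustrative assumptions, not the actual iClass implementation:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Point = Tuple[float, float]

    @dataclass
    class Stroke:
        start_time: float        # moment the pen touched the surface
        end_time: float          # moment the pen was lifted
        points: List[Point]      # sampled (x, y) coordinates; P collects the first and last points
        color: str = "black"     # stroke attributes: line color, width, type
        width: float = 1.0
        mode: str = "stroke_production"  # or stroke_deletion, typing, object_insertion, object_deletion

    @dataclass
    class SlideState:
        strokes: List[Stroke] = field(default_factory=list)
        qs: int = 0              # strokes produced/deleted since the last intermediate state
        last_change: float = 0.0 # used to derive qt, the idle time since the last change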

5.2 Inkteractors

• TimeSlice(Δt, t1, t2): This is likely to be the most direct way to specify intermediate states of a document. The operation demands the definition of a constant time increment Δt, which is used to generate several intermediate views of the document over a specified period t1 to t2.
Effect: considers as intermediate states of the document its states at ti, with t1 ≤ ti ≤ t2 ∧ (ti − t1) mod Δt = 0.

• StrokeQtty(nstrk, t1, t2): Takes intermediate states of the document every time its size is incremented by nstrk strokes.
Effect: considers as intermediate states of the document its states at ti, with t1 ≤ ti ≤ t2 ∧ qs = nstrk.

• StrokeQtty(nchar, t1, t2): Takes intermediate states of the document every time its size is incremented by nchar characters.
Effect: considers as intermediate states of the document its states at ti, with t1 ≤ ti ≤ t2 ∧ qs = nchar.

• StrokeQtty(npixel, t1, t2): Takes intermediate states of the document every time its size is incremented by npixel pixels.
Effect: considers as intermediate states of the document its states at ti, with t1 ≤ ti ≤ t2 ∧ qs = npixel.

Figure 4: A document presenting all the slides produced by a group of students in a UML Design class.

• IdleTime(Δt, t1, t2): Takes intermediate states of the document at the moments immediately before idle periods greater than or equal to Δt. An idle period is defined as the period of time since the slide was last changed.
Effect: considers as intermediate states of the document its states at ti, with t1 ≤ ti ≤ t2 ∧ qt ≥ Δt.

• ChangeMode(t1, t2): Takes intermediate states of the document at the moments immediately before mode changes. Consider M = (m1, m2, ..., mn) the set of possible modes of the building tool – stroke production, stroke deletion, typing, object insertion, object deletion – mc the current mode and mi the newly selected mode.
Effect: if mi ≠ mc { consider the state of the document until this mode change; mc := mi }

• ChangeAtrbts(fi, t1, t2): Takes intermediate states of the document at the moments immediately before changes in stroke attributes. Consider A = (a1, a2, ..., an) the set of possible kinds of facilities for stroke production – line color, line width, line type – aic the current value selected for the facility ai and ain the new value selected for ai.
Effect: if aic ≠ ain { consider the state of the document until this change of value for ai; aic := ain }

• ChangeArea(GA, t1, t2): Takes intermediate states of the document at the moments immediately before changes in the subarea where the interactions are taking place. Let GA = {(x1, y1, x2, y2)1, (x1, y1, x2, y2)2, ..., (x1, y1, x2, y2)n} be a set of disjoint regular areas defined over the slide, where x1 < x2 and y1 < y2. Let (x1, y1, x2, y2)c be the current area, i.e., the area that includes the coordinates of the point where the last action over the slide took place, and let (x, y) be the reference coordinate of the present action over the slide.
Effect: if (x < x1c ∨ x > x2c ∨ y < y1c ∨ y > y2c) { consider the state of the document until this change of interaction area; change the current-area variable to the area that includes (x, y) }

• Gaps(Δt, space, t1, t2): Takes intermediate states of the document at the moments before a stroke is produced at a distance greater than space from the strokes produced immediately before (since the last intermediate state was considered). The aim here is to group clusters of close strokes which may, possibly, have some associated semantics (they may compose a word, for example).
Effect: Let CP (Cluster Pixels) be the set of pixels of the current cluster (all the points considered since the last intermediate state was considered). Let SP (Stroke Pixels) be the set of spixels points that define the new stroke, SP = {(x1, y1), (x2, y2), ..., (xspixels, yspixels)}, and let space be the minimum distance between two clusters. The new stroke will be part of the current cluster if it is close to any of its points (distance no greater than space); otherwise it will be the first stroke of a new cluster and the previous cluster will be finished.

Figure 5: Sample document resulting from operation TimeSlice.
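As a rough illustration of how the simpler operators could be computed from a log of timestamped strokes, the sketch below implements TimeSlice, StrokeQtty (stroke variant) and IdleTime in Python; the Stroke record and function names are assumptions for illustration, not the iClass code:

    from dataclasses import dataclass

    @dataclass
    class Stroke:            # minimal record, as sketched in Section 5.1
        start_time: float
        end_time: float

    def time_slice(strokes, dt, t1, t2):
        # TimeSlice: snapshot times at every dt within [t1, t2].
        times, t = [], t1 + dt
        while t <= t2:
            times.append(t)
            t += dt
        return times

    def stroke_qtty(strokes, nstrk, t1, t2):
        # StrokeQtty: a snapshot every time nstrk further strokes are completed.
        times, count = [], 0
        for s in sorted(strokes, key=lambda s: s.end_time):
            if t1 <= s.end_time <= t2:
                count += 1
                if count == nstrk:
                    times.append(s.end_time)
                    count = 0
        return times

    def idle_time(strokes, min_gap, t1, t2):
        # IdleTime: snapshots immediately before idle periods of at least min_gap.
        inside = sorted((s for s in strokes if t1 <= s.end_time <= t2),
                        key=lambda s: s.end_time)
        return [prev.end_time
                for prev, nxt in zip(inside, inside[1:])
                if nxt.start_time - prev.end_time >= min_gap]

    def snapshot(strokes, t):
        # The intermediate state at time t: all strokes completed up to t,
        # which can then be rendered as one static image of the slide.
        return [s for s in strokes if s.end_time <= t]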

6. EXAMPLES

We have implemented the inkteractor operators in our iClass infrastructure, which allowed us to apply the operations to real information from captured lectures. Using the captured information from a graduate seminar, we illustrate four operators as follows. Figure 5 presents a snapshot document produced with slides generated at 60-second intervals. Figure 6 presents a snapshot document produced with images created when user idle intervals were at least 15 seconds. Figure 7 presents a snapshot document with slides created every time the user produced a total of 30 strokes (the last slide has the remaining strokes). These three examples illustrate that, although IdleTime is likely to be more useful than fixed time intervals or a fixed number of strokes, in many situations the reviewer may be able to identify interesting transitions with any of the operations provided. Finally, the slides in the snapshot document in Figure 8 were produced every time the user changed the ink color. This example illustrates that the corresponding operator may be quite useful in situations where the use of color has some particular meaning.

Figure 9 illustrates how the operations can be combined to produce other interesting results. First, the TimeSlice operator was computed to identify the amount of strokes created at 5-second intervals. Next, the IdleTime operator was computed to identify time intervals where the user was idle for a minimum of 25 seconds. As a result, four images were produced automatically, and can be used to generate a snapshot document representing the moments of major user interaction with the pen-based device.

The figures presented in this section illustrate that the operators we have defined can be used to produce snapshot documents representative of different types of user activity. On the one hand, the operations model the user interaction while writing down using electronic ink. On the other hand, the variety of operations allows reviewers to produce several different snapshot documents of the same original capture session upon demand – this is important since there are many situations in which the reviewer may be interested, and the availability of such options allows the reviewer to investigate which alternatives are best.

Figure 6: Sample document resulting from operation IdleTime.

Figure 8: Sample document resulting from operation ChangeAtrbts(color).

Figure 9: Bottom: illustration of the computation of stroke activity in 5-second intervals. Top: idle intervals are identified automatically; the corresponding images are properly identified.
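A hypothetical usage fragment of this kind of combination, reusing the operator functions sketched in Section 5.2 (here strokes is the list of Stroke records for the session and session_end its final timestamp, both assumed to come from the capture log):

    # Stroke activity per 5-second window (as in the bottom part of Figure 9).
    windows = time_slice(strokes, dt=5, t1=0, t2=session_end)
    activity = [len(stroke_qtty(strokes, 1, w - 5, w)) for w in windows]

    # Moments of major interaction: snapshots just before idle gaps of >= 25 s.
    snapshot_times = idle_time(strokes, min_gap=25, t1=0, t2=session_end)
    images = [snapshot(strokes, t) for t in snapshot_times]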

Figure 7: Sample document resulting from operation StrokeQtty.

7. FINAL REMARKS

Pen-based interaction allows users to register information informally using a variety of devices such as PDAs, Tablet PCs or electronic whiteboards. The resulting information is usually made available for review in its original application or by means of printed versions. We have reported results from our investigation with respect to generating intermediary documents relative to information captured with pen-based devices. We have defined operations that consider typical pen-based user interaction, such as the amount of time users have spent drawing, the amount of idle time, the number of ink strokes used, or attributes associated with the strokes, such as color and width. We have implemented the operations in the context of an infrastructure that allows the capture of electronic ink in live sessions and the automatic generation of corresponding documents in an XML publishing framework, and we have illustrated the use of the operations by presenting corresponding documents.

Although we have defined other operators, including filters, our current efforts are geared towards providing an infrastructure that allows the application of the operations in broader scenarios than those presented in this paper. One example is the support of synchronous collaborative annotations, which demands identifying which user was the author of each stroke – as in the scenario where a remote instructor is able to provide feedback to several students sharing their writing surfaces at the same time in synchronous mode [12]. Another example is the reuse of annotations, which demands the association of versions to strokes and slides. We have already investigated the integrated use of PDAs, tablet PCs and electronic whiteboards – and operations that allow the generation of snapshot documents combining images from those distinct sources may lead to interesting documents.

One important result from a recent use of our approach is the need for a top-down alternative to analyze a large amount of captured ink before going into the details of the operations reported in this paper. One instructor willing to analyze his 30 students' notes from 10 lectures asked us to provide an overview of the information so as to be able to identify the points to be reviewed in detail with the ink operators. In the short term, work should give attention to providing alternative visualizations of the results, so that reviewers can carry out the analysis of many documents, from many users, captured in a variety of settings, in a structured and systematic way. A complementary, opposing alternative would be to take the unifying approach of having a single image represent the contents of many slides, as has been done with video [29].

Regarding future work, we plan to compare the results of our proposed approach with those obtained by using techniques from the information retrieval literature as well as from the sound, speech and video segmentation literature. Our first efforts in this direction include using neural networks to identify meaningful ink clusters. Other efforts should investigate approaches such as those used to process speech and document contents (e.g. [21]), speech and video contents (e.g. [17]), and to classify audio (e.g. [19]) and speech (e.g. [4]). We have been collaborating with faculty from several areas – this collaboration has been rich in providing us with inspiration for defining new services and operations, in particular in terms of pen-based interaction.

Acknowledgments. We acknowledge invaluable support from HP, FAPESP and CNPq. We thank the instructors involved in the use of iClass, in particular for their inspiring interviews. We also thank the reviewers for their most relevant comments.

8. REFERENCES

[1] G. Abowd. Classroom 2000: An experiment with the instrumentation of a living educational environment. IBM Systems Journal, 38(4):508–530, 1999.
[2] G. Abowd, E. D. Mynatt, and T. Rodden. The human experience. IEEE Pervasive Computing, 1(1):48–57, 2002.
[3] R. Anderson, C. Hoyer, C. Prince, J. Su, F. Videon, and S. Wolfman. Speech, ink, and slides: the interaction of content channels. In Proc. ACM Multimedia'04, pages 796–803, 2004.
[4] B. Arons. SpeechSkimmer: a system for interactively skimming recorded speech. ACM Trans. Comput.-Hum. Interact., 4(1), 1997.
[5] L. Baldochi, R. Cattelan, M. Pimentel, and K. Truong. Automatic generation of capture and access applications. In Proceedings of the 8th Brazilian Symposium on Multimedia and Hypermedia Systems, pages 100–115, 2002.
[6] J. A. Brotherton and G. D. Abowd. Lessons learned from eClass: Assessing automated capture and access in the classroom. ACM Trans. Comput.-Hum. Interact., 11(2):121–155, 2004.
[7] R. Cattelan, L. Baldochi, and M. Pimentel. Processing and storage middleware support for capture and access applications. In Companion Proceedings of the 2003 ACM/IFIP/USENIX International Middleware Conference, page 315, 2003.
[8] P. Chiu, J. Boreczky, A. Girgensohn, and D. Kimber. LiteMinutes: an Internet-based system for multimedia meeting minutes. In Proceedings of the 2001 International World Wide Web Conference, pages 140–149, 2001.
[9] P. Chiu, A. Kapuskar, S. Reitmeier, and L. Wilcox. NoteLook: taking notes in meetings with digital video and ink. In Proc. ACM Multimedia'99, pages 149–158, 1999.
[10] W.-T. Chu and H.-Y. Chen. Toward better retrieval and presentation by exploring cross-media correlations. Multimedia Syst., 10(3):183–198, 2005.
[11] R. C. Davis, J. A. Landay, V. Chen, J. Huang, R. B. Lee, F. C. Li, J. Lin, C. B. Morrey III, B. Schleimer, M. N. Price, and B. N. Schilit. NotePals: lightweight note sharing by the group, for the group. In Proc. SIGCHI, pages 338–345, 1999.
[12] J. P. Farah, P. Brotero, J. H. G. Borges, and M. G. C. Pimentel. Chemistry laboratory with electronic notebooks. In II TIDIA Workshop Fapesp – http://tidia-ae.incubadora.fapesp.br/portal/documents/external documents/IIWTF/Posters/posterQuimica.pdf, 2005.
[13] G. Friedland, L. Knipping, and E. Tapia. Web based lectures produced by AI supported classroom teaching. International Journal on Artificial Intelligence Tools, 13(2):367–382, 2004.
[14] F. Guimbretière, M. Stone, and T. Winograd. Fluid interaction with high-resolution wall-size displays. In Proc. UIST'01, 2001.
[15] J. I. Hong and J. A. Landay. SATIN: a toolkit for informal ink-based applications. In Proc. UIST'00, pages 63–72, 2000.
[16] M. Kam, J. Wang, A. Iles, E. Tse, J. Chiu, D. Glaser, O. Tarshish, and J. Canny. Livenotes: a system for cooperative and augmented note-taking in lectures. In CHI '05: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 531–540, 2005.
[17] D.-S. Lee, B. Erol, J. Graham, J. J. Hull, and N. Murata. Portable meeting recorder. In Proc. ACM Multimedia'02, pages 493–502, 2002.
[18] J. Lienhard and T. Lauer. Multi-layer recording as a new concept of combining lecture recording and students' handwritten notes. In Proc. ACM Multimedia'02, pages 335–338, 2002.
[19] L. Lu, H. Jiang, and H. Zhang. A robust audio classification and segmentation method. In Proc. ACM Multimedia'01, pages 203–211, 2001.
[20] J. Mankoff, A. Dey, G. Hsieh, J. Kientz, S. Lederer, and M. Ames. Heuristic evaluation of ambient displays. In Proc. CHI'03, pages 169–176, 2003.
[21] D. Mekhaldi, D. Lalanne, and R. Ingold. Using bi-modal alignment and clustering techniques for documents and speech thematic segmentations. In CIKM '04: Proceedings of the Thirteenth ACM Conference on Information and Knowledge Management, pages 69–77, 2004.
[22] T. P. Moran, E. Saund, W. V. Melle, A. U. Gujar, K. P. Fishkin, and B. L. Harrison. Design and technology for Collaborage: collaborative collages of information on physical walls. In Proc. UIST'99, 1999.
[23] E. Pedersen, K. McCall, T. Moran, and F. Halasz. Tivoli: An electronic whiteboard for informal workgroup meetings. In Proceedings of the 1993 ACM INTERCHI Conference on Human Factors in Computing Systems, pages 391–398, 1993.
[24] M. Pimentel, L. Baldochi, and R. Cattelan. Prototyping applications to document the human experience. Submitted to IEEE Pervasive.
[25] M. Pimentel, Y. Ishiguro, B. Kerimbaev, G. Abowd, and M. Guzdial. Supporting educational activities through dynamic web interfaces. Interacting with Computers, 13(3):353–374, 2001.
[26] M. N. Price, B. N. Schilit, and G. Golovchinsky. XLibris: the active reading machine. In CHI 98 Conference Summary on Human Factors in Computing Systems, pages 22–23. ACM Press, 1998.
[27] H. Richter, G. Abowd, W. Geyer, L. Fuchs, S. Daijavad, and S. Poltrock. Integrating meeting capture within a collaborative team environment. In Proceedings of the 3rd International Conference on Ubiquitous Computing, pages 123–138, 2001.
[28] Y. Shi, W. Xie, G. Xu, R. Shi, E. Chen, Y. Mao, and F. Liu. The Smart Classroom: Merging technologies for seamless tele-education. IEEE Pervasive Computing, 2(2):47–55, 2003.
[29] L. Teodosio and W. Bender. Salient stills. ACM Trans. Multimedia Comput. Commun. Appl., 1(1), 2005.
[30] M. Weiser. The computer for the 21st century. Scientific American, 265(3):94–104, 1991.
[31] M. Wilkerson, W. G. Griswold, and B. Simon. Ubiquitous Presenter: increasing student access and control in a digital lecturing environment. In SIGCSE '05: Proceedings of the 36th SIGCSE Technical Symposium on Computer Science Education, pages 116–120, 2005.