Tabletop Concept Mapping

Stefan Oppl
Department of Business Information Systems - Communications Engineering
Kepler University of Linz
Freistaedterstrasse 315, 4040 Linz, Austria
[email protected]

Christian Stary
Department of Business Information Systems - Communications Engineering
Kepler University of Linz
Freistaedterstrasse 315, 4040 Linz, Austria
[email protected]

ABSTRACT
Concept mapping is designed to externalize and represent knowledge. Together with their visual presentation, concept maps should support focused and sustainable interaction between students and coaches or among members of organizations. Hence, corresponding tool support has not only to empower persons externalizing their mental models but also to enable transparent multi-party interaction based on context-sensitive (re)presentations. We introduce the Tabletop Concept Mapping (TCM) technique and tool, which is designed to meet these requirements. Providing an open space to express individual thoughts and ideas, it maximizes openness with respect to the pragmatics, semantics, and syntax of modeling, and minimizes intervention through feature-inherent properties of the artifact.

Keywords
Concept mapping, tangible interface, tabletop, modeling, semiotics.

INTRODUCTION
Concept mapping is a technique for eliciting and representing knowledge in network structures [18][19]. Concept maps contain mutually related concepts, i.e., mental constructs (for an example, see Figure 1). Concept mapping can either be applied in structured domains, such as mathematics, allowing for individually arranging domain content [2], or for generating meaningful representations from scratch according to individual mental models [4]. While in the first setting the focus of concept mapping lies on the arrangement of previously known elements, the latter requires an open space to identify, name, and arrange meaningful content. At the centre of interest of this technique is the ability to learn about concepts, either stemming from existing cognitive structures or acquired as novel ones. As such, concept mapping has turned out to be useful to generate ideas, to design a structure (such as the organization of work), to communicate ideas, and to aid learning by explicitly integrating new and old knowledge (structures). By communicating concept maps, understanding can be facilitated through avoiding misunderstandings (cf. [1]). In the course of concept mapping, constructs are arranged according to an issue of interest, e.g., describing one's own workplace. They are named and mutually related, setting up and naming all relationships considered to be relevant. In this way, a contextual representation is established.

Figure 1: Excerpt of a concept map on concept mapping (taken from cmap.ihmc.us)

Such cognitive engagements require personal and epistemological connections [22]. Both can be established through constructionist elements [20], such as graspable and tangible elements [26]. The introduced Tabletop Concept Mapping (TCM) technique and tool enables setting up a holistic knowledge space using tangible semantics and syntax. It explicitly targets the semiotic perspective of knowledge externalization and exchange. In this way, not only individual insights but also mutual understanding of stakeholders, learners, and coaches should be fostered. In the following section, we first present the work related to structural arrangements when using concept maps. Then, we detail the design approach taken for TCM, and its implementation. Finally, we review first evaluation data guiding our future work.

RELATED WORK
Researchers have proposed several approaches supporting the creation of concept maps. These approaches mainly differ in form and focus, as they refer to various features and application scenarios.
Computer-based Modeling
The initial developers of the concept-mapping technique [19] suggest the use of computer-based tools when creating concept maps. The main reason they give refers to the capability to revise or expand the maps whenever necessary. Such an endeavor is difficult when not using software to create models. In addition, software tools enable a selective focus on parts of the map (e.g., by zooming or hiding other parts), which may reduce the cognitive load on users. Finally, the persistence of maps can easily be ensured when using file repositories to save and retrieve concept maps. Concept mapping is already supported by a range of elaborated software tools (e.g., [3]). Some of them not only support the modeling process but also assess the quality of the map based on metrics derived from graph theory [24]. Other approaches offer a tight coupling to the computer desktop environment and consequently enable direct links to digital resources [3].

Structure-Elaboration Techniques
Structure-elaboration techniques are an effective means to create physical representations of mental models [5]. In a moderated process (the dialogue-hermeneutic method), the participants create a graphical representation of their mental models by placing labeled cards on a modeling surface. Subsequently, they relate them using associations. Dann [5] stresses the importance of the immediacy of representation in the structuring process. This immediacy is attained by the physical creation of the model. Participants immediately refer to a physical representation rather than to abstract items. They create and alter the model in a dialogue-based way until reaching consensus about what is represented. Mental models of individuals are externalized, questioned, and can be modified at the same time. The procedure ends once all participants feel comfortable with the result. Structure-elaboration techniques are highly sophisticated approaches with respect to the specification of both the methodology and the instruments to be used. Their suitability for the externalization of mental models has already been evaluated empirically [9][11]. Some researchers [5] suggest that structure-elaboration techniques should always be adapted to the case at hand, e.g., in terms of prescribed modeling elements or methodology. Presumably, such an adaptation could be necessary when used for concept mapping.

Integrating Computer Support & Structure Elaboration
Both structure-elaboration techniques and computer-based concept mapping have strengths and weaknesses. They provide effective features for different aspects of user support and might complement each other. Structure-elaboration techniques are explicitly designed to support cooperative settings. The immediateness of representation supports the creation of a shared understanding among individuals even for complex concept-mapping tasks. The cooperative manipulation of models raises mutual awareness of key issues. It also helps identify potential areas of conflict when surveying individual mental models. Thus, a physical, tangible approach as used for structure elaboration seems to be appropriate to aid cooperative concept mapping. We have to be aware of the fact that concept mapping tends to lead to complex structures, in particular in collaborative settings. In those cases the mental models of several individuals have to be integrated into a single representation. Such an articulation process is not strictly linear. It is rather an iterative procedure, characterized by continuous switching between the creation (identification, naming), reflection, and modification of representations. In such situations of high complexity it is hard for participants to keep an overview of what has been modeled so far and of the current focus of elaboration. It is thus indispensable to provide tool support addressing these issues. Computer-based approaches can help to trace the focus and status of structure elaboration by providing flexible and customizable visualizations. Additionally, support for experimental model changes can be provided by implementing an undo function. The result of a concept-mapping session should be a descriptive model of how the participating individuals structure and explain a perceived phenomenon of the world. Such models, however, cannot be taken for granted [23], as they undergo constant review and revision [10]. Consequently, a computer-based system should allow archiving initial contributions and revisions of representations, preserving information about the modeling process besides the actual models. In this way, the reflection and modification of models and their development can be facilitated, without additional memorization effort and regardless of editing times.
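As a rough illustration of such archiving support, a computer-based companion might keep an append-only log of model states so that every revision remains retrievable. The following Python sketch is our own illustration; all class and attribute names are assumptions, not taken from any of the cited tools:

```python
import time
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ModelState:
    """One captured state: concept labels mapped to surface positions."""
    concepts: tuple                  # e.g. (("workplace", (120, 80)), ...)
    captured_at: float = field(default_factory=time.time)


class RevisionArchive:
    """Append-only log: revisions are never overwritten, only added."""

    def __init__(self):
        self._log = []

    def capture(self, concepts):
        """Store the current set of concepts as a new, immutable revision."""
        self._log.append(ModelState(tuple(concepts)))

    def state_at(self, index):
        """Retrieve any earlier revision for reflection or comparison."""
        return self._log[index]

    def __len__(self):
        return len(self._log)


# Two successive revisions of a small model remain individually retrievable.
archive = RevisionArchive()
archive.capture([("workplace", (120, 80))])
archive.capture([("workplace", (120, 80)), ("colleague", (300, 150))])
assert len(archive) == 2
assert archive.state_at(0).concepts == (("workplace", (120, 80)),)
```

Keeping revisions immutable, as sketched here, is one way to preserve the modeling process itself alongside the resulting models.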
For tool development, again the features ensuring persistent model states can be adapted from the computer-based concept-mapping approach.

TCM DEVELOPMENT INPUTS
In this section we map the envisioned concept-mapping support features to technical system requirements and discuss relevant inputs from existing tabletop interfaces.

Requirements
Based on the previous method analysis, a support tool has to enable the physical representation of concept maps. It should also support the collaborative manipulation of these maps. In order to support the specification of concepts and associations according to individual perception, the model-element semantics must not be prescribed, as it is going to be specified by the user in the course of modeling [8]. The modeling process also has to be supported with respect to undoing experimental changes. For putting models into the context of real-world application, linking to external digital resources should be possible. In order to avoid cognitive overload of users during the mapping process, approaches reducing model complexity have to be evaluated and considered for integration. Means to ensure model persistence have to be implemented in order to make models serve as reference points and support reflection at a later point in time. As this requirement concerns the physical representation of models, we have to develop some reconstruction support for already stored models. Finally, we have to take into account that physical structure-elaboration techniques are closer to the cognitive and emotional modeling experience than virtual ones. As the processing of models can be supported by information technology in an effective way, we can utilize this capability for the modeling process itself. The process documentation has to be effective in both worlds, the virtual and the physical one. Working on a physical surface establishes a close semantic and pragmatic relationship to modeling from the perspective of the modeler(s). This relationship is loosened in virtual settings, e.g., after storing a version of a physical model in the repository of the tool. In order to regain this experience, modelers should be able to step back into the physical appearance of the model (based on historical snapshots stored in the repository of the tool). Hence, our tabletop implementation has to exploit the potential of physical and software-based representations in an intertwined way, based on context-sensitive switching between table and desktop.

Tabletop Interfaces
To the best of our knowledge, tangible interfaces to concept mapping have not been reported so far, although some approaches to represent and manipulate models in different application contexts exist. The systems reviewed here provide conceptual inputs and technical potentials to meet some of the requirements addressed above. One of the first tabletop interfaces that has been used for modeling support is the Sensetable of the MIT Media Lab [21]. Its scenarios of application mainly target the intuitive visualization and manipulation of complex system models, such as business process representations (in the case of the demonstrator "Tangible Business Process Analyzer"). The system has not been designed for knowledge-representation applications, such as concept mapping, but rather to support tangible experimentation given a set of data. In the demonstrator mentioned above, users can perform business process simulations with tangibly adjusted parameters. Technically, the system is based on electromagnetic tracking using sensing tablets. This approach requires active tokens and cannot be used in combination with a projection of information from underneath. This type of projection, however, overcomes the shortcomings of vision-based tracking, such as the dependence on environmental lighting. Zuckerman et al. pioneered tangible modeling interfaces in educational contexts [26]. Their "digital Montessori-inspired Manipulatives" (digital MiMs) enable children to explore (abstract) concepts behind physical phenomena, e.g., in feedback-control systems. Children manipulate, configure, and connect tokens with predefined semantics (sources, valves, storage, etc.), and may assess their models by running simulations. The results of simulations are displayed directly on the physical tokens, as no projection or tracking technology is involved. Conceptually, the application area of digital MiMs appears to be more context-dependent than concept mapping. However, the authors provide generic design guidelines for creating physical tokens that enable users to deal with abstract modeling tasks and encourage experimentation. We took these guidelines into account when designing the TCM tokens. The "Designer's Outpost" [17] is a spatially distributed tangible interface for collaborative website design. From a methodological perspective, the planning of a website can be considered a concept-mapping task. The authors describe how capturing the design history [16] of a model can facilitate the participants' understanding of the structure of a model and its evolution. In addition, this feature enables and encourages the users to try out and save different variants of how to map a certain structure to representational items. The reacTable [14] is a tabletop interface designed for collaborative music performances. Conceptually, this research does not address issues relevant for concept-mapping support, whereas the technological approach does. Its system for identifying and tracking tokens is a promising candidate for implementing an efficient and effective tabletop infrastructure. The computer-vision-based reacTIVision framework [15] provides real-time recognition capabilities. It is an enabler for all features that require data exchange between the physical representation of a model and its digital counterpart.
The reacTIVision framework has already been tested successfully by educators and researchers in art history [6]. For the TCM implementation it provides the basis for model persistence, collaboration, and reconstruction support.

TCM TOOL DEVELOPMENT
In this section we detail the actual design and implementation of the Tabletop Concept Mapping tool.

Design
The use of a table is grounded in the traditional means used for structure elaboration, namely the surface of a table. Figure 2 shows the tangible combination of physical and software-based modeling, leading to intertwined tabletop and desktop representations. The table surface is the main user interface. Users place physical tokens and associate them accordingly. The physical tokens act as carriers for concepts. Nearly all interactions between the users and the system occur on the surface to enable simultaneous manipulation of the model. Only "digital" use cases, like attaching digital resources to a token, have to be carried out using the keyboard or the mouse.
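To illustrate the carrier role of tokens, the following minimal sketch shows how tokens and their associations might be modeled. The class and field names are our own illustrations in Python; they are not drawn from the tool's actual (Java-based) implementation:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Token:
    """A physical token recognized on the surface, acting as a concept carrier."""
    token_id: int                  # ID reported by the tracking system
    variant: str                   # shape/color variant; meaning assigned by users
    position: tuple                # (x, y) position on the table surface
    label: Optional[str] = None    # designator assigned during modeling
    resources: list = field(default_factory=list)  # attached digital resources


@dataclass
class Association:
    """A connection created between two tokens via marker plugs."""
    source_id: int
    target_id: int
    directed: bool = False
    label: Optional[str] = None


# A minimal model: two labeled tokens and one directed association.
t1 = Token(7, "rectangle", (100, 200), label="concept mapping")
t2 = Token(9, "ellipse", (340, 180), label="knowledge")
model = {"tokens": [t1, t2],
         "associations": [Association(7, 9, directed=True, label="externalizes")]}
assert model["associations"][0].source_id == t1.token_id
```

Keeping the digital model this close to the physical token layout is what allows the table and desktop views to stay synchronized.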
Figure 2: System Overview

Based upon the specified requirements and findings from existing related work, the following functional modules have been implemented:

Labeling & Associating
Concept mapping relies on the ability to assign names to concepts and to define associations between them. In traditional software-based concept-mapping tools, these interactions are performed using mouse and keyboard. Using traditional input devices in a physical modeling environment requires switching between media, which probably distracts users from their original modeling task. Accordingly, the interface has been designed to avoid input devices like mouse or keyboard. Several tools can be used to manipulate the model directly on the surface: Marker plugs are used when associating tokens with each other. Directed and undirected connections can be created using differently shaped variants of these plugs. A rubber token enables users to delete connections. Labeling (i.e., assigning designators to concepts) is performed either by placing sticky notes (post-its) on top of a token (cf. Figure 3) or, alternatively, by using the keyboard to enter a label. In the first case, a camera captures a picture of the sticky note. Image-processing algorithms then extract a clearly readable image of the written text.

Figure 3: Labeling & Associating

Abstraction Support
Features like zooming or the selective display of concepts allow reducing the complexity of visualizations. However, they are restricted to the desktop. In order to overcome this limitation and to reduce complexity also in physical models, the tokens also act as containers (cf. Figure 4). Users can open a token and put additional information into it. Additional information is bound to smaller tokens (cf. Figure 4). They represent either an arbitrary digital resource (file) or a model state captured previously. The latter information type enables users to generate parts of a concept map separately and to connect these parts on a higher level of abstraction at a later point in time. In this way, the common modeling concept of abstraction through overview and detail representations is mapped to the physical world.

The ratio between token size and the size of the table surface allows about 10-15 tokens to be placed on the physical surface simultaneously. For complex mapping tasks, this number of concepts might be too small. The container feature overcomes this deficiency, too.

Figure 4: Using Tokens as Containers

History & Reconstruction
The traceability of the modeling process is ensured in TCM through capturing the design history. This feature also facilitates the understanding of a representation [16], in particular for collaborative endeavors. In this case the design history enables participants to recapitulate and reflect on the modeling steps made so far, even when they join a session later on or have to continue working on a model generated by different individuals. The TCM tool captures the design history by taking snapshots by default. Whenever the model remains stable for more than five seconds, the system takes a snapshot of the current state. In addition, a dedicated token enables users to take snapshots on demand. It allows explicitly capturing and storing a certain model state using the backend system for later retrieval. The users can navigate back and forth in the modeling process using the stored information. The history mode (i.e., recalling former model states) can be activated using a round token. It can be rotated counterclockwise or clockwise to go back and forth in time, respectively. When the users switch to history mode, the computer screen displays the currently selected model state along with a status bar indicating the point in time when the state has been captured. Additionally, support for experimental changes of the concept map is considered crucial when encouraging the exploration of potential concepts and associations.
Experimental changes need to be undoable. Such a requirement is straightforward to implement for desktop applications but hard to accomplish in a physical modeling environment. We have developed a mechanism on top of the history mode. It supports the reconstruction of a previously selected model state.
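Conceptually, such a reconstruction mechanism can be sketched as a diff between two model states. The function and data shapes below are our own Python illustration (the actual tool is implemented in Java); token IDs and positions are purely exemplary:

```python
def reconstruction_steps(current, target):
    """Compare two model states (token_id -> position) and derive
    step-by-step instructions to turn `current` into `target`."""
    steps = []
    for token_id in current:
        if token_id not in target:
            # Token no longer present in the requested state.
            steps.append(("remove", token_id))
        elif current[token_id] != target[token_id]:
            # Token kept, but at a different position.
            steps.append(("move", token_id, target[token_id]))
    for token_id in target:
        if token_id not in current:
            # Token missing from the surface and has to be placed.
            steps.append(("add", token_id, target[token_id]))
    return steps


# Restoring an earlier snapshot: token 3 is removed, 2 is moved, 4 is added.
current = {1: (100, 100), 2: (200, 150), 3: (50, 60)}
target = {1: (100, 100), 2: (300, 150), 4: (120, 90)}
assert reconstruction_steps(current, target) == [
    ("move", 2, (300, 150)), ("remove", 3), ("add", 4, (120, 90))]
```

Presenting one instruction at a time, as the tool does, turns this diff into physical guidance on the surface.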
The reconstruction support feature is triggered with another control token. According to the differences between the current and the requested model state, the TCM tool guides users step-by-step as to which token to remove, move, and/or add in order to complete a reconstruction.

The modeling process is tracked by a computer vision system. The tracking system enables digital capturing of each model state in real time. The software we use is based on the reacTIVision framework [15]. It has been extended to support structure elaboration, including the compensation of temporary tracking errors due to fast token movements. Moreover, algorithms have been implemented to automatically reduce the effect of changing light conditions. They recognize and filter identification errors caused by external spotlights or image noise introduced by excessive camera gain.

User-defined semantics
Concept mapping has to allow arbitrary items and associations. Although TCM provides various types of tokens for modeling, their final meaning has to be assigned by the modeler(s). Currently, three different types of modeling tokens are provided. In case additional concept classes are required, users may assign meaning to combinations of tokens or to a pattern of tokens.

The modeling tokens are about 10 x 6 cm in base size and 4 cm in height. They are available in three different variants differing in both shape and color (cf. Figure 2).

The various categories of tokens enable the flexible specification of concept classes and the flexible assignment of meaning in the course of concept mapping. In terms of model persistence, a flexible data format has to be used. It has to be capable of representing arbitrary concept structures. Topic Maps [12] are the standard in that respect, as they are a means of describing semantic networks and provide adequate representational power.

Implementation
We detail the implementation from the hardware, functional logic, user, and data management perspectives. The main TCM hardware components are a table and the physical tokens used to perform structure elaboration. The table's surface is 80 x 100 cm in size and is made of semi-transparent acrylic glass. It allows visual tracking of the physical modeling tokens from below. It also allows projecting supplementary and supportive information onto the surface during the modeling process. Figure 5 shows the main building blocks of the table in a conceptualized side view.

Figure 5: Hardware Setup

Projection is performed using a video beamer that is part of the bottom plate of the table. The bottom plate also holds the camera visually tracking the tokens. In order to avoid tracking errors caused by reflections of the projection, the camera operates in the near-infrared spectrum and filters visible light.

In its current state of implementation, the system is able to track an arbitrary number of modeling tokens on the table surface; the only restriction is the space available for modeling. The identification and manipulation of modeling tools is also handled by the reacTIVision extension. In this way, it couples the hardware components, both model and tools, to the features enabling the interaction with TCM.

Four IR-LED arrays located in the corners of the bottom plate ensure proper lighting. Diffusion plates have been integrated about 50 cm above the LED arrays for uniform illumination of the surface. The dimensions of the table have been chosen to fully exploit the available resolution of the camera and its potential field of vision. The use of a wide-angle lens reduces the distance required between the camera and the table surface.

The visualization of the model state on the computer screen is implemented in Java using the JHotDraw framework [7] (cf. Figure 6). The latter is especially suitable for the visualization of diagrammatic models, as it allows for the rapid development of graphical displays and editors. The JHotDraw framework is designed to enable the creation of highly reusable code based upon object-oriented design patterns.

A second camera next to the visualization screen acts as an additional input channel. It enables registering data to be used, e.g., for labeling concepts or specifying token semantics.
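The compensation of temporary tracking errors mentioned earlier can be approximated by a short grace period before a token is considered removed. This Python sketch is our own simplification; the frame threshold is illustrative, not the value used in TCM:

```python
GRACE_FRAMES = 5  # illustrative: frames a token may vanish before counting as removed


def stable_token_set(frames, grace=GRACE_FRAMES):
    """Filter per-frame detections: a token briefly lost (e.g. due to a fast
    movement) for at most `grace` consecutive frames is kept in the model."""
    missing = {}       # token_id -> consecutive frames missed
    present = set()    # tokens currently considered on the surface
    history = []       # stabilized token set per frame
    for detected in frames:
        for token_id in detected:
            present.add(token_id)
            missing[token_id] = 0
        for token_id in list(present - set(detected)):
            missing[token_id] = missing.get(token_id, 0) + 1
            if missing[token_id] > grace:
                present.discard(token_id)
        history.append(frozenset(present))
    return history


# Token 1 disappears for two frames (fast movement) but is not dropped.
frames = [{1, 2}, {2}, {2}, {1, 2}]
assert all(1 in s for s in stable_token_set(frames))
```

A filter of this kind trades a small reporting delay for stability of the digital model against transient recognition dropouts.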
Both the display on the computer screen and the projection on the modeling surface utilize an identical code base. They only differ in a few parameters configuring the actual appearance of elements. In the default mode of operation, the screen display shows the current state of the structure created on the modeling surface. Changes are reflected in real time. Users interact with the software components of the system through the second camera or the keyboard. The display of directions and/or dialog boxes on the computer screen facilitates the interaction.
Figure 6: Computer-based visualization

The visualizations on both the screen and the modeling surface reflect and augment the current state of the physical representation in the default mode. In more complex operations, such as history browsing or reconstruction support, they provide additional information and user support. Flexible data representation is considered a core issue for the successful deployment of the proposed approach. User-definable semantics of concept classes require various meta-models that can be modified or extended on an ad-hoc basis. Such a generic data representation also needs to allow for interfacing with other systems that either provide input to the externalization process or take the resulting model as a starting point for further elaboration. A flexible, yet standardized approach fulfilling these requirements is the use of ISO Topic Maps [12]. Topic Maps represent generic semantic networks and provide constructs for concepts, associations, and meta-model elements. Using Topic Maps for data representation, the dynamically expressed semantics of a model can be captured. At the same time, the representation is compatible with every tool that can process the standardized XML-based XTM format [13], such as the concept-mapping toolset CMapTools [3]. Although the TCM system works stably in its current state of implementation, the fluctuation of recognition quality depending on surrounding light conditions remains an issue and needs further investigation. Changing conditions imposed by direct daylight might lead to a significant decrease in the quality of token recognition. This is annoying for users, but not critical to task performance. The table can be completely disassembled for its in-situ use. It then fits the luggage compartment of an economy car. Setting up the table at a user's site takes about 20 minutes. This comprises the assembly of the hardware and the subsequent configuration of the software. The software components, including the recognition framework and the visualizer applications, operate well on a standard laptop with two processor cores and 2 GB of RAM. The entire ensemble has been subject to evaluation in several teaching settings to demonstrate its usefulness and utility.

EVALUATION
The evaluation of the TCM approach is performed in three phases. In a first exploratory study, potential technical and conceptual shortcomings have been identified. Furthermore, the utility and the usefulness of the tool set for modeling have been subject to evaluation. As this phase was completed in early 2008, its results have already been taken into account for improving TCM.
In the second phase (started in 2008), the effectiveness of the tool set for the cooperative externalization of mental models is examined in several university teaching settings. The third phase will deploy TCM in organizational settings in a field study and is expected to be completed in mid-2009. The results reported here have been collected from interviews and video analyses involving students as participants in phases 1 and 2. In total, 45 individual TCM sessions have been recorded. In the exploratory study, 18 students used the tool in a cooperative setting (2 persons each) without being given a specific mapping task ("externalize whatever seems to be of interest for both of you"). The participants were asked whether they consider the tool useful for developing a common understanding with respect to the selected mapping theme. Subsequently, 9 students tested the table in single-user sessions, again with a theme of their choice. This time, technical flaws of the tool were to be revealed. In the initial tests carried out in phase 2, 19 individuals were asked to align their understanding of how to write a scientific paper in a collaborative setting of 2-3 people. This task had to be accomplished in their actual studies in the winter term 08/09 as part of their assignments. This time the effectiveness for a given task has been of interest, in addition to the experienced usefulness. So far, the data can be clustered with respect to the representational semantics and the effects of handling tangible elements when externalizing meaningful content.

User-defined representational semantics
In the user studies, each of the individuals (except one) assigned meaning to each variant of the modeling tokens, such that each variant corresponded to a specific category of concepts. Most of the time these individually identified categories have been applied consistently throughout the modeling sessions. A few students modified or refined the meaning of a token variant in the course of modeling. However, these dynamic changes did not lead to incoherent representations; they rather created more detailed models that could be communicated more easily.
Three different categories of tokens turned out to be convenient in most of the sessions. In about 30%, merely two token variants have been used. In three modeling sessions, the participants asked for four or more variants. In those cases, prior knowledge and existing experience in modeling might have led to this kind of request, as those participants were already familiar with common (business process) modeling tools like ARIS [25]. Hence, aside from software engineers, project managers, and others experienced in modeling, people with no background in modeling might feel comfortable with an upper limit of three categories. In cooperative settings, the meaning of token variants has in most cases been negotiated before starting to model. In the course of modeling, categorical knowledge has either been renegotiated or refined whenever incoherencies or problems of understanding occurred. In four cases, the meaning has not been specified before starting to model, but negotiated on the fly.
Model size & structure
The size of the modeling surface allows up to 15 tokens to be used at the same time in a convenient way. Our data collected so far indicates that this number provides sufficient expressiveness to represent either entire concept maps or at least structurally closed sub-maps. The latter could be addressed using the container feature. However, in the majority of modeling sessions, the participants have been able to place their entire model on the surface within a single modeling frame on the tabletop. The models of two groups not fitting onto the modeling surface have been divided into sub-models. The container feature enabled them to mutually relate sub-models on a more abstract layer. Two further groups have used the container feature not for lack of space but for abstracting and restructuring.

Overall, users seem to be well supported by TCM in expressing their understanding when externalizing mental models. In many of the cases studied so far, determining the meaning of a token variant has been driven by collaboration and group consensus. In this way, TCM empowers modelers to align mental models on a meta-level (initially not focusing on concepts but on the categories applied for conceptualization).

Spatial expressions
Users have rarely used the possibility to associate concepts explicitly, using connectors. They have rather expressed relationships through the specific placement of tokens. Closely related concepts have been placed next to each other. In some cases, the patterns of arranging tokens corresponded to a certain meaning. Causal or temporal dependencies have often been expressed by arranging tokens in a single line either from top to bottom or from left to right of the table (cf. Figure 7, left). Hierarchical relationships have been consistently represented by placing superordinate concepts at the top and subordinate concepts rather towards the bottom of the surface (cf. Figure 7, right). Explicit associations have only been used either to stress a certain relationship or to show connections among concepts that were not "obvious" (cf. Figure 8). A bias towards concepts (neglecting associations) could be observed for physical modeling, whereas in software-based concept mapping, concepts and associations are considered to be of equal importance [19]. Apparently, users experienced in computer-based modeling apply relationships more often, even when "obvious". Users unfamiliar with modeling rarely insert connectors at all. Currently, the backend system does not explicitly represent the semantics of spatial relationships in its digital model. It is still an open issue how to gather information on the specific semantics of a token ensemble arranged in a certain pattern. Our future research will deal with capabilities allowing to retrieve this information automatically.
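One conceivable starting point, sketched here under our own assumptions and not an implemented TCM feature, is to infer candidate relationships from token proximity on the surface:

```python
import math


def proximity_pairs(positions, threshold=80.0):
    """Suggest candidate relationships between tokens placed closer to each
    other than `threshold` (in surface units; the value is illustrative)."""
    ids = sorted(positions)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            (ax, ay), (bx, by) = positions[a], positions[b]
            if math.hypot(ax - bx, ay - by) < threshold:
                pairs.append((a, b))
    return pairs


# Tokens 1 and 2 are placed close together; token 3 stands apart.
positions = {1: (100, 100), 2: (150, 120), 3: (400, 300)}
assert proximity_pairs(positions) == [(1, 2)]
```

Detected candidate pairs could then be offered to users for confirmation rather than silently written into the digital model.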
Figure 7: Process-oriented vs. hierarchically organized concept map
Figure 8: Spatially organized concept map
CONCLUSIONS
As concept mapping is designed to externalize and represent knowledge, human-centered tool support has to provide not only a set of elements for grasping information physically, but also a flexible diagrammatic (re)presentation scheme for focused and sustainable interaction. The proposed TCM technique and tool empower individuals to express their mental representations physically using tangible elements, even in distributed collaborative settings. Modeling remains a self-organized set of activities, while still being traceable and thus intelligible to third parties. Our further research will focus on minimizing technology-related interventions for the sake of innovation and creativity.
REFERENCES
1. Ausubel, D.P.: The Acquisition and Retention of Knowledge: A Cognitive View. Kluwer, Dordrecht (2000).
2. Brinkmann, A.: Graphical Knowledge Display – Mind Mapping and Concept Mapping as Efficient Tools in Mathematics Education. Mathematics Education Review, Vol. 16, pp. 39–48 (2003).
3. Cañas, A., Hill, G., Carff, R., Suri, N., Lott, J., Eskridge, T., Gómez, G., Arroyo, M., Carvajal, R.: CMapTools: A Knowledge Modeling and Sharing Environment. In: Proceedings of the 1st International Conference on Concept Mapping, Universidad Pública de Navarra, Pamplona, Spain (2004).
4. Coffey, J.W., Hoffman, R.R.: Knowledge Modeling for the Preservation of Institutional Memory. The Journal of Knowledge Management, Vol. 7, pp. 38–52 (2003).
5. Dann, H.-D.: Variation von Lege-Strukturen zur Wissensrepräsentation. In: Scheele, B. (ed.): Struktur-Lege-Verfahren als Dialog-Konsens-Methodik, Vol. 25 of Arbeiten zur sozialwissenschaftlichen Psychologie, Aschendorff, pp. 2–41 (1992).
6. Döring, T., Beckhaus, S.: The Card Box at Hand: Exploring the Potentials of a Paper-Based Tangible Interface for Education and Research in Art History. In: Proceedings of the 1st International Conference on Tangible and Embedded Interaction (TEI '07), ACM, New York, pp. 87–90 (2007).
7. Gamma, E., Eggenschwiler, T.: The JHotDraw Framework. Online: http://www.jhotdraw.org/ (1996).
8. Goguen, J.: On Notation. Revised version of a paper in TOOLS 10: Technology of Object-Oriented Languages and Systems (Prentice-Hall, 1993), Department of Computer Science and Engineering, University of California at San Diego (1993).
9. Groeben, N., Scheele, B.: Dialogue-Hermeneutic Method and the "Research Program Subjective Theories". Forum: Qualitative Social Research, Vol. 1, No. 2 (2000).
10. Herrmann, T., Hoffmann, M., Kunau, G., Loser, K.: Modelling Cooperative Work: Chances and Risks of Structuring. In: Cooperative Systems Design, A Challenge of the Mobility Age, Proceedings of COOP 2002, IOS Press, pp. 53–70 (2002).
11. Ifenthaler, D.: Diagnose lernabhängiger Veränderung mentaler Modelle. PhD thesis, University of Freiburg (2006).
12. ISO JTC1/SC34/WG3: Information Technology – Topic Maps – Part 2: Data Model. International Standard 13250-2, ISO/IEC (2006).
13. ISO JTC1/SC34/WG3: Information Technology – Topic Maps – Part 3: XML Syntax. International Standard, ISO (2006).
14. Kaltenbrunner, M., Jordà, S., Alonso, M., Geiger, G.: The reacTable*: A Collaborative Musical Instrument. In: Proceedings of WETICE '06, IEEE Press (2006).
15. Kaltenbrunner, M., Bencina, R.: reacTIVision: A Computer-Vision Framework for Table-Based Tangible Interaction. In: Proceedings of the 1st International Conference on Tangible and Embedded Interaction, ACM Press, New York, pp. 69–74 (2007).
16. Klemmer, S., Thomsen, M., Phelps-Goodman, E., Lee, R., Landay, J.: Where Do Web Sites Come From? Capturing and Interacting with Design History. In: Proceedings of CHI 2002, Human Factors in Computing Systems, CHI Letters, Vol. 4, No. 1 (2002).
17. Klemmer, S., Newman, M., Farrell, R., Bilezikjian, M., Landay, J.: The Designers' Outpost: A Tangible Interface for Collaborative Web Site Design. In: Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, ACM Press, New York, pp. 1–10 (2001).
18. Novak, J.D.: Learning, Creating, and Using Knowledge: Concept Maps as Facilitative Tools in Schools and Corporations. Lawrence Erlbaum, London (1998).
19. Novak, J.D., Cañas, A.J.: The Theory Underlying Concept Maps and How to Construct Them. Technical Report IHMC CmapTools 2006-01, Florida Institute for Human and Machine Cognition (2006).
20. Papert, S.: The Children's Machine. Basic Books, New York (1993).
21. Patten, J., Ishii, H., Hines, J., Pangaro, G.: Sensetable: A Wireless Object Tracking Platform for Tangible Interfaces. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '01) (2001).
22. Resnick, M., Bruckman, A., Martin, F.: Pianos, Not Stereos: Creating Computer Construction Kits. Interactions, Vol. 3, No. 8, pp. 41–50 (1996).
23. Robinson, M., Bannon, L.: Questioning Representations. In: Bannon, L., Robinson, M., Schmidt, K. (eds.): Proceedings of ECSCW '91, Amsterdam, pp. 219–233 (1991).
24. Ruiz-Primo, M., Shavelson, R.: Problems and Issues in the Use of Concept Maps in Science Assessment. Journal of Research in Science Teaching, Vol. 33, No. 6, pp. 569–600 (1996).
25. Scheer, A.-W.: ARIS – Business Process Modeling, 3rd ed. Springer (2003).
26. Zuckerman, O., Arida, S., Resnick, M.: Extending Tangible Interfaces for Education: Digital Montessori-Inspired Manipulatives. In: Proceedings of Human Factors in Computing Systems (CHI '05), ACM Press, New York, pp. 859–868 (2005).