A Framework for Collaborative Augmented Reality Applications

Tobias Brockmann, University of Muenster, [email protected]
Nina Krueger, University of Muenster, [email protected]
Stefan Stieglitz, University of Muenster, [email protected]
Immo Bohlsen, University of Muenster, [email protected]
ABSTRACT

In recent years, augmented reality (AR) applications have become more popular due to more user-friendly designs and technical improvements. AR applications are starting to play a growing role in communication and collaboration. Some researchers have classified and structured AR applications in different domains, but an overall classification scheme is still missing. In addition, new AR applications and features have emerged, so existing taxonomies no longer cover all relevant aspects. The aim of this article is to provide a classification framework for collaborative AR-applications based on a literature analysis of the latest research on collaboration and AR-applications as well as related disciplines. We present an approach to analyzing related taxonomies and, based on that, propose a new taxonomy which covers the relevant aspects of current AR-applications and consists of the dimensions space, time, mobility, virtual content, role concept, and visualization hardware. Finally, we compare our framework to existing taxonomies and discuss academic and practical implications.

Keywords

Augmented reality, information richness, taxonomy, collaborative, applications.

INTRODUCTION

Intra- as well as inter-organizational communication and collaboration have grown continuously in recent years, especially through the use of social media in all types of organizations (Berlecon Research, 2012; Stieglitz and Dang-Xuan, 2013; Krüger et al., 2012). Modern companies increasingly act in so-called 'value creation networks' and organize their value creation in the form of projects with interdisciplinary, inter-company teams which work in a fast, flexible, time- and location-independent way (Hantschel, 2009).
This requires the extensive use of collaboration and communication systems to support distributed teams adequately (Gelb et al., 2010; Stieglitz and Brockmann, 2012). Several gaps can be identified in classical collaboration solutions, such as the poor integration of real items in video conferences or the conveying of presence in dislocated interaction scenarios (Milgram and Kishino, 1994). One possible solution to these problems might be the adoption of collaboration systems based on virtual environments or augmented reality (AR) technologies (Stieglitz and Lattemann, 2011). This potential for increased interaction quality induced by the use of AR-applications is supported by the rational media selection theories 'Social Presence Theory' (SPT) and 'Media Richness Theory' (MRT). The central statement of SPT is that media differ in the extent of social presence they convey when used to support communication and collaboration (Short et al., 1976). In scenarios where high social presence is required for successful interaction, AR-technologies can provide additional value compared to classic groupware. The Media Richness Theory of Daft and Lengel (1986) is based on the assumption that there are different reasons for communication and that in certain scenarios ambiguous information exists. The reduction of this ambiguity cannot be attained by sending more data, but rather through the exchange of rich information (Daft and Lengel, 1986). The richness of a medium mainly depends on the possibility of immediate feedback, the number of simultaneously usable communication channels, the degree of personalization, and the diversity of language (Daft and Wiginton, 1979).

Proceedings of the Nineteenth Americas Conference on Information Systems, Chicago, Illinois, August 14-17, 2013.

With AR-applications, a new visual channel can be seamlessly integrated into face-to-face meetings with no negative effects on the possibility of immediate feedback, the degree of personalization, or language diversity. With regard to the central aspects of SPT and MRT, the use of AR-applications can hence generate an increased quality of interaction. Due to the fast technical development of new display technologies and image recognition processes, as well as the general increase in computational power of current systems, research on so-called collaborative AR-applications has intensified in recent years, resulting in a multitude of different prototypical applications (Carmigniani et al., 2011; Minatani et al., 2007; Mogilev et al., 2002; Poppe et al., 2012; Zhong et al., 2001). This multitude of applications needs to be structured. A first approach to delimiting AR-applications is given by Graham et al. (2012), who define augmented reality applications as live, direct or indirect views of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data (Graham et al., 2012). Current research results concerning collaborative AR-applications have not yet been analyzed in a broader context. Classification frameworks for groupware systems, such as the 3K model of Teufel et al. (1995), or augmented-reality-specific frameworks are not, or only to a certain degree, applicable to the classification of collaborative augmented reality applications. Renevier and Nigay (2001) as well as Wang and Dunston (2006) presented first attempts at classifying AR in this context. Their works only partially cover the characteristic properties of the collaborative AR-applications at hand, are based on a technological state from 6 to 11 years ago, and provide only limited orientation.
No holistic classification approach for AR-systems has been developed so far (Renevier et al., 2001; Wang and Dunston, 2006). Therefore, academics as well as practitioners lack a basic framework for the classification of AR-systems and their properties. The primary goal of our work is to provide a framework for collaborative AR-applications. The identification of the relevant dimensions and their occurrences is based on a literature analysis of the latest research regarding both collaboration and AR-applications. Starting with an extensive literature review of existing collaborative AR-applications, we show that collaborative AR-systems are relevant for current and future communication. Next, we discuss the literature on well-known approaches to AR-taxonomies. Based on this, a classification framework is created on the basis of the Three-Level Measurement Model (Bailey, 1984; Nickerson et al., 2009). We then develop an initial framework based on the literature analysis. This first draft of a classification framework is extended through a deductive-to-empirical approach: classification frameworks from related disciplines are evaluated and AR-systems are classified in an iterative approach. The article closes with a discussion of implications of the created taxonomy and an outlook on future research.

RELATED WORK

Different approaches to classifying AR-systems can be found in the literature, varying significantly in the dimensions and classification criteria used. The literature review includes technically-oriented, user-oriented, information-oriented, and interaction-oriented taxonomies.

Type of Taxonomy | References | Dimensions
Technically-oriented taxonomies | Milgram et al., 1994 (20) | extent of world knowledge; reproduction fidelity; extent of presence metaphor
 | Braz and Pereira, 2008 (5) | real world manipulator-systems; real world acquisition-systems; tracking-systems; virtual model generator-systems; mixing realities; display systems
User-oriented taxonomies | Wang and Dunston, 2006 (35) | mobility; number of users; space
 | Renevier et al., 2001 (27) | remote collaboration in one augmented reality; remote collaboration in augmented realities; local collaboration in one augmented reality
 | Hugues et al., 2011 (15) | augmented reality; artificial environment
 | Lindeman and Noma, 2007 (17) | perception of information through the real world or the 'mixing point'
Information-oriented taxonomies | Suomela and Lehikoinen, 2005 (31) | environment model; point of view
 | Tönnies and Plecher, 2011 (34) | temporality; dimensionality; registration; frame of reference; referencing; mounting
Interaction-oriented taxonomy | Mackay, 1998 (18) | augment the user; augment (/extend) the object; augment the environment
Not classified | Normand et al., 2012 (24) | tracking; augmentation type; temporality; inclusion of other senses

Table 1. Different Types of Taxonomies and Related Dimensions
As a result of our literature analysis, we found two technically-oriented taxonomies. Milgram et al. (1994) present a three-dimensional taxonomy for the classification of mixed reality (AV and AR) displays, which distinguishes the quality of perceived information from reality (extent of world knowledge), the image quality of the virtual extension (reproduction fidelity), and the degree of perceived immersion in the world (extent of presence metaphor) (Milgram et al., 1994). Braz and Pereira (2008) assume in their TARCAST taxonomy that every AR-system includes the following subsystems: real world manipulator-, real world acquisition-, tracking-, virtual model generator-, mixing realities-, and display systems (Braz and Pereira, 2008). Among the user-oriented taxonomies, the work of Wang and Dunston (2006) is highly relevant. It is designed to classify construction-supporting AR-based groupware along the dimensions mobility, number of users, and space (Wang and Dunston, 2006). Renevier et al. (2001) divide collaborative AR-systems into three classes. Remote collaboration in one augmented reality describes one user being next to an object while others join in virtually. In contrast, remote collaboration in augmented realities enables every user to have his or her own representation of the real objects, which mirrors the inputs of the other users. Local collaboration in one augmented reality describes all users acting on the same region and object (Renevier et al., 2001). Hugues et al. (2011) provide a functional AR-taxonomy distinguishing between augmenting reality (e.g. indicating underground subway stations) and creating an artificial environment. According to them, the former can be subdivided into six and the latter into three sub-functionalities (Hugues et al., 2011). The taxonomy by Lindeman and Noma (2007) also includes visual, auditive, haptic, olfactory, and gustatory enrichments of reality. It distinguishes between additional information perceived directly through the real world (e.g. through close-by speakers) and information which is first captured by technical equipment and then digitally embedded in the AR through the so-called mixing point (Lindeman and Noma, 2007). The third group of taxonomies consists of information-oriented taxonomies. Suomela and Lehikoinen (2005) present a taxonomy tailored to classifying mobile AR-applications based on the visualization of location-dependent information, specifically the environment model, which ranges from 0D to 3D, and the point of view, which can be either 1st or 3rd person. They combine both factors in a model-view number (Suomela and Lehikoinen, 2005). Tönnies and Plecher's (2011)
taxonomy aims at characterizing the presentation space (the combination of VR and reality) with six dimensions in order to classify the corresponding tools, and has been tested with 40 publications of the ISMAR conference. These dimensions are temporality, dimensionality, registration, frame of reference, referencing, and mounting (Tönnies and Plecher, 2011). The last type of taxonomy identified in the literature is the interaction-oriented taxonomy. Mackay (1998) takes a deliberately simple single-dimension approach, ranging from 'augment the user' (e.g. showing the bones in a body on an HMD), through 'augment (extend) the object', to 'augment the environment', realized through video projectors supplementing reality (Mackay, 1998). The four-dimensional taxonomy presented by Normand et al. (2012) cannot be assigned to one of these categories, since it contains parts of technical, user-, and interaction-oriented nature. The first dimension, called 'tracking', is divided into the classes 0D, 2D, 2D+θ, and 3D. 'Augmentation type' shows parallels to Mackay's single dimension, although only 'mediated augmentation' (using an HMD, with the enhancement visible only to the primary user) and 'direct augmentation' (enriching reality with projectors) are considered. The third dimension is based on Hugues et al. (2011) and comprises the temporal aspect of virtual contents: past (< t0), present (t0), future (> t0), or completely imaginary (∞). The fourth, optional dimension concerns the inclusion of other senses in AR-technologies (Normand et al., 2012).

TAXONOMY DEVELOPMENT

The development of taxonomies is a complex process, and its success (not only in the area of collaborative AR-applications) depends on the intended use of the taxonomy and therefore on the appropriate choice of a supporting methodology. Nickerson et al. (2009) present an approach based on the 'three-level measurement model' of Kenneth D. Bailey (1984) which is, according to the authors, particularly fitting for creating taxonomies in the area of information systems. They define a taxonomy T as a set of n dimensions Di (i = 1, …, n). Each Di comprises ki (ki ≥ 2) manifestations Cij (j = 1, …, ki). Additionally, they require each object to have exactly one manifestation per dimension, which means that the manifestations have to be both mutually exclusive and collectively exhaustive (Nickerson et al., 2009). To achieve this, we use an iterative approach consisting of three major steps (fig. 1). In the first step we focus on a partial set of the objects found during the literature analysis and identify properties that may be appropriate to differentiate between those objects. The result of this first step is a classification scheme which serves as a preliminary version for the next step. Based on this classification, we design new dimensions and manifestations in a second step and apply the entire set of research objects to them. With the entire set of research objects applied to the classification scheme, we use the iterative nature of the methodology to refine the initial classification with the results of the previous step. Finally, we identify missing objects, i.e. objects featuring a combination of manifestations that does not occur in the resulting taxonomy, and with regard to these objects we enhance the classification again in a final iteration.
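The formal structure of such a taxonomy can be illustrated with a short sketch. The dimensions and manifestations below follow the taxonomy proposed in this paper; the function and the example application are hypothetical and serve only to show the "exactly one manifestation per dimension" requirement.

```python
# Illustrative sketch (not from the paper) of a taxonomy in the sense of
# Nickerson et al. (2009): n dimensions D_i, each with k_i >= 2 manifestations
# C_ij; every object receives exactly one manifestation per dimension.
# The example application is invented.

TAXONOMY = {
    "space": {"co-located", "dislocated", "variable"},
    "time": {"synchronous", "asynchronous", "variable"},
    "mobility": {"stationary", "mobile", "mixed"},
    "virtual_content": {"user visualization", "object visualization",
                        "combined visualization"},
    "role_concept": {"one-role-concept", "multi-role-concept"},
    "visualization_hardware": {"HMD", "handheld display", "spatial display",
                               "individual spatial display", "mixed"},
}

def classify(app, taxonomy=TAXONOMY):
    """Assign exactly one manifestation per dimension to an application."""
    classification = {}
    for dimension, manifestations in taxonomy.items():
        if dimension not in app:
            # Collective exhaustiveness: every object must be classifiable.
            raise ValueError(f"no manifestation given for {dimension!r}")
        if app[dimension] not in manifestations:
            raise ValueError(f"{app[dimension]!r} is not a manifestation "
                             f"of {dimension!r}")
        # Mutual exclusivity: exactly one manifestation is recorded.
        classification[dimension] = app[dimension]
    return classification

# A hypothetical co-located, synchronous tabletop AR system:
print(classify({
    "space": "co-located", "time": "synchronous", "mobility": "stationary",
    "virtual_content": "object visualization",
    "role_concept": "one-role-concept",
    "visualization_hardware": "handheld display",
}))
```

Representing the manifestations as sets makes both requirements checkable mechanically: an unknown value violates exhaustiveness, and recording a single value per dimension enforces exclusivity.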
Figure 1 depicts the three phases of the process:
I. Empirical to deductive: analysis of a partial set of objects; identification of characteristic object properties; development of a first classification (based on dimensions and occurrences).
II. Deductive to empirical: conceptualization of new dimensions and occurrences; analysis of the research objects concerning the new dimensions and occurrences; development of a refined version of the classification.
III. Application of the classification: identification of missing objects; creation of new objects.

Figure 1. Taxonomy Development Process
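Step III, the identification of 'missing objects' (combinations of manifestations not exhibited by any observed application), can be sketched as a set difference over the Cartesian product of all manifestations. The following minimal illustration uses only two of the six dimensions, and the observed classifications are invented:

```python
# Sketch of step III: enumerate every combination of manifestations and
# subtract those occupied by observed applications; what remains are the
# "missing objects". Only two dimensions are shown; the observed
# classifications below are invented for illustration.
from itertools import product

dimensions = {
    "space": ["co-located", "dislocated", "variable"],
    "time": ["synchronous", "asynchronous", "variable"],
}

# Observed applications as (space, time) tuples, in dimension order.
observed = {
    ("co-located", "synchronous"),
    ("dislocated", "synchronous"),
}

all_combinations = set(product(*dimensions.values()))
missing = sorted(all_combinations - observed)

print(len(all_combinations))  # 9 combinations (3 x 3)
print(len(missing))           # 7 combinations not yet realized
```

With all six dimensions the product grows accordingly, which is why the unoccupied cells (such as asynchronous systems, see the Discussion) point directly at research gaps.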
We conducted an extensive keyword-based literature review on collaborative AR-applications using combinations of the following keywords: augmented reality, AR, mixed reality, MR, computer supported collaborative work, CSCW, collaborative, collaboration, communication, and multi-user. By searching online databases such as AISEL, Web of Knowledge, SpringerLink, IEEE Xplore, Emerald, and Google Scholar, a total of 237 articles were identified as relevant based on a manual content analysis. For the field of AR-taxonomies, the search was initially based on combinations of the keywords augmented reality, AR, mixed reality, MR, taxonomy, typology, and classification.

RESULTS

The literature review (see Related Work) makes clear that a multitude of different AR-taxonomies exist, each serving a particular purpose. Depending on that purpose, two taxonomies classifying the same set of objects can be composed of two completely different sets of dimensions and manifestations. To obtain a unified basis for the further taxonomy development process, we first define the purpose of the taxonomy to be obtained: the taxonomy shall support the user in identifying the collaborative AR-application most appropriate for the respective communication and collaboration scenario. Based on this purpose, the meta-characteristic 'type of user interaction support within the scope of communication and collaboration scenarios' is derived for the further taxonomy development. Within the scope of analyzing collaborative AR-applications, six relevant dimensions could be identified. Figure 2 provides an overview of these dimensions and their related manifestations. Furthermore, we allocated the outcomes of our literature analysis to the different manifestations of each dimension by presenting the most relevant articles in this field.
Space: co-located (referenced in 14 articles; Billinghurst and Kato 1999), dislocated (10 articles; Poppe et al. 2011), variable (1 article; Thomas, Quirchmayr and Piekarski 2003)
Time: synchronous (25 articles; Barakonyi et al. 2003), asynchronous (no article), variable (1 article; Thomas, Quirchmayr and Piekarski 2003)
Mobility: stationary (22 articles; Prince et al. 2002), mobile (1 article; Reitmayr and Schmalstieg 2001), mixed (3 articles; Stafford, Piekarski and Thomas 2006)
Virtual Content: user visualization (4 articles; Prince et al. 2002), object visualization (21 articles; Thomas, Quirchmayr and Piekarski 2003), combined visualization (1 article; Poppe et al. 2011)
Role Concept: one-role-concept (18 articles; Reitmayr and Schmalstieg 2001), multi-role-concept (8 articles; Zhong, Boulanger and Georganas 2001)
Visualization Hardware: head mounted display (HMD) (12 articles; Poppe et al. 2011), handheld display (3 articles; Mogilev et al. 2002), spatial display (1 article; Ishii et al. 2002), individual spatial display (1 article; Barakonyi et al. 2003), mixed (9 articles; Höllerer et al. 1999)

Figure 2. Dimensions of AR-Taxonomy and Number of References in 237 Relevant Articles
The proposed taxonomy divides collaborative AR-systems along six dimensions, each subdivided into different manifestations. The central question of the first dimension, space, is whether users have to be in the same place while using the application. It is subdivided into three manifestations: co-located (users are in the same place), dislocated (users are in at least two different places), and variable (the system can be used for both co-located and dislocated scenarios). The second property (time) is the temporal reference of the interaction. It differentiates between systems usable for synchronous interaction and systems limited to asynchronous interaction. Systems that support both synchronous and asynchronous interaction are classified as variable within this dimension. The third dimension (mobility) focuses on user mobility and examines whether users are able to roam freely while using the application. While users of a stationary system are bound to a specific location, users of a mobile AR-application can move. With mixed user mobility, one user is bound to a certain location while the other users can move freely. As a fourth property of AR-applications we identified the content of virtual extensions. If the extension of perceived reality consists exclusively of virtual representations of the users, it is called user visualization. If it consists of virtual representations of new, virtually modified, or virtually extended objects, it is called object visualization. A combination of both approaches is called combined visualization. The fifth dimension, role concept, distinguishes between a one-role-concept (all users have access to the same functionality) and a multi-role-concept (different user roles exist which provide different functionalities). The last dimension of our taxonomy classifies AR-applications based on their visual output devices. To perceive a potentially individual virtual extension of reality, users may wear a head mounted display or use a handheld display. Furthermore, they may have access to a shared, stationary, public display (spatial
display) or they may perceive the virtual extension of reality on their own stationary display (individual spatial display). Finally, some systems support several forms of visual output devices (mixed).

DISCUSSION

In the following, we indicate strengths and weaknesses of the taxonomy, critically reflecting on both the taxonomy development process and its final outcome. The analysis and development process was conducted based on a proven procedure model which is particularly appropriate for the development of taxonomies in information systems. We were able to classify all identified literature concerning collaborative AR-applications using the taxonomy, and we managed to integrate our findings into a new taxonomy. However, the proposed taxonomy still needs to be validated; so far it represents a first approach. On the one hand, the inclusion of all applications enables the immediate integration of all relevant dimensions and manifestations in the first draft of the taxonomy, avoiding potentially unnecessary iterations. On the other hand, a high number of iterations usually leads to an intensive review of the topic and can contribute to the identification of details that were missed during the first iteration. A further aspect is that the entire taxonomy process was based on a meta-characteristic which was determined before the first process step and is therefore purpose-oriented. Nickerson et al. (2009) name several requirements a good taxonomy has to fulfill. In particular, a fitting number of dimensions and manifestations has to be chosen to simplify the classification process while still representing the differences between the objects in question.
The taxonomy developed in the scope of this work fulfills these requirements: six dimensions, with a maximum of five possible manifestations each and at least 14 different manifestations in total (fig. 2), could be identified from the 28 observed collaborative AR-applications. On average, one manifestation can be derived from two observed collaborative AR-applications. In theory, it would be necessary to investigate further differences between applications and distinguish them by introducing new dimensions. This was attempted during the analysis and development process, but no further dimension or differentiating property could be identified. Another requirement of Nickerson et al. (2009) is that every observed object must be allocated to at least one class. While this is the case, we have to mention that in two cases the allocation was not strictly selective, although the dimensional manifestations could be clearly differentiated. Concerning the last requirement, we can state that the taxonomy can easily be extended by new dimensions and manifestations. Regarding the identified dimensions, it has to be mentioned that no dependencies between them exist; each of the six dimensions covers a central aspect of the predetermined meta-characteristic. Besides the classic dimensions of space and time, which can also be used to classify groupware without AR-technology, the developed taxonomy contains the dimension of virtual content, which has been specifically tailored to the properties of the observed collaborative AR-applications. The taxonomy therefore includes both classic and AR-specific characteristics of the analyzed applications. As the evaluation of the distribution of dimensional manifestations has shown (fig. 2), each identified dimension, except time, contributes to the differentiation of the observed applications. One could argue that the dimension of time should be dropped due to its insufficient differentiation quality. However, considering that a central role of taxonomies is also the identification of missing objects, the dimension of time, despite its lack of differentiation, is of high relevance and therefore takes a justified role in the taxonomy. As a limitation of the taxonomy, it can be mentioned that the classification of applications concentrates on the extension of reality perceived by users through visual elements. This implies that the differentiation between haptic, olfactory, and gustatory information is not reflected in the taxonomy by a separate dimension. Yet, during the analysis and development process, not a single collaborative application could be identified that would support the use of such a dimension.

CONCLUSION

Throughout this research, a taxonomy for the classification of collaborative AR-applications has been developed. The taxonomy is composed of the dimensions space, time, mobility, virtual content, role concept, and visualization hardware and therefore considers traditional as well as AR-specific dimensions for the classification of applications. The evaluation of the classification process has shown that 14 different manifestations have been identified and that a wide spectrum of different applications therefore exists. With one exception, each of the six dimensions provides an important contribution to distinguishing the observed applications. Results show that 27 of 28 applications can be classified as
synchronous and only one application could be classified as variable. The dimension of time therefore currently serves for the classification of still missing applications. Besides the aspect of missing asynchronous applications it can be concluded, that also in the area of mobile collaborative AR-applications there is still is a huge potential for research and development. It has to be noted that the analyzed applications are exclusive prototypes developed during scientific research and have not been used in real-world conditions. As a next step it is needed to conduct an extensive analysis of the AR-applications available on the market, to classify the identified applications and to prove the applicability of this taxonomy in practice. Due to the fact that the area of augmented reality still constitutes a relatively new and fast changing field, it also makes sense to observe the developments of collaborative AR-applications and to modify dimensions and manifestations in the taxonomy if needed. REFERENCES 1.
Azuma, R. T., Baillot, Y., Behringer, R., Feiner, S., Julier, S., and MacIntyre, B. (2001). ‘Recent Advances in Augmented Reality,’ IEEE Computer Graphics and Applications, 21, 6, p.34-47.
2.
Bailey, K. D. (1984). ‘A Three-Level Measurement Model,’ Quality and Quantity, 18, 3, 225-245.
3.
Barakonyi, I., Frieb, W., and Schmalstieg, D. (2003). “Augmented Reality Videoconferencing for Collaborative Work,” Proceedings of the 2nd Hungarian Conference on Computer Graphics and Geometry.
4.
Berlecon Research (2012). ‘Communication and Collaboration out of the cloud – Status Quo of German Enterprises?’, 04, 12, 1-63.
5.
Braz, J. M., and Pereira, J. M. (2008). ‘TARCAST: Taxonomy for Augmented Reality CASTing with Web Support,’ The International Journal of Virtual Reality, 7, 4, p.47-56.
6.
Billinghurst, M., and Kato, H. (1999). “Collaborative Mixed Reality,” in Mixed Reality - Merging Real and Virtual Worlds. Proceedings of the International Symposium on Mixed Reality (ISMR '99), pp. 261-284.
7.
Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., and Ivkovic, M. (2011). ‘Augmented Reality Technologies, Systems and Applications,’ Multimedia Tools and Applications, 51, 1, 341-377.
8.
Daft, R. L., and Lengel, R. H. (1986). ‘Organizational Information Requirements, Media Richness and Structural Design,’ Management Science, 32, 5, 554-571.
9.
Daft, R. L., and Wiginton, J. C. (1979). ‘Language and Organization,’ The Academy of Management Review, 4, 2, 179191.
10. Dennis, A. R., Valacich, J. S., Speier, C., and Morris, M. G. (1998). ‘Beyond Media Richness: An Empirical Test of Media Synchronicity Theory,’ in Proceedings of the Hawaii International Conference on System Sciences (HICSS '98), 6-9 Jan, Kohala Coast, HI, USA, 48-57. 11. Gelb, D., Subramanian, A., and Tan, K. H. (2011). ‘Augmented Reality for Immersive Remote Collaboration,’ in Proceedings of the IEEE Workshop on Person-Oriented Vision (POV '11), Kona, HI, USA, 07 Jan 2011. 12. Graham, M., Zook, M., and Boulton, A. (2012). "Augmented reality in urban places: contested content and the duplicity of code." Transactions of the Institute of British Geographers. 13. Hantschel, G. A. (2009). ‘Future Organisational Culture Effective Adoption of Innovative and Flexible Collaboration technologies’, in Wissens- und Informationsmanagement, F. Kneuper and F. Neumann (Ed.), Wiesbaden: Gabler, 513520. 14. Höllerer, T., Feiner, S., Terauchi, T., Rashid, G., and Hallaway, D. (1999). “Exploring MARS: Developing Indoor and Outdoor User Interfaces to a Mobile Augmented Reality System,” Computers & Graphics (23:6), pp. 779-785. 15. Hugues, O., Fuchs, P., and Nannipieri, O. (2011). ‘New Augmented Reality Taxonomy: Technologies and Features of Augmented Environment,’ in Handbook of Augmented Reality, B. Furht (Ed.), New York: Springer, p.47-63. 16. Ishii, H., Underkoffler, J., Chak, D., Piper, B., Ben-Joseph, E., Yeung, L., and Kanji, Z. (2002). ”Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation,” Proceedings of the 2002 IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '02). 17. Krüger, N., Stieglitz, S., and Potthoff, T. (2012). “Brand Communication in Twitter - a Case Study on Adidas”. Proceedings of the Pacific Asia Conference on Information Systems (PACIS), Hochiminh City, Vietnam, Paper 161
Proceedings of the Nineteenth Americas Conference on Information Systems, Chicago, Illinois, August 14-17, 2013.
18. Lindemann, R. W., and Noma, H. (2007). ‘A Classification Scheme for Multi-Sensory Augmented Reality,’ in Proceedings of the 2007 ACM Symposium on Virtual Reality Software and Technology (VRST '07), Irvine, CA, USA, November 5-7, 2007, 175-178.
19. Mackay, W. E. (1998). ‘Augmented Reality: Linking Real and Virtual Worlds - A New Paradigm for Interacting With Computers,’ in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '98), L'Aquila, Italy, May 25-27, 1998, 13-21.
20. Markus, M. L. (1987). ‘Toward a "Critical Mass" Theory of Interactive Media,’ Communication Research, 14, 5, 491-511.
21. Milgram, P., and Kishino, F. (1994). ‘A Taxonomy of Mixed Reality Visual Displays,’ IEICE Transactions on Information Systems, E77-D, 12, 1-15.
22. Minatani, S., Kitahara, I., Kameda, Y., and Ohta, Y. (2007). ‘Face-to-Face Tabletop Remote Collaboration in Mixed Reality,’ in Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '07), November 13-16, 2007, Nara-Ken New Public Hall, Nara, Japan, 43-46.
23. Mogilev, D., Kiyokawa, K., Billinghurst, M., and Pair, J. (2002). ‘AR Pad: An Interface for Face-to-Face AR Collaboration,’ in Extended Abstracts of the Conference on Human Factors in Computing Systems (CHI '02), New York: ACM, April 20-25, 2002, 654-655.
24. Nickerson, R. C., Varshney, U., Muntermann, J., and Isaac, H. (2009). ‘Taxonomy Development in Information Systems: Developing a Taxonomy of Mobile Applications,’ in Proceedings of the 17th European Conference on Information Systems (ECIS '09), Verona, Italy, 1138-1149.
25. Normand, J. M., Servières, M., and Moreau, G. (2012). ‘A New Typology of Augmented Reality Applications,’ in Proceedings of the 3rd Augmented Human International Conference (AH '12).
26. Poppe, E., Brown, R. A., Recker, J. C., and Johnson, D. M. (2012). ‘Evaluation of an Augmented Reality Collaborative Process Modelling System,’ in Proceedings of the 2012 International Conference on Cyberworlds (CW '12), Darmstadt, Germany, September 25, 2012.
27. Prince, S., Cheok, A. D., Farbiz, F., Williamson, T., Johnson, N., Billinghurst, M., and Kato, H. (2002). ‘3-D Live: Real Time Interaction for Mixed Reality,’ in Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work (CSCW '02), 364-371.
28. Renevier, P., and Nigay, L. (2001). ‘Mobile Collaborative Augmented Reality: The Augmented Stroll,’ in Proceedings of the 8th IFIP International Conference on Engineering for Human-Computer Interaction (EHCI '01), Toronto, Canada, May 11-13, 2001, 299-316.
29. Reitmayr, G., and Schmalstieg, D. (2001). ‘Mobile Collaborative Augmented Reality,’ in Proceedings of the 4th International Symposium on Augmented Reality (ISAR '01).
30. Short, J., Williams, E., and Christie, B. (1976). The Social Psychology of Telecommunications, London: Wiley.
31. Stafford, A., Piekarski, W., and Thomas, B. H. (2006). ‘Implementation of God-like Interaction Techniques for Supporting Collaboration Between Outdoor AR and Indoor Tabletop Users,’ in Proceedings of the 2006 IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '06), 165-172.
32. Stieglitz, S., and Dang-Xuan, L. (2013). ‘Emotions and Information Diffusion in Social Media - Sentiment of Microblogs and Sharing Behavior,’ Journal of Management Information Systems (forthcoming).
33. Stieglitz, S., and Brockmann, T. (2012). ‘Virtual Worlds as Environments for Virtual Customer Integration,’ in Proceedings of the 45th Hawaii International Conference on System Sciences (HICSS '12), Maui, HI, USA, 1013-102.
34. Stieglitz, S., and Lattemann, C. (2011). ‘Experiential Learning in Second Life,’ in Proceedings of the Americas Conference on Information Systems (AMCIS '11), Paper 238.
35. Suomela, R., and Lehikoinen, J. (2005). ‘Taxonomy for Visualizing Location-based Information,’ Virtual Reality, 8, 4, 71-82.
36. Teufel, S., Sauter, C., Mühlherr, T., and Bauknecht, K. (1995). Computerunterstützung für die Gruppenarbeit, Bonn: Addison-Wesley.
37. Thomas, H. T., Quirchmayr, G., and Piekarski, W. (2003). ‘Through Walls Communication for Medical Emergency Services,’ International Journal of Human-Computer Interaction, 16, 3, 477-496.
38. Tönnies, M., and Plecher, D. A. (2011). ‘Presentation Principles in Augmented Reality - Classification and Categorization Guidelines,’ Technical Report TUM-I1111, Technische Universität München.
39. Wang, X., and Dunston, P. S. (2006). ‘Groupware Concepts for Augmented Reality Mediated Human-to-Human Collaboration,’ in Proceedings of the Joint International Conference on Computing and Decision Making in Civil and Building Engineering, 1836-1842.
40. Zhong, X. W., Boulanger, P., and Georganas, N. D. (2001). ‘Collaborative Augmented Reality: A Prototype for Industrial Training,’ in Proceedings of the 21st Biennial Symposium on Communications, June 25, 2002, Kingston, Canada.