
2011 IEEE Symposium on Visual Languages and Human-Centric Computing: Posters and Demos

Rapid prototyping of mobile applications for augmented reality interactions

Michele Di Capua

Gennaro Costagliola, Mattia De Rosa, Vittorio Fuccella

Unlimited Software, Centro Direzionale, Isola F/11, 80143 Napoli, Italy
Email: [email protected]

University of Salerno, Via Ponte Don Melillo, 84084 Fisciano (SA), Italy
Email: {gencos, matderosa, vfuccella}@unisa.it

Abstract—The progress achieved in the field of computer vision and the great improvement and diffusion of mobile technologies enable the exploration of new models of human-computer interaction, especially with respect to Augmented Reality (AR) scenarios. On the other hand, the lack of rapid prototyping environments for the development and testing of new AR and Mixed Reality systems may slow down the adoption of these technologies. This document, based on the state of the art of AR technology and in particular of AR development tools, describes the work in progress on the definition of a tool for the rapid prototyping of applications using mobile devices. The scope of the tool is twofold: to simplify the development and simulation of specific business scenarios, and to provide a tool supporting the analysis of the quality of human interactions in AR environments.

I. INTRODUCTION

Augmented reality (AR) is a term for the live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input [1]. The applications and related technologies for AR are attracting increasing attention from both the scientific community and companies originally involved in different research areas. In particular, the progress achieved in the fields of computer vision and mobile computing is shifting the focus towards the development of AR systems for mobile devices [2]. AR is thus creating ever new opportunities for exploring the mechanisms of interaction between humans and virtual and physical environments.

However, even if in strong expansion, the current state of the art of AR technologies and applications is still below market expectations, especially when considering the quality of the interaction offered. While some aspects closely linked to the AR technology (e.g. marker tracking, rendering, etc.) are gradually evolving, on the other hand there are still several aspects, both technical and social, requiring further investigation. One of these is the creation and analysis of appropriate interaction techniques for AR applications, which allow the user to interact with virtual content in an intuitive manner [3]. It is possible to explore the development of new interaction techniques in different directions, including: ubiquitous computing [4], tangible computing [5], and social computing [6]. In ubiquitous computing, we can analyze the interactions of the user, and his or her activities, within a dynamic environment. In tangible computing, the user interacts with interfaces modeled as physical objects belonging to everyday life and associated with digital information. The area of social computing tends, in AR environments, to analyze the psychological aspects underlying user interactions.

The state of the art in AR technologies and applications is analyzed in Section II; Section III describes the architecture and the objectives of the proposed tool; lastly, some final remarks and an outline of future work conclude the paper.

978-1-4577-1247-0/11/$26.00 ©2011 IEEE

II. STATE OF THE ART

At present, the development of AR applications is still an expensive and unintuitive task. Typically, applications tend to privilege a specific context in order to maximize the actual capacity of the particular system used, not allowing the user to experience different scenarios that would allow the analysis of complex interactions. In the literature, especially in the last decade, we can find several tools that tend to simplify the AR application development process, trying to reduce the programming effort; they are often tied to particular domains and technologies.

It is possible to divide the recent production of development tools for AR systems into three distinct levels [3]. At the lower level there are the libraries that provide a basic integration between artificial "vision" and the computer. At the intermediate level we can find programming environments which provide an initial infrastructure for building applications. At the top level there are the GUI-based application environments, mainly oriented towards end users with few programming skills. Below we briefly analyze the main tools developed so far.

ARToolkit [7] is a library used for the development of AR applications. It consists of a user-friendly framework which provides basic functionalities including several features such as: tracking of markers, OpenGL rendering and VRML 3D support. However, the use of the library requires code development in C++ or Java only. Porting of this toolkit to some mobile platforms such as Android is also possible.
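As an illustration of the "lower level" just described, the following Python sketch outlines the typical detect-and-render loop that marker-tracking libraries such as ARToolkit expose to applications. All functions and names here are stubs invented for illustration; real libraries operate on camera frames and an OpenGL context.

```python
# Sketch of the detect-and-render loop provided by marker-tracking
# libraries. detect_markers and render_overlay are hypothetical stubs;
# they stand in for the library's computer-vision and rendering calls.

def detect_markers(frame):
    """Stub detector: pretend each visible pattern was found with a pose."""
    return [{"id": m, "pose": (0.0, 0.0, 1.0)} for m in frame["visible"]]

def render_overlay(marker, model):
    """Stub renderer: report which 3D model is drawn on which marker."""
    return f"draw {model} at marker {marker['id']}, pose {marker['pose']}"

# Hypothetical mapping from marker ids to 3D content.
MODELS = {"M001": "teapot.obj"}

def process_frame(frame):
    """One iteration of the AR loop: detect, look up content, render."""
    out = []
    for marker in detect_markers(frame):
        if marker["id"] in MODELS:
            out.append(render_overlay(marker, MODELS[marker["id"]]))
    return out

print(process_frame({"visible": ["M001", "M999"]}))
```

Unknown markers (here "M999") are simply ignored, which mirrors how such libraries only augment patterns registered by the application.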
Studierstube [8] is a stable framework, developed at Graz University of Technology, which includes rendering, tracking and content management capabilities. The environment has been designed with a particular inclination towards the collaborative aspects of AR systems. Studierstube is a high-level tool that requires knowledge of C++ and the availability of the Open Inventor graphics toolkit.

DWARF (Distributed Wearable AR Framework) [9], developed at the Technical University of Munich, is a CORBA-based framework that enables rapid prototyping of distributed AR applications. The development philosophy of the tool is strongly focused on communication and distributed service orientation. However, good programming skills are still required for the use of this framework.

III. ARCHITECTURE AND OBJECTIVES OF THE AR TOOL

Starting from the idea that a rapid prototyping system [10], [11] may be helpful for the design process of AR applications, the proposal presented here has, as its main goal, the creation of a tool for the rapid prototyping of mobile applications in augmented environments [12]. Several tools have already been produced in this area, as discussed above, but all of them deal with specific technical issues within the AR landscape (e.g. tracking, 3D rendering, etc.). In addition, very few of these tools work directly on mobile devices, and none of them automatically produces mobile applications able to interact in AR systems.

The tool is made of the following main components (see Figure 1):
1) a Visual Environment;
2) a Business Logic Server;
3) an RDBMS for content data storage.

The visual environment is used to map fiducial markers (or QR codes) to simple actions defined by a user, such as "open a web page" or "play a video". The user can also choose to interact with some kinds of sensors (e.g. temperature), for example by getting the data provided by a sensor visualized on a fiducial marker. The configuration of these actions, linked to fiducial markers, is stored in an XML file and then passed to the Business Logic Server, which provides dynamic integration between actions and data (e.g. videos or 3D models) stored in the RDBMS. The tool finally generates a mobile application (e.g. for the Google Android platform), which is configured to interact with some markers in the way the user visually programmed them. Besides the usual systems for tracking fiducial markers and their 3D representation, the tool integrates other recently emerging technologies that can improve user interactions within an environment, such as RFID tags.
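To make the marker-to-action mapping concrete, the following Python sketch parses a configuration of the kind described above. The XML schema shown is hypothetical (the paper does not specify the actual format); it only illustrates how marker ids could be bound to user-defined actions before being handed to the Business Logic Server.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML schema for the marker-to-action configuration;
# the actual format used by the tool is not specified in the paper.
SAMPLE_CONFIG = """
<ar-config>
  <marker id="M001" type="fiducial">
    <action kind="open-url" target="http://example.org/product"/>
  </marker>
  <marker id="QR17" type="qrcode">
    <action kind="play-video" target="intro.mp4"/>
  </marker>
</ar-config>
"""

def load_marker_actions(xml_text):
    """Return a dict mapping marker id -> (action kind, target)."""
    root = ET.fromstring(xml_text)
    mapping = {}
    for marker in root.findall("marker"):
        action = marker.find("action")
        mapping[marker.get("id")] = (action.get("kind"), action.get("target"))
    return mapping

actions = load_marker_actions(SAMPLE_CONFIG)
print(actions["M001"])
```

In the architecture above, a mapping of this kind would be produced by the visual environment and consumed by the Business Logic Server, which resolves each target against the content stored in the RDBMS.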
The development philosophy of the tool provides an abstract approach to the task of modeling each single element of the AR environment (such as markers, QR codes, etc.). The programming environment of the tool for the development of mobile AR applications and their interactions will be mainly composed of visual elements. Some of them will be enhanced with scripting capabilities, while preserving the basic philosophy of not requiring advanced programming skills. Low-level code development will only be required for the possible integration of external components, such as particular sensors [13], or for the development of advanced interactive services (e.g. actuators). As regards the implementation of the basic features of AR systems (e.g. marker tracking), low-level libraries will be analyzed and integrated (e.g. NyARToolkit). The integration of these libraries in the tool will allow the continuous update of the developed mobile applications, without making them dependent on the libraries themselves.

IV. CONCLUSION AND FUTURE WORK

In this paper we have shown the main features of the architecture of a tool for the rapid prototyping of AR applications for mobile devices. The advantages that this new tool is expected to provide are:


Fig. 1: Tool working schema

• Rapid prototyping in an augmented environment.
• Development of new interfaces (metaphors) of interaction for particular contexts.
• Possible development of new formal techniques to "predict" and model user interactions.

There are several possible scenarios in which the proposed tool can be used. Among these, we can assume the prototyping and simulation of user interactions with different environments, such as mobile shopping (viewing and purchase of goods in a shop "augmented" with interactive elements) and mobile museums (an interactive augmented museum tour).
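The abstract element model with optional scripting hooks, described in Section III, could be sketched as follows. All class and method names here are hypothetical; the sketch only illustrates the design choice of a default visual behaviour that a script may override without requiring advanced programming skills.

```python
# Illustrative sketch of the abstract AR element model: each element
# (marker, QR code, ...) carries a default action, and an optional
# user-supplied script hook can override it. Names are hypothetical.

class ARElement:
    def __init__(self, element_id, action, script=None):
        self.element_id = element_id
        self.action = action    # default action, e.g. "open-url"
        self.script = script    # optional scripting hook

    def on_detect(self, context):
        """Called by the generated mobile app when the element is tracked."""
        if self.script is not None:
            return self.script(self, context)         # scripted behaviour
        return (self.action, context.get("payload"))  # default behaviour

# Default behaviour: no script needed.
marker = ARElement("M001", "open-url")
print(marker.on_detect({"payload": "http://example.org"}))

# Scripted behaviour: visualize a sensor reading on top of the marker,
# as in the temperature-sensor example of Section III.
sensor = ARElement("T01", "show-overlay",
                   script=lambda el, ctx: ("overlay", f"{ctx['temp']} °C"))
print(sensor.on_detect({"temp": 21.5}))
```

Keeping the hook optional preserves the tool's philosophy: visual programming covers the common cases, and scripting is reserved for the few elements that need custom behaviour.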

REFERENCES

[1] D. Wagner, T. Pintaric, F. Ledermann, and D. Schmalstieg, "Towards massively multi-user augmented reality on handheld devices," in Third International Conference on Pervasive Computing, 2005, pp. 208–219.
[2] I. M. Zendjebil, F. Ababsa, J.-Y. Didier, E. Lalagüe, F. Decle, R. Delmont, L. Frauciel, and J. Vairon, "Réalité augmentée en extérieur. État de l'art" (Outdoor augmented reality: state of the art), Technique et Science Informatiques, vol. 28, no. 6-7, pp. 857–890, 2009.
[3] F. Zhou, H. B.-L. Duh, and M. Billinghurst, "Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR," in Proceedings of the 7th IEEE/ACM ISMAR '08, 2008, pp. 193–202.
[4] M. Weiser, "The computer for the 21st century," SIGMOBILE Mob. Comput. Commun. Rev., vol. 3, pp. 3–11, July 1999.
[5] H. Ishii and B. Ullmer, "Tangible bits: towards seamless interfaces between people, bits and atoms," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ser. CHI '97. New York, NY, USA: ACM, 1997, pp. 234–241.
[6] L. A. Suchman, Plans and Situated Actions: The Problem of Human-Machine Communication. New York, NY, USA: Cambridge University Press, 1987.
[7] ARToolkit, http://www.hitl.washington.edu/artoolkit/, 2011.
[8] Studierstube, http://studierstube.icg.tugraz.at/, 2011.
[9] DWARF, http://ar.in.tum.de/Chair/ProjectDwarf/, 2011.
[10] J. Verlinden and I. Horváth, "Analyzing opportunities for using interactive augmented prototyping in design practice," Artif. Intell. Eng. Des. Anal. Manuf., vol. 23, pp. 289–303, August 2009.
[11] Y.-K. Lim, E. Stolterman, and J. Tenenberg, "The anatomy of prototypes: Prototypes as filters, prototypes as manifestations of design ideas," ACM Trans. Comput.-Hum. Interact., vol. 15, pp. 7:1–7:27, July 2008.
[12] M. Bauer, B. Bruegge, G. Klinker, A. MacWilliams, T. Reicher, S. Riß, C. Sandor, S. Christian, and M. Wagner, "Design of a component-based augmented reality framework," in Proc. ISAR 2001, 2001, pp. 45–54.
[13] B. Gonçalves, J. G. P. Filho, and G. Guizzardi, "A service architecture for sensor data provisioning for context-aware mobile applications," in Proceedings of the 2008 ACM Symposium on Applied Computing, ser. SAC '08. New York, NY, USA: ACM, 2008, pp. 1946–1952.
