Remote touch interaction with high quality models ...

Remote touch interaction with high quality models using an autostereoscopic 3D display

Adriano Mancini1*, Paolo Clini2, Carlo Alberto Bozzi2, Eva Savina Malinverni2, Roberto Pierdicca2, and Romina Nespeca2

1 Department of Information Engineering, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona (Italy)
2 Department of Civil Engineering, Building and Architecture, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona (Italy)
{m.mancini,p.clini,r.pierdicca,e.s.malinverni,r.nespeca}@univpm.it

Abstract. The use of 3D models to document archaeological findings has boomed in recent years, mainly thanks to the wide adoption of digital photogrammetry for the virtual reconstruction of ancient artifacts. The resulting widespread availability of digital 3D objects confronts the research community with a hard challenge: what is the best way to bring visitors into contact with the wealth of cultural goods? The work described in these pages answers this question by describing a novel solution for the enhanced interaction and visualization of a complex 3D model. The installation consists of an autostereoscopic display paired with a remotely connected (wired) touch-pad for interacting with the displayed contents. The main advantage of this technology is that the visitor is not obliged to wear cumbersome devices, yet still gets a 3D view of the object without any additional aid. Through the touch-pad, the system allows the user to manage the 3D views and interact with a very small object that, in its virtual dimension, is magnified with respect to a classical museum arrangement. The research results, applied to the real exhibition presented here, prove the innovation and usability of the multimedia solution, which required complex hardware components and a non-trivial implementation of the whole software architecture. The digital disruption in the CH domain should also be entrusted to advanced interfaces that are at the same time intuitive and usable interaction methods.

Keywords: Human-Computer Interaction, Autostereoscopic, Cultural Heritage, 3D reconstruction, 3D visualization

1 Introduction

Digital technologies are increasingly becoming the mainstream channel for the communication of Cultural Heritage (CH) goods. Indeed, the virtualization of real

* Corresponding author: tel. +39 0712204449

objects has proved to enhance users' cognition, and it is also, probably, the only viable solution to put users in contact with objects that might otherwise remain unknown to the majority of mankind [3]. The use of 3D models to document archaeological findings has boomed in recent years, mainly thanks to the wide adoption of digital photogrammetry for the virtual reconstruction of ancient artifacts. Close-range photogrammetry, in fact, allows achieving faithful results in terms of both the accuracy of the 3D model and the quality of the visualized outcomes. The minimal hardware required (essentially, a digital camera is enough to collect a useful dataset for documentation purposes) and the ease of use in almost every environmental condition make 3D sampling solutions based on Multi-View Stereo (MVS) matching and Structure from Motion techniques ideal for the documentation of archaeological findings [16], [10]. For these reasons, their use has become common practice on the international scene. Nevertheless, the widespread availability of digital 3D objects confronts the research community with a hard challenge: what is the best way to bring visitors into contact with the wealth of cultural goods? The variables involved in such a difficult task are many: the quality of the reproductions, the devices (or platforms) on which the contents should be displayed, and the type of interaction that best fits the expectations of the users. The work described in these pages answers this question by describing a novel solution for the enhanced interaction and visualization of a complex 3D model; specifically, the Venus model (mesh and texture) shown in Figure 1a has been chosen as a case study for a new exhibition in the Museum of Genga, Marche Region, Italy. The Venus is a little statue from the Paleolithic period, carved in stalactite stone, with a height of only 8.7 cm.
Due to its small dimensions, the images were acquired using so-called focus stacking, a close-range technique adopted to overcome the limited depth of field and obtain a 3D model of high resolution. The survey was carried out by capturing more than 900 pictures, obtaining a point cloud of more than 42 million points, a mesh of more than 9 million faces, and a texture with a resolution of 8192 x 8192 pixels. The installation basically consists of an autostereoscopic display paired with a remotely connected (wired) touch-pad for interacting with the displayed contents. The autostereoscopic screen is not novel in itself, but it represents a strong enhancement for museum installations, since its use in museum exhibitions is not yet widespread. With respect to a classical 2D view, this technology provides, among other things: stereo parallax, movement parallax, accommodation and convergence. All 3D display technologies (stereoscopic displays) provide at least stereo parallax, whilst autostereoscopic displays provide the 3D image without the viewer needing to wear any special viewing gear. We refer the reader to [7] for a more exhaustive discussion of this technology. The main advantage of such technology is hence that the visitor is not obliged to wear cumbersome devices, yet can still have a 3D view of the object without any additional aid. The system allows the user, through a touch-pad, to manage the 3D views and interact with a very small object that, in its virtual dimension, is magnified with respect to a

classical museum arrangement. Besides the high quality of the representation of the model (75K faces, a 7 MB 3D model in OBJ format with a 1.83 MB JPEG texture), which allows a large variety of outputs (e.g. 3D printing, stereoscopic view and 3D visualization), 3D stereo technologies can serve the CH world as a vessel for preservation, reconstruction, documentation, research and promotion (see Figure 1b for the details of the acquisition). In fact, 3D stereo technology is able to support artifact exploration activities, bringing new excitement to museums and, more importantly, to the end-user. The recent study by Sooai et al. [19] is a clear demonstration of how novel interaction techniques can help the user to interact with and understand a 3D object. Thus, the main objective of this work is to enable new scenarios of human-machine interaction applied to Cultural Heritage, through a user-centered approach in which all stakeholders are actively involved in modelling the user(s) and deriving the set of functional and non-functional requirements on which the system design is based. The research results, applied to the real exhibition presented here, prove the innovation and usability of the multimedia solution, which required complex hardware components and a non-trivial implementation of the whole software architecture. The digital disruption in the CH domain should also be entrusted to advanced interfaces that are at the same time intuitive and usable interaction methods.

(a) The original Venus standing on a plexiglass pedestal.

(b) A brief description of the steps performed to achieve the 3D virtual reconstruction.

Fig. 1: The photogrammetric survey of the Venus of Frasassi.

The remainder of the paper is organized as follows: after this introduction, which showcases the work, Section 2 is devoted to a critical state-of-the-art analysis of related works in the field of advanced interactive solutions in the CH scenario. The full description of the architecture is reported in Section 3, with a specific focus on the implementation details. Concluding remarks, together with an outlook on future work, are reported in Section 4.

2 Related work

Creating an interactive installation that combines a 3D view of an artifact with its manipulation through a remote controller requires combining state-of-the-art results from several technological areas, besides requiring a multi-disciplinary design team. In this section we thus outline the different kinds of technologies and interactions that, in recent years, have been adopted in the CH panorama, discussing the methods most related to the solution described in this paper. Since we deal with interaction with complex 3D models, it is worth recalling the strong advances witnessed in recent years in the exploitation of virtual reconstruction, especially in the archaeological field. Even if the web offers the research community many possibilities to remotely navigate virtual museums [20], the approach of making museums themselves more interactive and attractive is nowadays growing. The use of autostereoscopic displays, as stated in the introduction, is not common practice; in some cases, however, they are used, exploiting motion parallax, to allow visitors of a science museum to interact with virtual contents [12]. Dealing with 3D models, several pipelines of work have been set up, ranging from data collection, acquisition and processing to visualization in complex exhibitions; the work by Bruno et al. [2] is a good example. Thanks to the introduction of fast and agile tools for the visualization of 3D models (just think of the development of WebGL, OpenGL and other real-time rendering engines) and to the growing availability of tools that simplify models without losing quality, we can consider the issue of high-quality visualization solved. What has become paramount in the CH panorama is how to make these models, reconstructions or virtual scenarios interactive for the users. Several projects and research efforts attempt to solve this issue.
This is the case of the Keys to Rome project [14], whose purpose was to create an interactive journey to discover Roman culture across different venues, making massive use of different technological installations. Other examples in the literature are worth mentioning. In [4], for instance, the proposed solution allows new interactions based on gestures; an important feature is the one-to-one navigational input based on Kinect skeleton tracking. The framework was used to configure a virtual museum art installation in a real museum room where the user can move freely and interact with virtual contents by adding and manipulating 3D models.

In [11], a natural exploration of extremely detailed surface models with indirect touch control in a room-sized workspace, suitable for museums and exhibitions, is shown. Interested readers will find there several requirements to be respected, besides an accurate review of the different kinds of interactions and gestures. Following these requirements, several experiences have been set up in the CH panorama [18] to offer an immersive and attractive experience, allowing users to observe 3D archaeological finds in their original context. Nevertheless, several technical and technological issues are commonly related to the design of virtual museum exhibits [1]. One of the main issues when developing solutions for interaction with CH goods is the design of gestures. In fact, whilst interaction with touch displays has become a de-facto standard worldwide (thanks to the diffusion of smartphones), body gestures depend on the user's cultural background and provenance; hence users have to be trained before they can experience the installation. A good dissertation on this topic can be found in [15]. Another example of a complex architecture developed for user engagement is reported in [9], where the model of the Vitruvian Man was used to engage the user through a gamification approach. The latter case study is also of interest because it included a holographic view of the Vitruvian Man reconstruction. Holograms show virtual objects floating within a box, and there are also examples where the user can interact with the model in a touch-less manner [6]. Even if the feedback from users was surprisingly good, one of the most crucial points to consider when designing such installations in museums is the simplicity of the system and the ease of maintenance of the multimedia product. This can represent a drawback for every kind of installation, regardless of the type of interaction chosen.
Both Virtual and Augmented Reality have become important technologies, more and more used in the CH panorama, since they allow ancient artifact reconstructions to be seen as if they existed from the user's own point of view. With respect to the kind of installation that can be set up inside a museum environment, one important kind of VR experience is the Cave Automatic Virtual Environment (CAVE), a polyhedral projection display technology that allows multiple users to experience fully-immersive 3D scenes; a good example is reported in [8]. Also in VR experiences, gesture interaction is an important aspect to be taken into account, as shown in [5], where the user is enabled to interact with virtual contents. Finally, mobile technologies are probably the ones gaining the most ground in the field of museum navigation, since they can reach the majority of users through the publication of apps; some real exhibitions make use of this kind of CH diffusion, with relevant examples in [13] and [17].

3 System Architecture

The installation is composed of two main hardware components: the DIMENCO DM504MAS, a 50-inch (127 cm) professional QFHD autostereoscopic 3D display, for the visualization of the contents (both 2D and 3D), and the

Microsoft Surface 3 tablet for the touch interaction. An explanatory view of the components, in their physical installation inside the museum, is shown in Figure 2a, while a schematic view of the installation can be found in Figure 2b.

(a) Snapshot of the installation.

[Diagram labels: Surface 3 (touch) — websocket (wired via Ethernet) — autostereoscopic screen + personal computer (via HDMI)]

(b) Schematic representation of the core components of the system.

Fig. 2: The whole installation is composed of a touchpad used as a remote controller for the interaction with the contents shown on the autostereoscopic display. The picture also shows the GUI, designed to switch among 2D/3D contents.

In order to enable the visitor to perform an in-depth analysis of the Venus, a specific application was designed; it presents a simple interface with 8 buttons which bring the user to different sheets and multimedia contents related to the artifact, the history of similar statues around the world, and information about the museum. The main multimedia contents displayed on the autostereoscopic monitor can be summarized as follows:
– HD images: to permit the use of huge imagery for visualization purposes, we used a virtual texturing technique, a combination of classical MIP-mapping and loading only the tiles needed to compose the texture at the user's request. For each zoom level, the set of tiles corresponding to the

observed texture appears (2D + LoD/scale coordinates); in this case the Leaflet library3 was used;
– 3D model: it can be visualized in different modes: classical 3D visualization, anaglyph visualization and stereoscopic view thanks to the autostereoscopic screen;
– Gallery: direct interaction with the photographic documentation of the museum and, in particular, of the Venus.
For the storage of the contents we set up a web server (Apache), installed on both the tablet and the personal computer used to drive the monitor. The communication between the two devices uses websockets4, a computer communications protocol providing full-duplex communication channels over a single TCP connection. This was done for three main reasons:
– communicating content changes from the GUI;
– managing the switching between 2D and 3D mode;
– exchanging gesture interaction data.
The whole architecture was designed with the twofold aim of browsing all the information related to the Venus model and of displaying the model in its third dimension without the use of any wearable device. Its development is non-trivial and deserves a detailed explanation; the general schema can be found in Figure 3. The software has been developed to be modular and efficient, optimizing the use of resources such as the CPU and GPU. For the visualization of the model in stereoscopic mode, the screen splits the visualization into two separate areas to produce the parallax: the left half of the screen contains the textured model, while the right half contains the depth map. Of course, the two contents have to be synchronized for proper visualization; hence the PC has to generate the render of the model and the depth map within the same canvas. Moreover, a header has to be transmitted to match the two streams. No existing autostereoscopic monitor is able to manage the information as described above out of the box.
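To make the channel separation concrete, the multi-socket scheme of Figure 3 can be sketched as a simple port-based dispatcher. The port numbers and their roles follow Figure 3; the channel names, message shapes and handlers below are illustrative assumptions, not the actual implementation.

```javascript
// Sketch of the websocket channels of Figure 3, one port per concern.
// Port numbers come from the paper; message shapes are illustrative.
const CHANNELS = {
  8181: 'content', // page switching and 2D/3D mode changes from the GUI
  8182: 'gesture', // pan, zoom and rotate events from the touch-pad
  7681: 'frame',   // rasterized 2D+depth frames for the 3D visualiser
};

// Dispatch an incoming message to the handler bound to the port's channel.
function route(port, message, handlers) {
  const channel = CHANNELS[port];
  if (!channel) throw new Error('unknown websocket port: ' + port);
  return handlers[channel](message);
}

// Hypothetical handlers; the real system forwards these to the browser
// (8181, 8182) or to the Dimenco encoder process (7681).
const handlers = {
  content: (m) => 'switch to ' + m.page + ' in ' + m.mode + ' mode',
  gesture: (m) => m.type + ' by ' + m.amount,
  frame:   (m) => 'frame of ' + m.base64.length + ' base64 chars',
};
```

Keeping one socket per concern lets the high-frequency gesture stream flow without blocking the comparatively rare content switches.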
The Dimenco autostereoscopic display requires the 2D-plus-Depth format, which contains the following data:
– a header;
– a 2D sub-image with a resolution of 960x540;
– a depth sub-image with a resolution of 960x540, a grey-scale picture whose depth values belong to the 2D sub-image.
Figure 4(c) shows an example of the 3D frame layout of the 2D-plus-Depth format.
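A minimal sketch of the side-by-side layout described above follows: it only maps sub-image pixels to their position in the combined canvas, with the 2D sub-image on the left half and the depth sub-image on the right; the header layout is deliberately omitted, and only the 960x540 dimensions come from the format description.

```javascript
// Map a pixel of a 2D-plus-Depth frame: the 960x540 2D sub-image occupies
// the left half of the combined canvas, the 960x540 depth sub-image the
// right half. Header pixels (top-left) are not modelled here.
const SUB_W = 960;
const SUB_H = 540;

function toCanvas(x, y, subImage) {
  if (x < 0 || x >= SUB_W || y < 0 || y >= SUB_H) {
    throw new RangeError('pixel outside the 960x540 sub-image');
  }
  const offsetX = subImage === 'depth' ? SUB_W : 0; // depth map goes right
  return { x: x + offsetX, y: y };
}
```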

3 http://leafletjs.com/ - last access March 2017
4 https://tools.ietf.org/html/rfc6455 - RFC 6455, The WebSocket Protocol - last access March 2017

[Figure 3 diagram: the Surface tablet communicates with the PC (Apache) through websocket servers on ports 8181 and 8182, carrying content-selection commands for the main web app and zoom/camera position and orientation data for the custom shader. The main web app (3D anaglyph, 3D autostereoscopic, HD gallery, panorama in Chrome+WebGL) sends its rendered output over websocket 7681 to the 3D model visualiser, which interleaves it with the header into the Dimenco format (DirectX) for the 2D/3D monitor; a switcher enables the switch from Chrome to the software that encodes 3D content for Dimenco.]

Fig. 3: Overall software architecture. Thanks to the use of the websocket protocol it was possible to convey the contents according to the following schema: 8181 - to manage the switching of pages in the browser and the switch into autostereoscopic mode; 8182 - to manage touch gestures such as pan, zoom and rotate in the browser; 7681 - to manage the rasterization of the web page by the 3D visualiser.

Basically, the idea is to use WebGL as a render engine that broadcasts the 2D sub-image + depth image to a dedicated software component over a websocket. This software exploits the Dimenco PlayerAPI, based on Microsoft DirectX 9. The processing pipeline is the following:
– configure the monitor;
– instantiate a websocket server;
– when the socket client connects to the server, decode the incoming base64 image;
– encode the image into the Dimenco frame format;
– present the encoded image in a full-screen window.
On the client side, the 2D+depth image is generated by a WebGL page rendered in Chrome, using three.js to create a dual-scene view: the left scene is dedicated to the 2D image, the right one to the depth. The depth image is created by our custom vertex and fragment shaders, as follows.

// Vertex shader: computes the clip-space position and passes it on,
// so the fragment shader can recover depth after the perspective divide.
#ifdef GL_ES
precision highp float; // switch on high precision floats
#endif
varying vec4 vpos;
void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  vpos = gl_Position;
}

// Fragment shader: remaps the normalized depth to a grey value,
// with nearer fragments rendered brighter.
#ifdef GL_ES
precision highp float;
#endif
varying vec4 vpos;
float zmin = 0.97;
float zmax = 1.0;
void main() {
  vec4 v = vec4(vpos);
  v /= v.w;                                        // perspective divide
  float gray = 1.0 - (v.z - zmin) / (zmax - zmin); // near -> 1, far -> 0
  gl_FragColor = vec4(gray);
}

The left and right cameras are synchronized and share the same coordinates (x, y, z) and orientation (roll, pitch, yaw). When the rendering of the frame is completed, a base64 image of the scene is generated and sent over the websocket to the software responsible for the conversion to the Dimenco format. Zooming is limited to guarantee a proper level of usability. The position and orientation of the synced cameras are transmitted by the Surface tablet over the websocket; this is fundamental to reflect the user's desired point of view in the final render. Figure 4 shows an example of rendering on the tablet and server side: Figure 4(a) shows the desired view (tablet side), while (b) represents the rendered 2D + depth images with the cameras synced to the tablet's; Figure 4(c) shows the Dimenco-encoded image and (d) a detailed area where it is possible to see the header (top-left blue pixels) and the row-interleaved content. The measured delay between the transmission of a position from the tablet and the rendering on the Dimenco screen is 60 ms on average. The user also has the capability to select an anaglyph view of the 3D model; the anaglyph image is generated through the AnaglyphEffect.js library of three.js. High-definition images of the Venus can also be explored using a tiled pyramidal representation: tiles are generated with GDAL5 and visualized with the Leaflet js library. Each high-resolution image can be considered as a tiled map service (TMS) layer, and the user is able to pan and zoom the image to focus on a particular region of interest.
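As a sanity check, the depth encoding performed by the fragment shader above can be mirrored in plain JavaScript. This is only an illustrative port of the shader arithmetic (perspective divide plus linear remapping), with an explicit clamp standing in for the GPU's output saturation.

```javascript
// JS mirror of the fragment shader's depth-to-grey mapping: after the
// perspective divide, clip-space depth in [zmin, zmax] is remapped so that
// nearer fragments come out brighter. Constants match the shader.
const zmin = 0.97;
const zmax = 1.0;

function depthToGray(vz, vw) {
  const z = vz / vw;                             // perspective divide (v /= v.w)
  const gray = 1.0 - (z - zmin) / (zmax - zmin); // near -> 1, far -> 0
  return Math.min(1.0, Math.max(0.0, gray));     // clamp like the GPU output stage
}
```

The narrow [0.97, 1.0] window reflects how non-linear clip-space depth is: most of the grey-level range is spent on the sliver of depth actually occupied by the model.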
Panoramic images can also be viewed. In this case the tablet sends the coordinates to the server, which properly changes the camera parameters to match the user's preference.
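The camera state exchanged between the tablet and the renderer can be sketched as below. Only the transmitted quantities (position, orientation, and a zoom limited for usability) come from the text; the field names and the numeric zoom limits are hypothetical.

```javascript
// Illustrative sketch of the camera-state message the tablet sends over the
// websocket: position (x, y, z), orientation (roll, pitch, yaw) and a zoom
// factor clamped for usability. The limits are assumptions, not from the paper.
const ZOOM_MIN = 0.5; // hypothetical lower bound
const ZOOM_MAX = 4.0; // hypothetical upper bound

function cameraMessage(position, orientation, zoom) {
  const clamped = Math.min(ZOOM_MAX, Math.max(ZOOM_MIN, zoom));
  return JSON.stringify({ position, orientation, zoom: clamped });
}
```

Clamping on the sending side keeps the renderer's two synced cameras within a range where the 2D-plus-Depth output stays legible.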

5 http://www.gdal.org/ - last access March 2017

Fig. 4: Different renders of the Venus on the tablet and by the 2D+depth image processors.

4 Conclusion and Outlook

In this paper we presented a novel interactive installation to enhance the visitor's perception of the complex 3D model of a small artifact. The use of a touch-pad for remote interaction with high-quality contents on a stereoscopic display proved to be a valuable solution to increase the attractiveness of the museum. The advantage is mainly that the interaction is simple and the user does not need to be trained before starting the experience; it is well known that gestures are generally difficult to understand in more complex installations, where their intuitiveness needs more investigation. Moreover, the visualization of the 3D model on an autostereoscopic display can represent a turning point with respect to similar visualizations where the user is required to wear cumbersome and uncomfortable headsets. The software solution developed for this case study is a step forward for the autostereoscopic display, since the contents to be shown can be personalized and incorporated within a wider GUI. The platform is object-independent, so other 3D objects can easily be integrated. As future work, we plan to extend the already developed gestures towards touchless interaction; to achieve this, we plan to use the Leap Motion6 sensor, making the interaction experience more faithful and attractive.

Acknowledgments. Thanks to the Archaeological Museum of Marche and its director Nicoletta Frapiccini, and to the Polo Museale Marche and its director Peter Aufreiter. The digital acquisition and the implementation of the three-dimensional model were carried out by the DiStoRi Heritage research team of the Department of Civil Engineering, Construction and Architecture of the Polytechnic University of Marche. The technological and interactive exhibition is provided by EVE-Enjoy Visual Experiences, a spin-off of the Polytechnic University of Marche. Scientific manager: Prof. Paolo Clini. Digital documentation: Ludovico Ruggeri, Romina Nespeca, Gianni Plescia. Technological equipment: Adriano Mancini, Carlo Alberto Bozzi. Texts and archaeological consulting: Dr. Gaia Pignocchi.

References

1. Barbieri, L., Bruno, F., Mollo, F., Muzzupappa, M.: User-centered design of a virtual museum system: a case study, pp. 155–165. Springer International Publishing, Cham (2017), http://dx.doi.org/10.1007/978-3-319-45781-9_17
2. Bruno, F., Bruno, S., De Sensi, G., Luchi, M.L., Mancuso, S., Muzzupappa, M.: From 3D reconstruction to virtual reality: A complete methodology for digital archaeological exhibition. Journal of Cultural Heritage 11(1), 42–49 (2010)
3. Clini, P., Frontoni, E., Quattrini, R., Pierdicca, R.: Augmented reality experience: From high-resolution acquisition to real time augmented contents. Advances in Multimedia 2014, 18 (2014)

6 https://www.leapmotion.com/ - last access March 2017

4. Dias, P., Pinto, J., Eliseu, S., Sousa Santos, B.: Gesture interactions for virtual immersive environments: Navigation, selection and manipulation. Lecture Notes in Computer Science 9740, 211–221 (2016)
5. Dias, P., Pinto, J., Eliseu, S., Santos, B.S.: Gesture interactions for virtual immersive environments: Navigation, selection and manipulation. In: International Conference on Virtual, Augmented and Mixed Reality. pp. 211–221. Springer (2016)
6. Dingli, A., Mifsud, N.: Using holograms to increase interaction in museums. Advances in Intelligent Systems and Computing 483, 117–127 (2017)
7. Dodgson, N.A.: Autostereoscopic 3D displays. Computer 38(8), 31–36 (2005)
8. He, Y., Zhang, Z., Nan, X., Zhang, N., Guo, F., Rosales, E., Guan, L.: vConnect: perceive and interact with real world from CAVE. Multimedia Tools and Applications 76(1), 1479–1508 (2017)
9. Malinverni, E.S., d'Annibale, E., Frontoni, E., Mancini, A., Bozzi, C.A.: Multimedia discovery of the Leonardo's Vitruvian Man. SCIRES-IT - SCIentific RESearch and Information Technology 5(1), 69–76 (2015)
10. Manferdini, A.M., Gasperoni, S., Guidi, F., Marchesi, M.: Unveiling damnatio memoriae. The use of 3D digital technologies for the virtual reconstruction of archaeological finds and artefacts. Virtual Archaeology Review 7(15), 9–17 (2016)
11. Marton, F., Rodriguez, M., Bettio, F., Agus, M., Villanueva, A., Gobbetti, E.: IsoCam: Interactive visual exploration of massive cultural heritage models on large projection setups. Journal on Computing and Cultural Heritage 7(2) (2014)
12. Mizuno, S., Tsukada, M., Uehara, Y.: Developing a stereoscopic CG system with motion parallax and interactive digital contents on the system for science museums. Multimedia Tools and Applications 76(2), 2515–2533 (2017)
13. Ozden, K., Unay, D., Inan, H., Kaba, B., Ergun, O.: Intelligent interactive applications for museum visits. Lecture Notes in Computer Science 8740, 555–563 (2014)
14. Pescarin, S.: Museums and virtual museums in Europe: reaching expectations. SCIRES-IT - SCIentific RESearch and Information Technology 4(1), 131–140 (2014)
15. Pescarin, S., Pietroni, E., Rescic, L., Omar, K., Wallergard, M., Rufa, C.: NICH: a preliminary theoretical study on natural interaction applied to cultural heritage contexts. In: Digital Heritage International Congress (DigitalHeritage), 2013. vol. 1, pp. 355–362. IEEE (2013)
16. Pierdicca, R., Frontoni, E., Malinverni, E.S., Colosi, F., Orazi, R.: Virtual reconstruction of archaeological heritage using a combination of photogrammetric techniques: Huaca Arco Iris, Chan Chan, Peru. Digital Applications in Archaeology and Cultural Heritage 3(3), 80–90 (2016)

17. Pierdicca, R., Frontoni, E., Zingaretti, P., Sturari, M., Clini, P., Quattrini, R.: Advanced interaction with paintings by augmented reality and high resolution visualization: a real case exhibition. In: International Conference on Augmented and Virtual Reality. pp. 38–50. Springer (2015)
18. Pietroni, E., Pagano, A., Rufa, C.: The Etruscanning project: gesture-based interaction and user experience in the virtual reconstruction of the Regolini-Galassi tomb. In: Digital Heritage International Congress (DigitalHeritage), 2013. vol. 2, pp. 653–660. IEEE (2013)
19. Sooai, A.G., Sumpeno, S., Purnomo, M.H.: User perception on 3D stereoscopic cultural heritage ancient collection. In: Proceedings of the 2nd International Conference in HCI and UX on Indonesia 2016. pp. 112–119. ACM (2016)
20. Sundar, S., Go, E., Kim, H.S., Zhang, B.: Communicating art, virtually! Psychological effects of technological affordances in a virtual museum. International Journal of Human-Computer Interaction 31(6), 385–401 (2015)
