Designing a Multiplatform Pipeline for 3D Scenes

Alun Evans, Javi Agenjo, Josep Blat
Grup de Tecnologies Interactives (GTI), Universitat Pompeu Fabra (UPF)
Tanger 122, Barcelona 08018
{alun.evans, javi.agenjo, josep.blat}@upf.edu
Abstract — Many digital production companies (whether for games, animated productions or digital cinema) now control and review their assets via web-based tools, due to the fast-moving ecosystem and rapid prototyping of web applications. The ability to access such tools via mobile interfaces is increasing in importance, particularly with the rise in power of mobile hardware. Yet, in the field of real-time 3D graphics, progress is hampered by both the lack of cross-platform software compatibility (particularly the lack of mobile browser WebGL support), and by lower hardware performance for certain platforms, especially mobile. In this paper, we analyze the barriers for developing cross-platform 3D graphics technology and, based on this analysis, present a suitable cross-platform pipeline that allows the viewing and editing of real-time 3D scenes in a collaborative environment. We compare a mobile implementation of this pipeline with a WebGL-based engine for desktop use, and present and discuss results on the differences between these two implementations. We further demonstrate a system for collaborative 3D scene creation using our pipeline with a custom plugin for the modeling software Autodesk Maya.
Keywords - 3D, graphics, WebGL, Editor, Pipeline, Modelling, Collaborative

I. INTRODUCTION

The concept of ‘convergence’, in which different technologies and hardware come together to work in harmony, is a long-sought goal in multimedia production. This is true both in the professional sector, where production studios create custom pipelines to merge the output of audiovisual tools (e.g. for colour correction, special effects, etc.), and in more research-oriented fields, such as large collaborative R&D projects like X. Despite being hampered by hardware and software incompatibility (perhaps created purposefully due to commercial interests), this work has made clear steps towards a convergent collaborative workflow, especially in the field of digital cinema production. Yet, in fields where 3D graphics is the dominant modality (for example, videogame production and, increasingly, web browser-based 3D), there has been little effort to create an open, cross-platform solution for creating, editing, and reviewing 3D assets (such as meshes, textures, and effects). The use case for researching such technology is clear, as modern 3D production typically spans several locations and even continents. For example, for a production company based in Europe, 3D scenes, models and animations might be created in South America, yet reviewed in several locations worldwide, before being edited together in the main production facility. This process is currently conducted by email and remote storage, with little in the way of versioning, reporting, or annotation. And while annotation components exist in some modelling packages (such as Autodesk Maya), such details do not transfer across platforms or even between software packages.

In this paper, we present our attempts to address this issue by presenting an initial design for a cross-platform pipeline for 3D graphics and collaborative scene creation, and a test implementation of that design. The implementation features a browser-based 3D engine/editor, a tablet-based viewer, a standalone desktop viewer, a modeling-package plugin, and a server backend to tie the pipeline together. The implementation allows creation, viewing and editing of 3D scenes on multiple platforms, which is particularly useful in the field of collaborative workflow, where we demonstrate how our pipeline is able to render and review newly created models.

The paper is organized as follows. Section II details relevant related work. Section III describes our cross-platform pipeline, along with implementation details for each component. Finally, Section IV presents technical results and conclusions.

This work has been partially funded by the IMPART FP7 European Commission project (http://impart.upf.edu) and by the Spanish Ministry of Science and Innovation (TIN2011-28308-C03-03).
II. RELATED WORK
Although collaborative workflow in 3D environments has existed for over 25 years, particularly in the field of Computer-Aided Design [1]–[3], the concept of collaborative, multiplatform scene creation in 3D environments is less well developed, possibly due to the lack of a standard desktop graphics pipeline that lends itself to easy collaboration, and to low network bandwidth that restricts the distribution of large files (such as meshes and textures). Nevertheless, there have been some large-scale efforts towards multiplatform graphics pipelines, such as that created by the Games@Large platform [4], and a web-based viewer for the ParaView system [5]. In terms of collaborative workspaces, Nam and Sakong [6] present a collaborative 3D workspace for synchronous distributed product design reviews, using augmented reality to present a 3D model to different collaborators. Marion and Jomier [7] use WebGL technology to create a browser-based renderer which allows remote, real-time visualization of scientific data. Their system explores the possibility of using
the web browser as the standard piece of software that makes collaborative 3D visualization possible. This concept of using WebGL for cross-platform rendering is being explored by various projects, such as ThreeJS.org. It is worth noting, however, that WebGL is currently not supported by all mobile browsers. In the mobile domain, although there has been some work on annotation of video on tablet computers [8], and in the e-learning field [9], there has been very little previous work on collaborative 3D. Despite the prevalence of 3D games with high-quality graphics effects [10], the ability to view scenes cross-platform, and cross-user, has seen very little research.

III. 3D SCENE PIPELINE

In this paper we present a platform-independent 3D Scene Pipeline designed to aid collaborative scene creation, and specifically to be compatible with mobile environments. The pipeline is designed such that the same 3D scene can be rendered across different platforms, with the rendering engine on each platform drawing an identical visual reproduction of the scene. The purpose of this platform independence is to enable users on different platforms, whether desktop, laptop, tablet or mobile phone, to view, create, comment on, and interact with a 3D scene.

The pipeline can be organised into three distinct layers (see Figure 1). The first is an Input layer, consisting of plugins for modelling packages (such as Autodesk Maya or 3D Studio Max) which allow the uploading of assets (such as 3D meshes) to the pipeline. This is combined with a Scene Editor application which allows intuitive user control of scene elements (positioning of assets, cameras and lights, and control over material properties for advanced rendering effects). The second layer is a Data layer, where the 3D assets lie alongside an abstracted Scene Description file and the shader code used to render it (see below). The final layer is the Presentation layer, which consists of several output rendering engines capable of displaying scene content. This paper describes each component of the pipeline, paying special attention to the performance of the mobile renderer and to those elements that facilitate collaborative scene creation.

The entire pipeline is developed in an independent, modular fashion. A Scene Graph specifies each object in a scene as a Node, and each Node can have a series of Components which define its properties. Components can range from a simple MeshRenderer (which specifies a 3D mesh to be drawn to screen for that Node) to more complex graphical effects such as particle systems. Client editors/viewers (see below) only take care of the interface and rendering, whereas the overall pipeline is based on the interaction between different modules. We define six core classes of components that can be added to a Node in the scene:

- Transform: specifies the position, rotation and scale of the Node
- Material: specifies information about the visual appearance of an object (colour, texture files, lighting parameters, etc.)
- MeshRenderer: associates a 3D mesh with the Node, to be drawn to screen (not all Nodes have an associated mesh by default, e.g. Light Nodes)
- Light: specifies that this Node should emit light according to defined parameters (intensity, angle, etc.)
- Camera: specifies that this Node is a camera through which it is possible to view the scene
- Annotation: the Node is an annotation which a user has added to the scene, consisting of a line in 3D space which links the subject to a text variable
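The Node/Component structure above can be sketched as follows; this is a minimal illustration, and all class, method and field names are our own assumptions, not the pipeline's published API:

```typescript
// Minimal scene-graph sketch: every object is a Node, and behaviour is
// attached via Components (Transform, MeshRenderer, ...).
// All names here are illustrative, not the pipeline's actual API.

interface Component {
  readonly type: string;
}

class Transform implements Component {
  readonly type = "Transform";
  constructor(
    public position: [number, number, number] = [0, 0, 0],
    public rotation: [number, number, number] = [0, 0, 0],
    public scale: [number, number, number] = [1, 1, 1],
  ) {}
}

class MeshRenderer implements Component {
  readonly type = "MeshRenderer";
  constructor(public meshUrl: string) {}
}

class SceneNode {
  children: SceneNode[] = [];
  private components = new Map<string, Component>();

  constructor(public name: string) {}

  addComponent(c: Component): this {
    this.components.set(c.type, c);
    return this;
  }

  getComponent<T extends Component>(type: string): T | undefined {
    return this.components.get(type) as T | undefined;
  }

  addChild(child: SceneNode): this {
    this.children.push(child);
    return this;
  }
}

// Usage: a root node with one renderable child.
const root = new SceneNode("root").addComponent(new Transform());
const teapot = new SceneNode("teapot")
  .addComponent(new Transform([1, 0, 0]))
  .addComponent(new MeshRenderer("meshes/teapot.obj"));
root.addChild(teapot);
```

Keeping components in a map keyed by their type string keeps lookup simple and lets a renderer ignore component types it does not recognise.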
Figure 1. Overview of Pipeline
These components can be considered mandatory, in the sense that every target platform (e.g. tablet or web browser) must be capable of parsing their descriptions and correctly implementing them. However, an advantage of creating such a modular pipeline is that it is very straightforward for different platforms to declare support (or not) for more advanced or experimental components. For example, one platform may support a component capable of producing particles (used for effects such as smoke or fire) yet this component may not be supported by other platforms. The current version of the pipeline supports four target platforms (which are each described in detail below): desktop/browser (WebGL), desktop/Microsoft-Windows, mobile/iOS and modeling plugin (Autodesk Maya). Each has been developed independently with reference to the core pipeline, and thus supports the six core components. Each is able to explicitly declare support for different components as they are developed, and according to priorities. The engines developed for the four supported platforms are discussed in Sections III.A (WebGL, browser-based engine), III.B (Plugin for Autodesk Maya), III.C (iOS engine) and III.D (Windows Executable).
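The opt-in support for advanced components described above can be negotiated as simply as filtering a Node's component list against a per-platform capability set. A sketch under assumed names (the paper does not publish this API):

```typescript
// Sketch of per-platform component support: each target engine declares
// which component types it implements; unknown or experimental components
// are skipped rather than failing the whole scene load.
// All names are illustrative assumptions.

const CORE_COMPONENTS = [
  "Transform", "Material", "MeshRenderer", "Light", "Camera", "Annotation",
] as const;

interface ComponentDesc {
  type: string;
  // component parameters as parsed from the scene description
  [key: string]: unknown;
}

class EngineCapabilities {
  private supported: Set<string>;

  constructor(extraComponents: string[] = []) {
    // Every engine must support the six core components;
    // advanced ones (e.g. a particle system) are opt-in per platform.
    this.supported = new Set<string>([...CORE_COMPONENTS, ...extraComponents]);
  }

  supports(type: string): boolean {
    return this.supported.has(type);
  }

  // Keep only the components this platform can actually instantiate.
  filter(components: ComponentDesc[]): ComponentDesc[] {
    return components.filter((c) => this.supports(c.type));
  }
}

// A desktop engine with particles vs. a minimal mobile engine.
const desktop = new EngineCapabilities(["ParticleSystem"]);
const mobile = new EngineCapabilities();

const nodeComponents: ComponentDesc[] = [
  { type: "Transform" },
  { type: "MeshRenderer" },
  { type: "ParticleSystem" },
];
```

Under this scheme, a scene authored with particle effects still loads on a platform without particle support; the unsupported component is simply dropped.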
Communication amongst platforms is specified according to a JSON based data structure. The JSON structure specifies the overall structure of the scene (both in terms of appearance and relevant physical data such as the URL of assets); the structure of the scene graph Nodes, including hierarchical information; and the Components associated with each Node (e.g. Material for rendering, current Transform, MeshRenderer, Annotation, etc.)
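As an illustration, a scene description in this spirit might look like the following; the field names are our illustrative assumptions, since the exact schema is not reproduced here:

```typescript
// Illustrative JSON scene description matching the structure described
// above: global scene data, asset URLs, and a Node hierarchy where each
// Node carries its Components. Field names are assumptions.

interface NodeJSON {
  name: string;
  components: { type: string; [param: string]: unknown }[];
  children?: NodeJSON[];
}

interface SceneJSON {
  name: string;
  assets: { [id: string]: string }; // asset id -> URL on the server
  root: NodeJSON;
}

const scene: SceneJSON = {
  name: "review-scene-01",
  assets: {
    teapotMesh: "http://example.org/assets/teapot.mesh",
    teapotTex: "http://example.org/assets/teapot.png",
  },
  root: {
    name: "root",
    components: [{ type: "Transform", position: [0, 0, 0] }],
    children: [
      {
        name: "teapot",
        components: [
          { type: "Transform", position: [0, 1, 0] },
          { type: "MeshRenderer", mesh: "teapotMesh" },
          { type: "Material", color: [1, 1, 1], texture: "teapotTex" },
        ],
      },
      {
        name: "keyLight",
        components: [
          { type: "Transform", position: [5, 5, 5] },
          { type: "Light", intensity: 1.5 },
        ],
      },
    ],
  },
};

// Count nodes by walking the hierarchy (depth-first).
function countNodes(n: NodeJSON): number {
  return 1 + (n.children ?? []).reduce((sum, c) => sum + countNodes(c), 0);
}
```

Because the description is plain JSON, any client platform can serialize, transmit, and re-parse it without shared binary formats.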
Figure 2. Overview of Pipeline

A. Web-based Editor
Using WebGL technology and our own 3D engine, we have created a browser-based platform to edit a 3D scene directly on the web. The tool makes it straightforward to build, edit, and deploy a scene in a working website.

The WebGL render engine (which is wrapped into an independent module) reads the scene graph and renders a frame according to the information stored in the tree. We use a forward rendering solution instead of a deferred solution, due to the lack of support in WebGL for the attachment of multiple frame buffer objects. The renderer supports any number of lights through multipass rendering (although performance naturally suffers with too many lights).

Figure 3. Browser-based Editor

The most visually dominant aspect of the application is the main Scene Editor, which allows transformation and editing of imported assets. Assets can be inserted into the scene, positioned, and scaled in real time using tools and a free-roaming camera which will be familiar to any user of existing 3D software such as Autodesk Maya or 3D Studio Max (see Figure 3). The editor instantiates the engine and dynamically creates editable fields and tools (also available through dynamic menus), based on the selected component in the scene tree. These fields allow direct, real-time changing of the parameters of the 3D scene, thus enabling instant scene configuration without the need to edit code and reload the engine.

B. Modeling Plugin
The web-based editor allows the positioning of objects within a scene (including lights and cameras) and allows the user to rapidly apply textures and different real-time rendering effects. It does not, however, allow the user to directly edit or model a mesh, or change texture UV coordinates, as this work is better carried out in a dedicated modeling package (such as Autodesk Maya or 3D Studio Max, or an open source alternative such as Blender). Such packages provide an entire suite of modeling and texturing tools, and it is beyond the scope and aims of our current work to attempt to match such functionality.
Yet the ability for a modeler to create a 3D object/scene with a dedicated package, and then share that scene with the other clients of the pipeline, is a key aspect of our goal of creating a collaborative pipeline. A modeller creates a 3D mesh, optionally adding texture information and scene information (lights, camera). Then, using a simple drag-and-drop interface which links to the Server Component (see Section III.E below), the created mesh/scene is packaged into a JSON description and uploaded to the server. Once there, the scene can be reviewed either via the web editor or the mobile viewer. Functionalities such as Mesh Painting and Annotation allow a reviewer to draw attention to different areas of the scene and comment on them directly. These modifications and comments are then stored in the scene description, which can be sent back to the modeller.

We have implemented a plugin for our pipeline for Autodesk Maya. Note that Maya (as well as 3D Studio Max and Blender) has several built-in rendering engines based on its own scene graph, the Hypergraph. Our plugin for Maya is capable of importing a scene defined according to our pipeline, and uses the rendering engines supported by Maya for display.

C. Mobile Viewer and Annotation
The WebGL engine and editor presented in Section III.A are suitable for use with several browsers and, thanks to multiplatform browser support, can be used on various desktop or laptop operating systems. Yet WebGL is currently not widely supported on mobile platforms, specifically iOS. This means that for our pipeline to be implemented on mobile hardware, a custom 3D engine is required. A brief glance at the visual component of many mobile videogames shows that it is possible to achieve high-quality graphics and performance, and this view is supported by industry analysis [11].
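The shader sharing discussed below relies on an ‘uber-shader’: a single source file whose optional fragments are guarded by preprocessor macros, specialised per platform or material by prepending #define lines before compilation. A minimal sketch of this composition as pure string handling (the GLSL source and function names are illustrative, not the pipeline's actual shader code):

```typescript
// Sketch of the uber-shader approach: one shader source with
// macro-guarded fragments, specialised by prepending #define lines
// before compilation. Illustrative only.

const UBER_FRAGMENT = `precision mediump float;
varying vec2 vUv;
uniform vec4 uColor;
#ifdef USE_TEXTURE
uniform sampler2D uTexture;
#endif
void main() {
  vec4 color = uColor;
#ifdef USE_TEXTURE
  color *= texture2D(uTexture, vUv);
#endif
  gl_FragColor = color;
}`;

// Specialise the shared source for one platform/material configuration
// by emitting a #define header ahead of the stored code.
function specialise(source: string, defines: string[]): string {
  const header = defines.map((d) => `#define ${d}`).join("\n");
  return header.length ? header + "\n" + source : source;
}

// The same centrally stored source yields different compiled variants:
const texturedVariant = specialise(UBER_FRAGMENT, ["USE_TEXTURE"]);
const plainVariant = specialise(UBER_FRAGMENT, []);
```

Since only the #define header differs per client, the shader body itself can live at one URL and be downloaded unchanged by every platform.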
Using the design criteria suggested by both [11] and [12] as a guide, and in order to test the concepts of cross-platform 3D workflow in the mobile domain, we implemented a 3D engine for iOS in native C++ and Objective-C. The 3D engine fits the modular pipeline that we define in this paper; in order to do so, and to have more precise control over lighting and texturing effects, the engine is implemented in OpenGL ES 2.0 as opposed to the fixed pipeline of OpenGL ES 1.1. OpenGL ES 2.0 allows us to reuse shader code between mobile and desktop platforms, and we do this using the ‘uber-shader’ approach [13], where a shader is compiled from several fragments guarded by macros. This not only allows the use of the same shader code to control effects across platforms, but also allows us to control the implementation of platform-specific features. A further advantage of using the same shader code is that it can be centrally stored at a known URL and downloaded to any client platform at will. This means that if the code needs to be updated (e.g. to provide improved performance or quality), it is only necessary to update the central repository.

D. Desktop Viewer
We also created a standalone Windows executable capable of rendering a scene defined by our pipeline. While the application does not have the control or interaction capabilities of the browser-based WebGL version, there are two scenarios where such an application is useful: offline rendering (where no internet connection is available); and the saving of rendered images/videos to disk, using OpenGL functions which are not available in WebGL.

E. Server Component
To tie together each of the four components, a server application is required to store the scene description (the JSON described above) and the assets (meshes, textures) required for each scene. Our server application is written in PHP and enables uploading/downloading of scene descriptions and assets, querying of server contents, and simple user authentication (with permission control, allowing the creation of different levels of users).

IV. INITIAL RESULTS AND CONCLUSIONS

In this paper we have defined a pipeline suitable for cross-platform presentation of 3D scenes. Our future work is now focused on a more formal evaluation of the work. We intend to test the pipeline with students studying 3D modeling and animation, allowing them to upload their coursework to a central server, where the teacher can review/mark their work either from a browser-based editor or from a mobile application. We plan to conduct a further evaluation with an audiovisual production company, to allow Technical Directors to remotely review the modeling/texturing work of others. These results should enable us to design and implement a more complete cross-platform system for model creation, annotation and review.
Figure 4. Review and Annotation of scene on a tablet. (Photo shown to demonstrate tablet capabilities alongside WebGL based editor)
REFERENCES
[1] T. Nam and D. Wright, “CollIDE: A shared 3D workspace for CAD,” 1998 Conf. Netw. Entities, Leeds, …, 1998.
[2] T. Nam and D. Wright, “The development and evaluation of Syco3D: a real-time collaborative 3D CAD system,” Des. Stud., 2001.
[3] A. Pang and C. Wittenbrink, “Collaborative 3D visualization with CSpray,” IEEE Comput. Graph. Appl., 1997.
[4] A. Jurgelionis, P. Fechteler, P. Eisert, F. Bellotti, H. David, J. P. Laulajainen, R. Carmichael, V. Poulopoulos, A. Laikari, P. Perälä, A. De Gloria, and C. Bouras, “Platform for Distributed 3D Gaming,” Int. J. Comput. Games Technol., vol. 2009, pp. 1–15, Jan. 2009.
[5] S. Jourdain, U. Ayachit, and B. Geveci, “ParaViewWeb, a web framework for 3D visualization and data processing,” in IADIS International Conference on Web Virtual Reality and Three-Dimensional Worlds, 2010, vol. 7.
[6] T. Nam and K. Sakong, “Collaborative 3D workspace and interaction techniques for synchronous distributed product design reviews,” Int. J. Des., 2009.
[7] C. Marion and J. Jomier, “Real-time collaborative scientific WebGL visualization with WebSocket,” in Proc. 17th Int. Conf. 3D Web Technol., 2012, pp. 47–50.
[8] J. Silva, D. Cabral, C. Fernandes, and N. Correia, “Real-time annotation of video objects on tablet computers,” in Proc. 11th Int. Conf. Mob. Ubiquitous Multimed., 2012.
[9] K. Mock, “Teaching with Tablet PC’s,” J. Comput. Sci. Coll., vol. 20, no. 2, pp. 17–27, Dec. 2004.
[10] N. Smedberg, “Bringing AAA graphics to mobile platforms,” presentation at the Game Developers Conference (GDC), 2012.
[11] M. Ribble, “Next-Gen Tile-Based GPUs,” presentation at the Game Developers Conference Mobile (GDC Mobile), 2008.
[12] A. Evans, J. Agenjo, and J. Blat, “Variable Penumbra Soft Shadows for Mobile Devices,” in 9th International Conference on Computer Graphics Theory and Applications (GRAPP), 2014.
[13] W. Engel, ShaderX3: Advanced Rendering with DirectX and OpenGL (ShaderX Series). Charles River Media, 2004, p. 630.